FYI, posting to the board list this time. I mistakenly sent my previous emails to the wrong alias.
I am raising this issue because we are at a crossroads with regard to deciding how working groups will operate. The decisions we make will establish a precedent. Since having working groups is a new mechanism for the board, I would rather err on the side of caution: I would rather we have weak working groups with little authority to move forward than working groups with broad authority that can act more independently of the board. I believe the board should authorize any activity that requires the investment of resources from multiple organizations, or that will result in the production of a public work product. We may decide to change this in the future, but IMHO we should do so only once we better understand how WGs will work.
As for Harold’s question, I think two more things need to be done:
1) Any working group activity that requires the investment of implementation resources from multiple organizations, or that will result in the production of a public work product, should be proposed to the board list with a quick summary of the project, who will be participating, and what the implications will be for the CVE community.
2) The board should discuss the project on the list and address any concerns raised before moving forward.
I don’t think we need to vote. A process that focuses on establishing rough consensus is enough in my view. This should help ensure that a larger set of stakeholders is consulted before moving forward.
Thoughts?
Regards,
Dave
From: Booth, Harold (Fed)
Sent: Friday, April 21, 2017 6:00 PM
To: Landfield, Kent <Kent_Landfield@McAfee.com>; Kurt Seifried <kurt@seifried.org>
Cc: Waltermire, David A. (Fed) <david.waltermire@nist.gov>; cve-board-auto-list <cve-board-auto-list@lists.mitre.org>; owner-cve-editorial-board-list@lists.mitre.org
Subject: RE: Notes from April 17 meeting
I am a little uncertain then about what more is being asked for here, since:
a) A summary of the meeting was posted to the CVE Auto-WG list, which allows everyone on the WG list to participate
b) During the CVE Board meeting on Wednesday this activity was mentioned and described, and all on the call were given the opportunity to ask questions
My understanding from Dave is that he is asking that, prior to any WG embarking on any sort of pilot/test, the CVE Board should be notified and given the opportunity to weigh in. I had thought that the WG would conduct one or more pilots to collect information, identify problems, and develop possible solutions. Once the WG has some options, these would be presented to the board, and the board would either approve or say “Go Fish,” hopefully with additional guidance to the working group in the latter case.
I see arguments for either approach and will proceed however the board as a whole thinks is best. I see this as a healthy discussion
to better identify how the WGs should operate.
Regards,
-Harold
Well, that’s the question. I do think there needs to be some readout to the Board that describes the efforts and time-boxes them, and then a response with the results of the experiments. Not sure every single detail needs to be laid out, but it should be much more than ‘we are working on automation to distribute data’. ;-)
--
Kent Landfield
817-637-8026
kent_landfield@mcafee.com
Right, but at what level? Do we have to run every technology choice past the board? Or can we simply say "we're working on automation which includes how to distribute the data" and then play with various solutions (git,
rss, atom, message queues, twitter, etc) to see what works and doesn't?
Activity coming out of the working groups is supposed to be an extension of the Board’s activities, but, as has been agreed to by the Board, I repeat what was stated and agreed to...
Real decisions are made on the Board list.
· WGs need to assure the Board is aware of what they are doing and the decisions they are making.
· WG decisions need to be brought back to the Board in the form of recommendations for the Board to decide on.
· WGs should provide a report-out to the CVE Board list assuring any decisions made are clearly identified as recommendations.
· The Board will then have an opportunity, for a specified period of time, to review the recommendations. If Board members have issues or questions, they are expected to ask for clarification and have the discussions needed to assure consensus one way or another. In many cases, there may be no need for clarification or discussion. In that case, if the specified period of time passes, the recommendation(s) are considered approved. Silence begets acceptance...
--
Kent Landfield
817-637-8026
kent_landfield@mcafee.com
Comments inline below.
Thanks,
Dave
On Fri, Apr 21, 2017 at 10:10 AM, Waltermire, David A. (Fed) <david.waltermire@nist.gov> wrote:
While I understand that what you are discussing is just a “pilot”, we have all seen that in reality there is always pressure to put a piloted implementation in place as the final solution. This is often because of expediency and an unwillingness to do any re-engineering. Considering this, I have a few reservations about using GIT as part of this solution.
1) We have not consulted the broader CNA community to see if this is a palatable solution. If the idea is rejected in theory by the CNA community, time and resources spent on a
“pilot” may be wasted.
2) This solution seems to focus almost exclusively on getting data to MITRE from CNAs. While this is needed, we should consider a solution that achieves this while also supporting
broader dissemination to other CVE consumers (e.g., vendors, product customers, NVD, etc.). GIT is limited in what it can do and it seems like we are optimizing “fast” and “cheap” over “good” and “cheaper overall” as a result. I would expect these tradeoffs
to be discussed with the rest of the board and the community before taking any significant actions. It can be argued that a “pilot” is not a significant action, but as I mentioned before, “pilots” often become more than what is intended.
You can trivially clone a git repo and keep it up to date; this works for very large repos (e.g. the Linux kernel). This also makes it possible to add front ends onto it, e.g. you can ingest git and pass out RSS, or any other number of formats/message queues/etc. of the data. We also haven't even run a survey or asked the community what they want, and more importantly what the people having to support this are willing to do (the DWF is a volunteer effort; I'm scratching my own itches, and if they also scratch yours, great; if not, I suggest you get involved!).
[DAW: A frontend can be added to most approaches. My point is git is only a partial solution. You still need to add the front end. If RSS/Atom is used, the solution can support
change control, syndication, and direct access. Such a solution provides more functionality than git in this way, which is potentially “good” and “cheaper overall”, although the upfront investment is probably slower and more expensive.]
3) According to the CVE board charter, working groups are advisory in nature. I believe that any pilot or experiment should be conducted with the blessing of the larger CVE board. This discussion has not happened and hopefully will as a result of this email.
I would disagree with this. Does the board now have to sign off on every experiment of every working group?
[daw: I would argue yes, since such an experiment is a CVE-related activity.]
The whole point of this is to abstract away the details: ideally the WG goes away, does some work (be it discussion, coding, experimental operations, whatever), and then comes back to the board with some results/recommendations.
[daw: You are arguing my point. The board is ultimately responsible. So experiments should be run past the board. This is what advisory means.]
Example: in the last week I've decided on using Google spreadsheets as the pipeline for DWF CVE data, and the workflow has changed several times within that week, e.g. it now looks like:
[daw: BTW, it’s good that you are investigating this. That being said, what you are doing is an individual activity. You do not need to ask permission from the board to do something as an individual. If you bring this to the WG or board, or are doing this as a CVE experiment, then the status changes. I am arguing that since the git experiment is being coordinated as a WG activity, the experiment is a board matter and board approval should be required.]
Web form with CVE request (PUBLIC)[*]
- Check whether the email address has accepted the Terms of Use:
-- If yes, proceed to SPLIT/MERGE and place the data in the "tou accepted" file.
-- If not, send the Terms of Use email and wait for a reply; place the data in the "waiting for tou acceptance" file. This file gets revisited and ... stuff times out? I don't have a good answer yet, to be decided.
- SPLIT/MERGE CVEs and generally check for correctness (e.g. some are missing data/etc.). Editing directly in the Google spreadsheet seems to work well; may need to split up by requester long term if we have a lot of these, who knows.
-- If the CVE is well formed and SPLIT/MERGED, generate the JSON entry (with no CVE ID) and email it to the requester to confirm they sent it in and that it is correct; wait for a reply and place the data in the "waiting for data confirmation" file. This file gets revisited and ... stuff times out? I don't have a good answer yet, to be decided.
-- If the CVE is NOT well formed, request more information from the requester and wait for a reply; place the data in the "waiting for further details" file. This file gets revisited and ... stuff times out? I don't have a good answer yet, to be decided.
- Waiting for data confirmation: ideally we now have a well-formed CVE from someone who accepted the terms of use and confirmed they sent the CVE request in and that it is correct; now we assign the CVE, email it to them, and submit the CVE data to MITRE.
As you can see, I still have some major open loops (e.g. I currently have 156 emails (some people have multiple addresses) that accepted the terms of use, and 54 that I'm still waiting on a reply from); I'm just SPLIT/MERGING the data I have, and once I email that out we'll see what the response rate is. I'm not sure what to do with "orphans" (open them up to the public in case someone who did accept the terms of use wants to research/rewrite them and submit them?) and so on.
The above of course is subject to change as I gain operational experience and learn what works/doesn't work.
[*] the embargo requests flow will probably look the same but use a separate set of files that has more restricted access than the public requests.
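In rough Python, the routing above amounts to something like the following sketch; the function, state names, and the is_well_formed helper are illustrative, not actual DWF tooling:

# Rough sketch of the intake routing described above; names and the
# is_well_formed helper are illustrative, not actual DWF tooling.
def route_request(email, fields, accepted_tou, is_well_formed):
    """Return the holding file/state a new CVE request should land in."""
    if email not in accepted_tou:
        # send the Terms of Use email, then park the request
        return "waiting for tou acceptance"
    if not is_well_formed(fields):
        # ask the requester for more detail, then park the request
        return "waiting for further details"
    # well formed and ToU accepted: mail the draft JSON (no ID yet) back
    # for confirmation before the CVE is assigned and sent to MITRE
    return "waiting for data confirmation"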
If I had to ask the board to ok every experiment I'd be bugging them multiple times a week.
Thoughts?
Regards,
Dave
I had taken notes as the meeting progressed yesterday, but unfortunately I had a crash and lost those notes. Below is a reconstruction from memory, so if anyone has anything they would like to add, or if I missed something, please feel free to chime in.
Per request from the CVE Board the WG began to dive into the issue of automating the sharing of CVE Data.
Initial discussion mentioned some issues with the current CVE JSON format.
-- A major issue is that the minimal format requires too much information to represent CVEs in the rejected and reserved states. A way to handle this issue may be to document that the minimum format is for CVEs in the public state, and that each state may have its own minimal format (see the sketch below). This will complicate the JSON schema and perhaps make it difficult to use the schema in a general sense.
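As a rough illustration of what per-state minimums might look like, here is a hedged Python sketch; the field sets are hypothetical and are not taken from the 4.0 draft schema:

# Illustrative only: hypothetical per-state minimum fields, not the
# actual CVE JSON 4.0 schema.
MINIMUM_FIELDS = {
    "PUBLIC": {"CVE_data_meta", "affects", "description", "references"},
    "RESERVED": {"CVE_data_meta"},                # little more than ID and state
    "REJECT": {"CVE_data_meta", "description"},   # e.g. the reason for rejection
}

def meets_minimum(entry):
    """Check a parsed CVE JSON entry against the hypothetical per-state minimum."""
    state = entry.get("CVE_data_meta", {}).get("STATE", "PUBLIC")
    return MINIMUM_FIELDS.get(state, set()) <= entry.keys()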
-- Talking about states brought up the issue that the current documentation is a bit sparse and could use some additional detail and explanation.
ACTION: Need to determine who can work on improving this documentation and on what timeline.
Discussion then progressed to how to address the issues of CVE/vulnerability sharing. Initial discussion was about how to keep the information relevant for the CNAs; in other words, providing capabilities to allow selective retrieval or filtering of the data based on certain fields or data points (a rough sketch follows the list). Possible data points were:
- CNA Source
- Product Affected
- Vendor Affected
- Published Date
- Last Modified Date
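As a hedged sketch of what such selective retrieval over per-CVE JSON files could look like, assuming the one-file-per-CVE layout described in the access instructions below (the field names "cna", "vendors", and "published" are placeholders, not the actual schema keys):

# Minimal filtering sketch over a directory of per-CVE JSON files.
# Field names here are placeholders, not the real schema keys.
import json
from pathlib import Path

def select(repo_dir, cna=None, vendor=None, published_since=None):
    for path in Path(repo_dir).glob("*/*/CVE-*.json"):
        entry = json.loads(path.read_text())
        if cna and entry.get("cna") != cna:
            continue
        if vendor and vendor not in entry.get("vendors", []):
            continue
        if published_since and entry.get("published", "") < published_since:
            continue
        yield entry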
It was decided to set the filtering and selective retrieval aside in favor of just setting up the “plumbing” to allow smooth exchange of the data. A simple pilot involving just a few of the WG members was then discussed, with git as the mechanism for the exchange and maintenance. Other alternatives were briefly mentioned, including the ongoing work in the IETF MILE WG around ROLIE, but there was no interest in those alternatives at this time. I know other syndication formats, such as Atom and RSS, were mentioned in previous discussions (Art?), but those did not come up either.
In terms of what we want to accomplish in the pilot, two main goals were identified, with a possible third “stretch” goal.
- Allow for MITRE to ingest CVE information from CNAs
- Allow CNAs (and others) to receive updates on CVE information
With the stretch goal:
- Allow NVD to update/provide analysis information (CVSS, CWE, CPE, etc…)
The basic process to allow for ingest was (a rough sketch of the CNA-side step follows the list):
- A GIT repository will be set up and populated with the CVE information in JSON format
- A CNA will clone/fork this repository
- A CNA will make updates/changes in their local repository
- A CNA will issue a pull-request with their changes
- MITRE will process the pull-request
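As a hedged sketch of the CNA-side step in Python (the sanity check and path handling are placeholders; the commit and pull request happen through ordinary git):

# Hypothetical sketch of a CNA-side update: write the revised entry
# into the local clone, then commit and open a pull request with git.
import json
from pathlib import Path

def stage_update(target_path, entry):
    target = Path(target_path)                   # e.g. <clone>/2017/3xxx/CVE-2017-3125.json
    if "CVE_data_meta" not in entry:             # minimal sanity check only
        raise ValueError("entry is missing CVE_data_meta")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(json.dumps(entry, indent=2, sort_keys=True))
    # 'git add', 'git commit', 'git push', and the pull request follow
    # through the normal git tooling.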
The basic process to allow others to receive updates was (a consumer-side sketch follows the list):
- Clone/fork the MITRE provided repository
- Initialize data
- Pull to receive updates
- Process updated files
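A minimal consumer-side sketch, assuming the one-JSON-file-per-CVE layout and using git's reflog to find the files changed by the last pull; the final processing step is a placeholder:

# Consumer-side sketch: pull, then process only the CVE files that
# changed in that pull. The final processing step is illustrative.
import json
import subprocess
from pathlib import Path

def pull_and_process(clone_dir):
    subprocess.run(["git", "-C", clone_dir, "pull", "--ff-only"], check=True)
    changed = subprocess.run(
        ["git", "-C", clone_dir, "diff", "--name-only", "HEAD@{1}", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    for name in changed:
        if name.endswith(".json"):
            entry = json.loads(Path(clone_dir, name).read_text())
            # hand the updated entry to whatever local process consumes it
            print(entry.get("CVE_data_meta", {}).get("ID"))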
A discussion was also had regarding the quality of the information provided by the CNAs in these updates and how assessing this quality could be automated.
- the current format still allows too much freedom to auto-generate a description
- Harold Toomey’s spreadsheet and the Vulntology work could be examples of methodologies to automate
Automation of quality assessment was deferred until we are able to demonstrate the sharing.
For the interim, a manual assessment of quality will be performed as needed, and the common issues identified and the criteria used to assess them will be captured in order to figure out a way to automate.
At the end of the call MITRE agreed to provide information on how to access their GIT repo (below) and two WG members agreed to participate in this trial to test things out.
GIT access instructions:
George has set up a GIT repo holding a collection of files, one per CVE, based on the JSON 4.0 draft spec.
The files are organized in subdirectories based on the year portion of the id and further split into directories
based on the numeric portion such that no directory holds more than 1,000 files.
For example, Fortinet sent MITRE a JSON file with information about CVE-2017-3125 last week; their submission as well as a BID that was added is found in 2017/3xxx/CVE-2017-3125.json.
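To make the layout concrete, here is a small helper (mine, not MITRE's tooling) that maps an ID to its file under that scheme:

# Illustrative helper mapping a CVE ID to its file under the layout
# described above (year directory, then Nxxx buckets of at most
# 1,000 entries). Not part of MITRE's tooling.
from pathlib import Path

def cve_path(repo_root, cve_id):
    _, year, number = cve_id.split("-")        # "CVE-2017-3125" -> "2017", "3125"
    bucket = "%dxxx" % (int(number) // 1000)   # 3125 -> "3xxx", 999 -> "0xxx"
    return Path(repo_root, year, bucket, cve_id + ".json")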
The GIT repo is hosted on MITRE's CoDev facility, which is based on Atlassian BitBucket / Stash.
Access to that is tied into an account CNAs and Board members should have on MITRE's Handshake service.
Assuming you have an account on that, you need to visit
https://login.codev.mitre.org and log in using the MPN button,
which automatically creates an account for you on MITRE's CoDev service.
After that, let George Theall know you have done so and he will give you access to the repo (CVEPROJECT/repos/cvelist).
If you have any questions about how to access GIT please contact George Theall (gtheall@mitre.org).