@fjahr
Last active March 11, 2023 18:16
Thoughts on ASMap for Bitcoin Core releases

ASMap in Bitcoin Core releases/ASMap data sources

Background

To get a general overview of (or refresher on) the ASMap project, please (re)read Gleb's post on the BitMEX blog [1]. To my knowledge, what is described there is still the current status of ASMap file data sources.

Some further questions of mine about the process were discussed in an IRC meeting in November 2021 [2]. It was discussed that fresh ASMap files will be generated for and shipped with every release, and that the historic files that are part of releases should be available in a separate repository under the Bitcoin Core GitHub organization. It was also discussed where the tools used during the release process should be maintained.

Since then, my focus has been on finding the best possible data sources and quality assurance process for the input data of the ASMap file, i.e., a prefix to AS mapping that most accurately reflects the reality of the internet and stays up to date for as long as possible.

ASMap in the release process

Maintainer expectations

I obviously cannot speak for maintainers, but my impression from being active on the Bitcoin Core project for the last ~3 years is the following: Maintainers have a janitorial role and hence can only be expected to follow a mostly predefined process. If possible, processes should be automated; if not, there should be guidelines that are as clear as possible. There is only a small number of maintainers, and their time is limited. In the context of ASMap in releases this means maintainers cannot be expected to become experts in internet infrastructure topics, BGP, etc. Maintainers also cannot be expected to do explorative qualitative or quantitative analysis on ASMap files unless this analysis can be automated and leads to clear outcomes. The Bitcoin Core release process is already quite complicated, with many moving parts, and has been delayed for various reasons in the past. Integrating ASMap into the release process should happen in a way that avoids potential delays to the release.

ASMap data analysis

When I started looking into this issue, the plan for releasing ASMaps in Bitcoin Core could be very roughly summed up as: we get an ASMap from the internet (i.e., RIPE RIS) and then we run analysis on the ASMap file to prove its legitimacy. This analysis could compare a newer ASMap file to a historical file, and differences could be looked at in more detail.

The problem with this is that I have found no good way to improve or audit the ASMap's data quality by comparing it to generic rules we could define, or by comparing data from one source to itself at different points in time. This makes even basic auditing of the ASMap input data nearly impossible today. The reason is that the global internet routing table is a very complex data set that is constantly in flux and does not follow systematic rules. I have tried to find any sort of solution to this, used in practice or suggested in academic research, and have spoken to many practitioners in the industry while doing so. Unfortunately, there seems to be no silver bullet.

Even if we could develop a canary system that would show a warning if, for example, the data set is 2% smaller than the one used for the last release: what would be the consequence of that? As laid out in the previous section, I don't think that maintainers can or want to become (constantly up-to-date) experts on this matter. Even just interviewing experts or reaching out to ISPs/IXPs that are affected by changes in the ASMap file seems like an impractical amount of work. It could hold up the release process for a long time and lead to a lot of confusion around the ASMap feature for users. And if we were instead to develop a decision-tree type of system on what to do in each scenario, I think this would simply be too complex.

Also, if maintainers took on the responsibility for checking the quality of the ASMap file candidate, this would potentially make them liable, or a target of blame, if something goes wrong and an actual issue with the ASMap file in a release arises. Such a situation needs to be avoided.

The best we can do (IMHO) for ASMap releases

What we should be using instead is a process that utilizes the best possible data sources (a lot more on that in a bit) and combines them in a pure function that results in an output that we believe is the best we can get from public data.

Otherwise, the process should be as transparent as possible to allow other contributors to audit and investigate if they see issues with the resulting file. This can be achieved by sharing the verbose logging from the build process, statistics, intermediate artifacts, and initially downloaded raw files after the ASMap file to be used in the release is uploaded.

The role of maintainers in ASMap releases

The maintainers only execute the process as described above. In addition, there may be very simple checks that expose clear bugs in the system, and these should simply be included in the file generation process itself.

The single check that currently seems feasible to me is that the output covers a sizable share of the publicly known Bitcoin network (>99% should be achievable) as well as a sizable part of the routable IP space. Aside from a targeted attack, a failure here could be a hint that the program building the ASMap hit a bug, or that key infrastructure was down/unavailable/blocked during the build process and thus part of the network view is missing.
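For illustration, a minimal sketch of what such a coverage check could look like (hypothetical helper names; in practice Kartograf's own coverage command serves this purpose):

```python
import ipaddress

def coverage(asmap_prefixes, node_ips, threshold=0.99):
    """Return the fraction of known node IPs covered by at least one
    prefix in the candidate map, and whether it clears the threshold.
    Naive O(n*m) scan; a real check would use a prefix trie."""
    nets = [ipaddress.ip_network(p) for p in asmap_prefixes]
    covered = 0
    for ip_str in node_ips:
        ip = ipaddress.ip_address(ip_str)
        if any(ip.version == net.version and ip in net for net in nets):
            covered += 1
    frac = covered / len(node_ips)
    return frac, frac >= threshold

# e.g. coverage(["1.0.0.0/24"], ["1.0.0.5", "8.8.8.8"]) -> (0.5, False)
```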

In terms of timing: the process to generate the ASMap file candidate should be run by a maintainer in the days before rc1 is tagged. The file is then presented at the same time rc1 is tagged. This file is used no matter how long the release process takes and how many release candidates follow. The file is then treated similarly to a PR for Bitcoin Core: until the release is final, testers should use the file together with the release candidates and report back if they hit issues. I am not sure if we should require an explicit ACK from some people or if the absence of issues is enough. Probably an explicit issue should be opened for each ASMap file candidate so that feedback can be collected in one place.

The role of testers and auditors in ASMap releases

Everyone can be a tester or auditor of the ASMap file candidate that is presented along with rc1. Between rc1 and the final release there should be enough time to use the file in test nodes, investigate suspicious data changes, check whether BGP leaks that became public knowledge made it into the file, re-run the file generation process to see if the result is the same, etc. It may be especially interesting to compare the file with personal data that nobody else might have or be interested in: for example, checking that the peers one's own node is connected to are all covered by the ASMap candidate and that one's own IP is mapped to the correct AS.
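As a rough sketch of the peer-checking exercise: assuming a node running with -asmap (recent Bitcoin Core versions then report a mapped_as field in getpeerinfo) and the candidate file parsed into a prefix-to-ASN dict elsewhere, one can compare the node's view with the candidate's:

```python
import ipaddress
import json
import subprocess

def candidate_asn(ip_str, prefix_to_asn):
    """Longest-prefix match of an IP against the candidate mapping."""
    ip = ipaddress.ip_address(ip_str)
    best_net, best_asn = None, None
    for prefix, asn in prefix_to_asn.items():
        net = ipaddress.ip_network(prefix)
        if ip.version == net.version and ip in net:
            if best_net is None or net.prefixlen > best_net.prefixlen:
                best_net, best_asn = net, asn
    return best_asn

prefix_to_asn = {}  # assumed: candidate file parsed into {prefix: asn}

# Compare the ASN the running node assigned to each peer with what the
# candidate file would assign.
peers = json.loads(subprocess.check_output(["bitcoin-cli", "getpeerinfo"]))
for peer in peers:
    host = peer["addr"].rsplit(":", 1)[0].strip("[]")  # drop port/brackets
    try:
        ipaddress.ip_address(host)
    except ValueError:
        continue  # Tor/I2P peers have no IP address
    print(host, peer.get("mapped_as"), candidate_asn(host, prefix_to_asn))
```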

What is important however is that this should not hinder the release process unless testers/auditors find issues that are of concern. This might then trigger a modification of the ASMap candidate or a complete re-run of the ASMap generation process.

General knowledge of data sources

Feel free to skip this section if you already have good knowledge of RIPE RIS, RPKI, IRR, and their trade-offs.

RIPE RIS

RIPE RIS (Routing Information Service) is a data collection service operated by RIPE for research purposes. Partners of RIPE around the world run so-called collector software and feed it their BGP traffic. That data is then forwarded to RIPE.

A quote from RIPE's own description of the goals for RIS: "The Internet routing system has no built-in security mechanisms, so it’s important to collect data to make this system observable and ultimately more secure. That’s where RIS comes in. By collecting and displaying routing data, RIS lays bare the routing system, exposing malicious actors and allowing operators to identify and address security risks." [3] And: "This data is useful for looking at the state of the BGP Internet, debugging/post-mortems of events in BGP, and tracking of long-term trends in BGP." [4]

RPKI

A summary of RPKI (Resource Public Key Infrastructure): "RPKI allows holders of Internet number resources to make verifiable statements about how they intend to use their resources. To achieve this, it uses a public key infrastructure that creates a chain of resource certificates that follows the same structure as the way IP addresses and AS numbers are handed down.

RPKI is used to make Internet routing more secure. It is a community-driven system in which open-source software developers, router vendors, and all five RIRs (Regional Internet Registries) participate, i.e., ARIN, APNIC, AFRINIC, LACNIC, and RIPE NCC.

Currently, RPKI is used to let the legitimate holder of a block of IP addresses make an authoritative statement about which AS is authorized to originate their prefix in the BGP. In turn, other network operators can download and validate these statements and make routing decisions based on them. This process is referred to as route origin validation (ROV). This provides a steppingstone to provide path validation in the future." [5]

IRR

The IRR (Internet Routing Registry) is a distributed set of databases, individually operated by different organizations, that contain routing information for networks on the internet. It is used by network operators to register and maintain information about the internet routes they use to reach other networks. The IRR is the de-facto bridging option for route filtering today even though it has a very loose security model, which has been known for a long time. There is no cryptographic signing of records. Some IRRs contain records that are clearly false or incomplete, often because the party that registered the information failed to maintain it.

There are multiple suppliers of IRR data, and some are better than others. Particularly the IRRs that are not bound to a specific RIR, such as RADB and AltDB, seem to be more vulnerable. A recent attack in the crypto space was possible due to AltDB's open nature [6]. RADB is also open, requiring only a membership fee. The IRR databases operated by the RIRs, however, provide better (though not perfect) security: they hold the actual registration data of their customers and can compare submitted records against it. They manage all the prefixes and can verify the registration of IRR objects against address ownership information [7].

Evaluation of data sources

From the section on the release process we now know that it is unlikely that we can take the data and enhance it by simply applying a set of rules or by comparing it to a historical version of itself. But the one part of the equation that we can improve is the input. This section explains what I believe is the best possible input we can use to generate ASMap files.

RIPE RIS: Status quo with issues

(This section speaks about RIPE RIS, but basically all of it also applies to CAIDA/Routeviews data.)

So far, RIPE RIS has been used as the source for all ASMaps in use, but the quotes on its intended purpose in the previous section make the issues with using RIS as the primary data source obvious: the data is collected for research purposes, to observe and examine malicious behavior after the fact and learn from it. This means RIS goes out of its way to include data such as BGP hijacks and fat-finger leaks. There is no filtering or clean-up of this data (even after a leak has been identified). The RRCs' locations and functionality are also publicly known, so sending malicious announcements with the goal of getting them into RIS is comparatively easy.

Accordingly, RIS data is not used anywhere else to inform routing decisions directly; to my knowledge, we would be the first to use the data in this way. This leads to my secondary concern: as far as I can tell, RIS as a service is not a very high priority for RIPE. The APIs/data structures could change on short notice, or the service could be defunded or even discontinued. RIS is also a more centralized solution than the alternatives, though RIPE remains an important data source. I don't have inside information here, but I also doubt that the RIS data hosting is protected from hacks with the same level of scrutiny as the alternatives. I have similar concerns about possible downtimes and about how quickly a downtime of the hosted data or of the RRCs would be acted upon.

To conclude, I believe we fare best if we use RIPE RIS and comparable data sources only as the input of last resort, i.e., we only use them if we don't get information on a specific mapping from anywhere else.

RPKI: Good but partial solution

We can download RPKI ROAs and build a prefix to AS mapping from it that is validated by the trust chain from one of the RIRs. This gives us a much higher quality of data.

Open-source validator software that does most of the heavy lifting for us is available. The most likely candidates to be used are rpki-client [8] and Routinator [9] as shown by analysis of a variety of factors recently [10].
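To illustrate the first step, a minimal sketch of turning a validator's exported VRPs (validated ROA payloads) into a prefix to AS mapping. The JSON shape is assumed from Routinator's `vrps` output format and should be double-checked against the validator in use:

```python
import json

def vrps_to_map(path):
    """Build a prefix -> set-of-ASNs mapping from a VRP dump.
    Assumes Routinator-style JSON: {"roas": [{"asn": "AS13335",
    "prefix": "1.0.0.0/24", ...}, ...]}."""
    with open(path) as f:
        vrps = json.load(f)["roas"]
    mapping = {}
    for vrp in vrps:
        asn = int(vrp["asn"].removeprefix("AS"))  # "AS13335" -> 13335
        mapping.setdefault(vrp["prefix"], set()).add(asn)
    return mapping
```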

The downside of RPKI is that its data is not complete, since RPKI is not as widely deployed as we would like. If an AS has not deployed RPKI, its prefixes cannot be validated and will not be present in an RPKI repository. In my last test a few weeks ago, I found that about 60% of the Bitcoin network can be validated. This is a good base but not enough. More on that in the next section.

Furthermore, we may need to adapt the bucketing logic in Bitcoin Core, since there is no one-to-one relationship between a prefix and an AS in RPKI: RPKI does not prevent multiple ASes from announcing and validating the same prefix. But I think this is a positive for the security of the bucketing logic. It is well known that large corporations don't have just one ASN, and this is what leads to announcements where multiple ASNs are associated with a single prefix. This means we can identify when two ASNs are one entity, or two very closely tied entities, and put them into the same bucket together. To reflect this, the ASMap format will need to be changed from a simple map to a multimap.
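As a sketch of what the bucketing could do with such a multimap (this is not Bitcoin Core's actual logic, which is still WIP below): group ASNs that validate a common prefix into one entity, e.g. with a union-find:

```python
def asn_groups(prefix_to_asns):
    """Group ASNs that announce at least one common prefix, so closely
    tied networks can end up in the same bucket."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for asns in prefix_to_asns.values():
        first, *rest = list(asns)
        find(first)  # ensure single-ASN prefixes appear in the result
        for other in rest:
            parent[find(other)] = find(first)

    groups = {}
    for asn in list(parent):
        groups.setdefault(find(asn), set()).add(asn)
    return groups

# e.g. asn_groups({"1.0.0.0/24": {64501, 64502}, "2.0.0.0/24": {64502, 64503}})
# -> one group containing {64501, 64502, 64503}
```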

On the plus side, RPKI adoption is growing [11], so we can expect further improvements in data completeness over time. To some degree, I think it would also be interesting to call the wider Bitcoin community's attention to this and ask people to check whether their hosting provider has already implemented RPKI and, if not, to message the provider that it would be a good idea to do so.

IRR: Good enough to complete our data set

IRR is not a perfect data source, as laid out in the respective general knowledge section above. However, its data should still be of higher quality than that of RIPE RIS. This is why I propose to use the RIR IRR databases as additional input for the prefix to AS mapping and to prefer their input over RIPE RIS mappings.
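In a minimal sketch, this preference order (RPKI over IRR over RIPE RIS) amounts to a layered merge. Real tooling such as Kartograf additionally has to resolve overlapping prefixes of different lengths across sources, which this ignores:

```python
def merge_maps(rpki, irr, ris):
    """Combine per-source prefix -> ASN maps; a later update() wins, so
    RIS entries survive only where IRR has none, and IRR entries only
    where RPKI has none."""
    merged = dict(ris)   # lowest priority as the base ...
    merged.update(irr)   # ... overridden by IRR ...
    merged.update(rpki)  # ... overridden by validated RPKI data
    return merged
```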

Further filtering/input potential

Further options for filtering/validating the resulting ASMap file still need to be tested, but this should happen by way of reviewers first, as described above. Once reviewers have developed and tested a rule set for a new validation/filtering step that is shown to be robust over time and realistic for maintainers to adopt, it should be added to the process.

One idea that seems promising is to use data collected by someone running their own BGP collector to filter out any entries in the result that they have not observed as publicly announced. A personal BGP dump could also serve as an additional input source for users who have access to such data, replacing inputs from RIPE RIS/Routeviews in the process of building the final map. The tooling for this should be provided, but of course running one's own BGP collector is not possible for many users.
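A sketch of that filtering step, assuming a personal BGP/MRT dump has already been parsed into a set of observed prefixes elsewhere:

```python
def filter_by_collector(candidate, observed_prefixes):
    """Keep only candidate entries whose prefix has actually been seen
    announced by one's own BGP collector."""
    observed = set(observed_prefixes)
    return {p: asn for p, asn in candidate.items() if p in observed}
```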

Code

Data collection

The Kartograf project [12] allows building an ASMap file (without the compression part) from validated RPKI ROAs, RIR IRR databases, and CAIDA pfx2as maps. It is also possible to check the coverage of lists of IPs (which could come from a personal node or another source [13]) and to merge different maps into one. There are still many things to improve in terms of documentation, performance, etc., but it should work today. Be aware that processing can take a while (I have seen ~6h on my system).

Though still very much an imperfect system, it aspires to fulfill the transparency goals expressed above: the output shares all kinds of potentially relevant statistics during processing, and all the artifacts (downloaded files, intermediate results) should be available to share after the process.

Bitcoin Core "multimap" bucketing

WIP

Acknowledgements

Special thanks to Gleb, Bruno and Duncan for feedback on earlier versions of this write-up.

References

  1. https://blog.bitmex.com/call-to-action-testing-and-improving-asmap/
  2. https://bitcoin-irc.chaincode.com/bitcoin-core-dev/2021-11-11#736102
  3. https://www.ripe.net/analyse/internet-measurements/routing-information-service-ris
  4. https://ris.ripe.net/docs/20_raw_data_mrt.html#name-and-location
  5. https://rpki.readthedocs.io/en/latest/about/introduction.html#about-resource-public-key-infrastructure
  6. https://www.kentik.com/blog/bgp-hijacks-targeting-cryptocurrency-services/
  7. https://blog.apnic.net/2022/04/07/irr-hygiene-in-the-rpki-era/
  8. https://www.rpki-client.org/
  9. https://routinator.docs.nlnetlabs.nl/en/stable/installation.html
  10. https://ripe85.ripe.net/presentations/25-rpki-validators-ripe85.pdf
  11. https://blog.apnic.net/2023/01/18/rpkis-2022-year-in-review-growth-and-innovation/
  12. https://github.com/fjahr/kartograf
  13. https://gist.github.com/fjahr/29dafc31d5e4297dbe9ecaf540461564

Addendum 1

The ASMap candidate file does not need to be produced by a maintainer; it may just as well be done by another contributor. There could be an asmap-data repo in the Bitcoin Core org where anyone could open a PR for a candidate ASMap file; here is a first example of what this might look like: fjahr/asmap-data#1

Thanks Jon Atack.

@jonatack

Could a regular/long-term contributor open a PR to propose the ASMap, rather than requiring that a maintainer do it? Other parts of the release process are done this way. Does the ASMap file generation require additional (difficult-to-verify) trust?

@mzumsande

What is the problem with "comparing data from one source to itself from different points in time"? Shouldn't this data change much less in a time span of days compared to, for example, months, so that some algorithm resulting in a similarity metric for two ASMaps might work?

I can see why it might be hard to use this to improve the quality, but wouldn't it already improve confidence if other contributors built their own ASMaps, and reported a high similarity metric compared to the one suggested for the release?

@fjahr

fjahr commented Jan 31, 2023

What is the problem with "comparing data from one source to itself from different points in time"? Shouldn't this data change much less in a time span of days compared to, for example, months, so that some algorithm resulting in a similarity metric for two ASMaps might work?

I can see why it might be hard to use this to improve the quality, but wouldn't it already improve confidence if other contributors built their own ASMaps, and reported a high similarity metric compared to the one suggested for the release?

Thanks, a good question that is at the core of this write-up. I am not very good at getting my point across, I think (you are not the first to make a similar comment).

Of course, people can do this, comparing their own file to a public one, but I cannot imagine what a result would have to look like to make people suspicious. A much lower number of entries, maybe, but that should be caught in the coverage tests and the logging of the build process. In the transparent process that I am describing, the total number of entries etc. will of course be shown, and if a reviewer is curious they can dig in. But I don't think it's feasible for maintainers to do this.

Targeted BGP attacks on crypto projects last year were based on very few changes, see https://freedom-to-tinker.com/2022/03/09/attackers-exploit-fundamental-flaw-in-the-webs-security-to-steal-2-million-in-cryptocurrency/ or https://www.kentik.com/blog/bgp-hijacks-targeting-cryptocurrency-services/. Otherwise, within days you would probably see legitimate changes in the hundreds; over months it's thousands or tens of thousands. I don't have great resources that show this super clearly, but maybe this post is good for getting a feeling for what we are talking about here: https://blog.apnic.net/2023/01/06/bgp-in-2022-the-routing-table/ Just in 2022 and just in IPv4, the routing table grew by 35,000 entries. Then in terms of churn, there were about 200,000 update messages daily (https://blog.apnic.net/2023/01/11/bgp-in-2022-bgp-updates/). Of course, only a fraction of these may end up causing a change in the final ASMap. But these are also just averages; there could be a big transfer of IP blocks happening on a single day and there would be a big change. If we (as in Bitcoin Core) were to say: ok, if the ASMap you have built is <1% different from the one that is signed by the maintainers, it's probably ok, that is not a credible claim IMO. Individuals can still do this, of course; that is what the transparent process is for. But I don't think it's feasible as a blocker to the release process.

I cannot prove that this is impossible, but I have tried to find a feasible way for months, I have looked at any research I could find, I have spoken to as many practitioners as would talk to me, and I simply couldn't find one. If someone wants to give it a shot and define a diffing logic for comparing files, and also describe a rule set that gives clear instructions on what should be done in which case, then I am happy to review it. I just don't think it's possible, and I am describing the best thing that I could find that is possible, so that we are finally able to make progress on this :)

I know this sounds kind of discouraging, but this is a much more murky area than what we usually expose users to. When people run a reproducible build or just verify the signatures, there is always a clear yes/no answer. I would need someone to describe how we can get to at least a somewhat clear yes/no answer before advising users to do this exercise, and I fail to see how we get there.

@fjahr

fjahr commented Jan 31, 2023

Could a regular/long-term contributor open a PR to propose the ASMap, rather than requiring that a maintainer do it; other parts of the release process are done this way. Does the ASMap file generation require additional (difficult-to-verify) trust?

Yepp, thanks, there is no reason that the candidate has to be produced by a maintainer. I will add this in an addendum.

@fjahr

fjahr commented Feb 1, 2023

I created a fake asmap-data repo here, just to demo how a PR review process might look as a basis for further discussion: fjahr/asmap-data#1

Such a repo could exist in the Bitcoin Core org, and there the ASMap file candidates could be suggested by anyone when a release is coming up.

@Emzy

Emzy commented Feb 3, 2023

The ASMap file will always be only a snapshot in time. So if prefixes change in the future, we will use an old view of the network.

I think we are only interested in prefixes that are stable over a longer time, like months. Others that change often will be wrong shortly after release.
We should additionally analyze data from different points in time and only include stable prefixes.
As errors or BGP hijacks are detected and fixed quickly, those would that way also not be included.

@fjahr

fjahr commented Feb 6, 2023

The ASMap file will always be only a snapshot in time. So if prefixes change in the future, we will use an old view of the network.

I think we are only interested in prefixes that are stable over a longer time, like months. Others that change often will be wrong shortly after release. We should additionally analyze data from different points in time and only include stable prefixes. As errors or BGP hijacks are detected and fixed quickly, those would that way also not be included.

It is definitely true that stable prefixes are better for us; however, I would prefer having an AS in the table that might change over not having any entry in the table at all. From what I have heard, RPKI ROAs should on average not change as often as routes that are announced without them, but I haven't collected statistics on that yet. I will keep this in mind.

Over several months, I think we would exclude a huge number of legitimate mappings from the ASMap file as false negatives. That may be too big of a cost, though it will be interesting to see how volatile the ASNs of Bitcoin nodes are. That may give us hints on whether there is a way to use this data. But there are several things that make me very sceptical. One is that the routing table is steadily growing. Just for IPv4 alone there were 35k new entries between the start and the end of 2022 (the net of 112k added and 77k removed), see https://blog.apnic.net/2023/01/23/ip-addressing-through-2022/ (below table 9). In the same paragraph they mention that 23k prefixes changed their ASN over the same period. So per month we might remove ~2k legitimate mappings just in IPv4.

This is what I mean by the data being very noisy, which IMO prevents diffing logic from being very helpful for us. We should still try to prevent hijacks and leaks from ending up in the file as much as possible. But if a legitimate mapping is outdated by a legitimate transfer, there is a high chance that this is actually not that big of an issue for us. The main case I see that could be bad for us is if prefixes are consolidated, i.e., if AWS were to buy Heise, for example. The peers would be in different buckets but would actually need to be in the same one in that case. On the other hand, if such a merger occurred, Heise would probably still keep their ASNs, and it might take a while until we see an effect in the routing tables, most likely via multiple ROAs announced for the same prefix.

Still, I don't want to discourage anyone from doing this research! Once we have settled on one canonical way to build the map, I will definitely create one every week or so to see what the churn looks like over time. But I think it will take a long time until this yields something usable (if ever), and we can already use the ASMap feature today with the approaches that we have on the table.

@brunoerg

brunoerg commented Feb 6, 2023

Of course, people can do this, comparing their own file to a public one, but I can not imagine what the result would look like that should make people suspicious.

For me, the problem is not identifying changes between two files from different points in time but what those changes mean and how much they matter for us. What kind of changes should look suspicious to me? And, of course, it could require some manual work, IMO.

I ran an experiment by generating two ASMap files from around two Core release dates (2021-09-14 and 2022-04-25), and then I compared them with respect to some addresses that matter to me (extracted from the new/tried tables); I did it with asmapy. Some diffs are (from -> to):

58543 -> 13335
26636 -> 213250
8987 -> 16276
204644 -> 51167
57743 -> 58061
3265 -> 60131
39891 -> 200350
49867 -> 13030
39651 -> 1257
207790 -> 197540
204526 -> 40994
10349 -> 31898
56905 -> 52106
395228 -> 14061
46930 -> 14061
3525 -> 24940
47524 -> 41164
200505 -> 61400
213395 -> 25394
64237 -> 26832
51937 -> 49223
12322 -> 29447
# 794520832 (2^29.57) IPv4 addresses changed; 42519978114391872820895708002704687104 (2^125.00) IPv6 addresses changed

I tried to understand the changes, and most seem to be big transfers within the same institution, which might not look suspicious or weird to us. However, I had to check this manually; I couldn't do it automatically (maybe PeeringDB could help?). That is the problem.

@mzumsande

This is what I mean by that the data is very noisy and that IMO prevents diffing logic from being very helpful for us.

It depends on the time scale on which the data changes: whoever creates the ASMap from a public source could announce in advance a specific date and time at which they would do so, and their exact methodology. Then a few others could attempt the same, starting at exactly the same time, so the difference would be a few minutes, not days or months. Does the data change so quickly that even this wouldn't be helpful?

@fjahr

fjahr commented Feb 7, 2023

This is what I mean by that the data is very noisy and that IMO prevents diffing logic from being very helpful for us.

It depends on the time scale on which the data changes: whoever creates the ASMap from a public source could announce in advance a specific date and time at which they would do so, and their exact methodology. Then a few others could attempt the same, starting at exactly the same time, so the difference would be a few minutes, not days or months. Does the data change so quickly that even this wouldn't be helpful?

It changes more over time, of course, but even in a single day or hour it could change a lot. Sometimes it could be 5% different within a day without anything malicious going on, and on another day it could be 0.1% different but with a critical hijack in there. Churn as a total number is just not a helpful metric in any time frame.

One example: suppose we want to do this, and we both agree to run Kartograf on the same day at the same time, and it turns out there are only very minor differences (let's say <100 ASNs changed). I could have gone into my file and just deleted the 10 or 20 prefixes where I know the most nodes are hosted (Heise, AWS, etc.). Now the churn suggests that both our files are safe, but only yours is safe and mine is not. We would only know from looking at the data qualitatively. Well, in this example `kartograf cov` could also catch it if you ran it on my file, but the point remains that the difference between files does not give me any confidence in their trustworthiness. A bad change doesn't have to be a big change, and bad changes that are big changes can be discovered through easier means than diffing.

Also, if the date and time for creating the file are publicly announced in advance this would invite attackers to run a BGP hijack right around this time to get it into collector data or they could try other temporary DOS attacks on data sources.

I would still invite people to generate their own files and run diffs on them, but that should be an individual effort, and the release process should not depend on it, because that would very likely delay the release due to a lack of confidence that the file is free of issues.
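To be clear, the mechanical part of such a diff is trivial, something like the sketch below (assuming both files are parsed into prefix -> ASN dicts); what is missing is the decision framework for interpreting its output:

```python
def diff_maps(old, new):
    """Entry-level diff of two prefix -> ASN dicts. Produces the raw
    churn; deciding which changes are benign and which are suspicious
    is the open problem."""
    added = {p: new[p] for p in new.keys() - old.keys()}
    removed = {p: old[p] for p in old.keys() - new.keys()}
    moved = {p: (old[p], new[p])
             for p in old.keys() & new.keys() if old[p] != new[p]}
    return added, removed, moved
```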

And I think I have mentioned this before: I am happy to be proven wrong here and maybe there is some academic research that I overlooked. I am also happy to review and test a proposal for using diffing in the release process, but it can not be just "we'll diff something and see what happens". There would need to be a decision framework in place up front that clearly says: we believe if the diff(s) of files collected under these specific circumstances do not show differences above these specific thresholds, then it is safe. If they are above the thresholds, this is how we get to a new candidate file that is acceptable without blocking the release process.

Maybe the most likely way I can see right now of making this work is the following: when you want to open a PR as described in step 1 here, you could tell @brunoerg privately about it and ask him to generate the file at the same time with the same process. You open the PR and then Bruno comments, saying that you both collaborated, what his differences were, and that he thinks the file is good. I personally would still give this file the same scrutiny, and I would also discount Bruno's ACK a bit (if he even gives it explicitly), similar to how I would view a PR that included individual commits from both of you. But this may give you an additional boost from other individual reviewers, and maybe over time this turns into a standard as the files created this way get more reviews/ACKs and tend to be used in the binary over those created by only one contributor.
