One does not commit or compile credentials
Context:
This meme was brought to you by the PyPI Director of Infrastructure, who accidentally hardcoded credentials, which could have resulted in compromising the entire core Python ecosystem.
If I had a dollar for every API key inside a config.json…
Here’s the thing: config.json should have been in the project’s .gitignore.
Not just because of the credentials. How else do you change it to test with different settings?
For a lot of my projects, there is a config-<env>.json that is selected at startup based on the environment.
Nothing secure in those, however.
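That per-environment pattern takes only a few lines. A minimal sketch, assuming an `APP_ENV` environment variable and `config-<env>.json` files (both names are illustrative, not from the original post):

```python
import json
import os

def load_config():
    """Pick a config file based on the current environment (dev by default)."""
    env = os.environ.get("APP_ENV", "dev")  # e.g. dev, staging, prod
    with open(f"config-{env}.json") as f:
        return json.load(f)
```

Each config file stays out of version control (or only a credential-free config-dev.json template is committed), so swapping settings never means editing tracked files.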
Fun fact: if you search for “removed key” or something similar in GitHub you will get thousands of results of people removing accidentally committed keys. I’m guessing the vast majority of those removed keys haven’t been revoked.
don’t commit credentials; split them up and place each part in a different place in the code and use code comments as a treasure map and make them work for it.
Ah, the horcrux technique.
At my workplace, we use the string `@nocommit` to designate code that shouldn’t be checked in. Usually in a comment:

    // @nocommit temporary for testing
    apiKey = 'blah'; // apiKey = getKeyFromKeychain();

but it can be anywhere in the file.
There’s a lint rule that looks for `@nocommit` in all modified files. It shows a lint error in dev and in our code review / build system, and commits that contain `@nocommit` anywhere are completely blocked from being merged. (The code in the lint rule does something like `"@no" + "commit"` to avoid triggering itself.)

I also personally ask myself how a PyPI Admin & Director of Infrastructure can miss out on so many basic coding and security-relevant aspects:
- Hardcoding credentials instead of using dedicated secret files, environment variables, or other secret stores
- For any source that you compile you have to assume that - in one way or another - it ends up in the final artifact - Apparently this was not fully understood (“.pyc files containing the compiled bytecode weren’t considered”)
- Not using an isolated build process, e.g. a CI with an isolated VM or a container - This will inevitably lead to “works on my machine” scenarios
- Needing the built artifact (container image) only locally, but pushing it to a publicly available registry
- Using an access token that has full admin permissions for everything, despite only requiring it to bypass rate limits
- Apparently using a single access token for everything
- When you use Git locally and want to push to GitHub, you need an access token. The fact that the article says “the one and only GitHub access token related to my account” likely indicates that this token was at least also used for that
- One of the takeaways of the article says “set aggressive expiration dates for API tokens” - This won’t help much if you don’t understand how to handle them properly in the first place. An attacker can still use them before they expire or simply extract updated tokens from newer artifacts.
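The first point in that list is cheap to fix. A minimal sketch of reading the token from the environment instead of the source (the variable name is made up for illustration):

```python
import os

def get_github_token():
    """Read the token from the environment; fail loudly if it's missing."""
    token = os.environ.get("GITHUB_TOKEN")
    if not token:
        raise RuntimeError("GITHUB_TOKEN is not set; refusing to fall back to a hardcoded value")
    return token
```

Failing loudly matters: a silent fallback to a default or hardcoded value is exactly how a test credential ends up in a published artifact.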
On the other hand what went well:
- When this was reported it was reacted upon within a few minutes
- Some of my above points of criticism now appear to be taken into account (“Takeaways”)
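The `@nocommit` guard described a few comments up can be approximated with a short pre-commit script. A sketch (not the actual lint rule) that scans a list of modified files for the marker, building the marker string at runtime so the script doesn’t flag itself:

```python
import sys

# Build the marker at runtime so this file doesn't trigger its own check.
MARKER = "@no" + "commit"

def find_marked_files(paths):
    """Return the files that contain the do-not-commit marker."""
    flagged = []
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            if MARKER in f.read():
                flagged.append(path)
    return flagged

if __name__ == "__main__":
    bad = find_marked_files(sys.argv[1:])
    if bad:
        print("Blocked: marker found in " + ", ".join(bad))
        sys.exit(1)
```

Wired into a pre-commit hook or CI gate (fed the output of `git diff --name-only`), a non-zero exit blocks the merge, as the commenter describes.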
Yes kids, the only stuff in ANY repo (public or otherwise) should be source code.
If it is compiled, built, or otherwise modified by any process outside of you the developer typing in your source code editor, it needs to be excluded/ignored from being committed. No excuses. None. Nope, not even that one.
No. 👏 Excuses. 👏
Two choices: either the production software isn’t in the exact state the repo was in when the software was built, or I can’t get build timestamps into the software.
To err is to be human… right?
To be honest, this doesn’t inspire much confidence, but who am I? If someone looked at my OpSec, they’d probably be horrified.