• wlkr 18 hours ago

This might just be the frequency illusion at play, but there seem to have been a number of high-profile supply chain attacks of late in major packages. There are several articles on the first few pages of HN right now with different cases.

Looking back ten years to `left-pad`, are there more successful attacks now than ever? I would suspect so, and surely the value of a successful attack has also increased, so are we actually getting better as a broad community at detecting them before package release? It's a complex space, and commercial software houses should do better, but it seems that whilst there are some excellent commercial products (e.g. CI scan tools), generally accessible, idiot friendly tooling is somewhat lacking for projects which start as hobby/amateur code but end up being a dependency in many other projects.

I've cross-posted my comment from the current SAP supply chain attack thread [0].

[0]: https://news.ycombinator.com/item?id=47964003

• jefftk 11 hours ago

>This might just be the frequency illusion at play, but there seem to have been a number of high-profile supply chain attacks of late in major packages.

It's real. As of the beginning of April we'd had 7 in the past 12 months vs 9 in the two decades before that: https://www.jefftk.com/p/more-and-more-extensive-supply-chai...

• godelski 8 hours ago

I think the real question is "are we just hearing about it more now or has the actual rate of attack increased?"

• jefftk 2 hours ago

I looked pretty hard, with some LLM assistance, so if it was "are we just hearing about it more now" it would have to be old attacks that happened without being discovered and written up.

• esseph 2 hours ago

The rate of attack has increased over the past 5 years, and multiple wars and proxy wars have broken out.

• JohnMakin 18 hours ago

People are ramming tons of code into places without ever looking at it; it follows that supply chain attacks would increase accordingly.

• eddythompson80 18 hours ago

Yeah, and ultimately nobody cares. Everyone assumes it's just some process miss, and we need to add another step to the process and move on. Fuck-ups that would have killed the credibility of projects 10 years ago are now treated as "eeh, what are you gonna do. Sometimes you ship malware. Will look into it"

• CoastalCoder 13 hours ago

> Yeah, and ultimately nobody cares.

I assume you're using hyperbole.

Some of us are very aware and concerned about the risk. But like Cassandra from Greek mythology, we see the coming disaster and feel powerless to stop it.

• kakacik 5 hours ago

Well yeah, but if you don't have some critical mass that's very vocal/influential, in the end "nobody cares enough".

• KronisLV an hour ago

> Yeah, and ultimately nobody cares.

More like hiding their heads in the sand in circumstances that are outside of their ability to fix. None of the tooling or practices out there push you in the direction of not being at risk, or even provide easy ways to stay completely safe: no stack where everything you NEED to develop software is provided out of the box with zero external packages; no flow where pulling in a new package makes you review all of its source code line by line and compile everything instead of pulling binary tooling blobs; no built-in vulnerability and configuration scanning so you don't get pwned by Trivy or leave an open S3 bucket somewhere - which also means you'd obviously need thorough observability and alerting for any of the cloud stuff you do.

And even when those exist, your org's projects might be painfully out of date - too much so to use those approaches - or the org culture might not be there, or any number of other issues I can't even imagine. On one hand, people are running out-of-date software full of CVEs; on the other, using dependencies that are too new puts you at risk of compromised packages. It's like we're being squeezed by rocks on both sides in a landslide. Even at the OS level, the fact that everyone is not running something like Qubes OS or regular VMs for development is absolutely insane. The fact that all software isn't sandboxed and that desktop OSes don't prompt for permissions like mobile apps do is absolutely insane. That we don't have firewalls like Glasswire as standard - ones that prompt you for external connections and let you easily block what you don't trust - is insane.

Despite lots of people trying their best, on some level everything both up and down the stack is absolutely fucked, for a variety of complex reasons. You'd have to largely tear it all down and rebuild everything, starting with your OS kernel in a memory safe language with formal proofs and thorough testing for everything (if it took SQLite as long as it did to get a decent test suite, it might well take on the order of decades for a production OS kernel and drivers), then do the same for all userland software and DBs and tooling and dependency management and secrets management (not just random files - special hardware, most likely), and so on. It's not happening, so we just build towers of cards.

For something more practical: https://nesbitt.io/2026/03/04/package-managers-need-to-cool-...

• Bengalilol 17 hours ago

Good old « release first, fix later »

• fennecbutt 12 hours ago

It's not that nobody cares. It's just that nobody who does care has the money or the power to change it.

Business school. Ahaha.

• fjdjshsh 12 hours ago

Are you talking about open source or commercial products? I can't speak for the PyTorch Lightning case, but I wouldn't be surprised if the maintainers didn't get any $ from it. They would be sad if the credibility of the package suffered, but ultimately it wouldn't make a big difference to them.

• michaelt an hour ago

I feel this is an inevitable consequence of a move towards languages with a culture of many small and transitive dependencies.

If my project has 100 dependencies, each releasing even a few times a year, that's hundreds of updates a year - an updated dependency is inevitably a near-daily occurrence.

• michaelt an hour ago

> idiot friendly tooling is somewhat lacking for projects which start as hobby/amateur code but end up being a dependency in many other projects.

Historically, extra-security-scanned artefact handling has been a paid enterprise option, whereas the less secure option is the much-less-hassle default.

IDK how good a business model this is, I suspect not very.

• zarzavat 9 hours ago

FWIW left-pad was not an attack, it was a bug in NPM. It should not be possible to unpublish package versions that are depended on by other published packages. On the other hand, it should be possible to unpublish certain package versions that are new and not depended on.

NPM should have returned error codes when the author of left-pad attempted to remove all his data with the intention of leaving the service.

To quote Wikipedia:

> After Koçulu expressed his disappointment with npm, Inc.'s decision and stated that he no longer wished to be part of the platform, Schlueter [author of NPM] provided him with a command that would delete all 273 modules that he had registered.
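
The guard itself is cheap to sketch; something like this on the registry side (hypothetical names, not real npm internals):

  # hypothetical registry-side unpublish rule
  def can_unpublish(registry, package, version):
      dependents = registry.reverse_dependencies(package, version)
      is_new = registry.age_in_hours(package, version) < 72
      # new and unused: fine to pull back; depended-upon: refuse with an error
      return is_new and not dependents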

• saltyoldman 6 hours ago

The attacks from TeamPCP were successful at stealing credentials recursively, so it is very likely that someone working on this pytorch-related package recently pulled the bad litellm or trivy (or weren't there like 8 others?).

And the reason it jumps from npm to pip to whatever is that it's trying to find all the user's keys in well known locations for any of these repos.

So TeamPCP is sitting on tens of thousands of passwords or keys, and they just need time to run tests on them to figure out what packages they can release to get even more attacks out there.

Why haven't all the major repo vendors done a full cred wipe? No idea (unless they have and I just wasn't on the email list).

• mschuster91 16 hours ago

> Looking back ten years to `left-pad`, are there more successful attacks now than ever? I would suspect so, and surely the value of a successful attack has also increased, so are we actually getting better as a broad community at detecting them before package release?

The value has increased, and that is what drives all these attacks. Cryptocurrencies are to blame in particular, because they provide not just a way to launder the proceeds but also a juicy target in themselves.

And what is stolen with today's malware? Cloud credentials. Either to use for illicit mining, which is on the decline, or to run extortion campaigns, which are made possible by cryptocurrencies. All too often it's North Korea or Iran running these campaigns.

• LtWorf 6 hours ago

> All too often it's North Korea or Iran running these campaigns.

I'm sure the NSA does similar things to them but we aren't really informed about that detail.

• crabbone 16 hours ago

> Looking back ten years to `left-pad`, are there more successful attacks now than ever?

I can't vouch for the number of attacks, but, since we are talking about Python, nothing has substantially changed since the time of `left-pad`. The same bad things that enabled supply chain attacks in Python ten years ago are in place today. However, it looks like there are more projects and they are more interconnected than before, so it's likely that there are either more supply chain attacks, or that they are more damaging, or both.

Here's my anecdotal experience with Python's packaging tools. For a while, I was maintaining a package to parse libconfuse configuration language. It started as a Python 2.7 project, but at the time there was already some version of Python 3 available, so, it was written in a way that was supposed to be future-proof.

I didn't need to change the code of the project in the last ten or so years, but roughly once a year something would break in the setup.py. Usually, because PyPA decided to remove a thing that didn't bother anyone.

When Python 3.13 came out, like clockwork, setup.py broke. I rolled up my sleeves and removed the dependency on setuptools; instead, I wrote some Python code that generated a wheel from the project's sources. I didn't look up the specification of the RECORD file in the dist-info directory, and assumed that sha256().hexdigest() would generate the checksums in the desired format. And that's how I shipped my packages...

Some time later, the company added an AI reviewer to the repo, and it discovered that instead of hexdigest() the checksums have to be base64-encoded with the padding removed...
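
For anyone else who ends up hand-rolling a wheel generator, this is the encoding the reviewer was pointing at (a minimal sketch):

  import base64
  import hashlib

  def record_hash(data: bytes) -> str:
      # RECORD wants urlsafe base64 of the raw digest with the trailing
      # '=' padding stripped, not hexdigest()
      digest = hashlib.sha256(data).digest()
      return "sha256=" + base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")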

Now, to the punchline: nobody cared. The incorrectly generated packages installed perfectly fine without warnings. Nobody checks the checksums.

More so: nobody checks that during `pip install` or the more fancy `uv pip install` the packages aren't built locally (i.e. nobody cares that package installation will result in arbitrary code execution). It's not just common, it's almost universal to run `pip install` on production machines as a means of deploying a Python program. How do I know this? -- The company I work for ships its Python client as a... source package. Not intentionally. We are just lazy. But nobody cares.
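
To make the arbitrary-code-execution point concrete, everything in an sdist's setup.py runs as the installing user the moment you `pip install` it (a sketch, with a hypothetical package name):

  # setup.py -- executed in full during installation of a source package
  from setuptools import setup

  # nothing stops code here from reading ~/.aws/credentials or phoning home
  print("this just ran on your production machine")

  setup(name="example-package", version="0.1.0")

`pip install --only-binary :all:` refuses source distributions outright, but I have yet to see it on a production machine.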

• zelphirkalt 15 hours ago

It's probably the same people who think that merely having a requirements.txt stating packages with versions, or even without them (2010 sends its regards), is fine. Open a random open source Python project on GitHub, and chances are you will see this kind of thing. Stands to reason that people in companies are not acting much differently.

• pxc 11 hours ago

> It's not just common, it's almost universal to run `pip install` on production machines as a means of deploying a Python program.

Maybe a Python culture problem; maybe a hallmark of Python's status as an "easy to hire for", manager-friendly, least-common-denominator blub language; maybe a risk that stems from the conveniences of interpreted languages... but this is such a shame in this day and age.

It's seriously not difficult to do better. And if this is what you're doing, you're also missing out on reproducible environments both in dev and in prod. At least autogenerate a Nix package! You still don't need to publish any artifacts, but you can at least have the thing build in a sandbox or yeet the whole closure over SSH.

It's also not that hard to get a Docker image out of a Python project.

You only need one platform-minded person on the whole development team to make this happen.

What is going on???

• ifwinterco 6 hours ago

"Almost universal" is a bit of a stretch, most of the time these days Python apps are deployed as Docker containers, and if you're using k8s this becomes effectively mandatory.

However, a lot of the time, especially for older codebases, the Docker build will just run pip install from public PyPI without a proper lockfile.

So at least install code isn't being executed on your production machine, but there's still significant surface area for supply chain attacks.

• jrumbut 11 hours ago

As scary as it is right now, it warms your heart a little bit that this system existed for 30 years and is only now reaching a crisis point.

I ran an open source project with tens of thousands of downloads (presumably all either developer machines or webservers, so even a small number is valuable) and never received a malicious pull request, offer of a bribe to install malware, or a phishing attempt with enough effort to even catch my attention.

What it says to me is that there weren't a lot of people working on the crime side of this. It's like dropping your wallet in a bar bathroom and coming back to find it still there.

• hulahoof 13 hours ago

left-pad was an npm issue

• imtringued 5 hours ago

virtualenv isn't relocatable out of the box, so how else would you deploy a python project?

You can call it laziness, but it's not like the python ecosystem has ever developed an answer for this problem. The only reasonable answer has been to use docker, which is basically admitting that the python community did nothing.

• wolfi1 2 hours ago

But is Docker the solution, though? I don't think so; from my understanding, Docker itself is prone to supply chain attacks.

• cachius 15 hours ago

No need to invoke frequency illusion when every moderate HN lurker already stopped counting. https://socket.dev/blog gives a good impression, but a dedicated article would be nice. Maybe recurring once or twice a year.

If you're interested in synchronicity and frequency illusion, Sergei v. Chekanov wrote a book that sounds interesting https://jwork.org/designed-world/

> Have you ever experienced coincidences that cannot be logically explained? This book helps the readers understand the meaning of synchronicity, or remarkable coincidences in people's lives. This work not only explains the mystery of synchronicity, originally introduced by Carl Jung, but it also shows how to make simple calculations to estimate the chances that coincidences are not due to mere randomness.

• RandyOrion 9 hours ago

One thing that makes me wonder: there were 4 security issues raised, and all of them were automatically commented on and closed by some bot called `pl-ghost` [1][2][3][4]. In the end, only this one [4] was properly handled, and all the bot comments were deleted. You can see the bot comments in another report [5], which is more informative than the one in the OP.

[1] https://github.com/Lightning-AI/pytorch-lightning/issues/216...

[2] https://github.com/Lightning-AI/pytorch-lightning/issues/216...

[3] https://github.com/Lightning-AI/pytorch-lightning/issues/216...

[4] https://github.com/Lightning-AI/pytorch-lightning/issues/216...

[5] https://socket.dev/blog/lightning-pypi-package-compromised

• jackdoe 18 hours ago

I can't wait to have no dependencies.

An extreme example: nowadays, when I make interactive educational apps for my daughter, I just make Opus use plain JS and HTML; from double pendulums to fluid simulations, it works one-shot. Before, I had hundreds of dependencies.

Luckily, with MIT licensed code I can just tell Opus to extract exactly the pieces I need and embed them, tweaked for my use case. So far this works great for hobby projects, but hopefully in the future production software will have no dependencies.

• mandevil 18 hours ago

The problem with this is that now you are solely responsible for managing all of the changes, all of the variation of life. Chrome changed the shape of this API? You are responsible for finding it and updating it. Morocco changed when their daylight saving time takes effect? Now you need to update your date/time handling code. There are a lot of these things that we take for granted because our libraries handle them for us, and with no dependencies you have to do all the work. Not a big deal for a double-pendulum simulator for your daughter that will stop mattering next week, but it is a concern for a company trying to build something that can run indefinitely into the future.

• Aperocky 16 hours ago

> you are responsible for finding it and updating it.

vs the dependency broke something and now you're responsible for working around someone else's broken code.

Honestly, I've seen much more of the latter. Especially nowadays, with every single dependency thinking it's a fully fledged OS because an agent can add 1000 features/bugs in no time. Picking the right dependency, maintained by a sane maintainer, is like digging for potatoes in a minefield.

• zdragnar 17 hours ago

As a general principle, I agree with you that large companies and teams benefit from common runtimes (i.e. libraries and frameworks).

I don't buy the notion of things breaking down over time, though. For "first-party" code that sticks to HTML and CSS standards, and Stage 4 / finished ecmascript standards, the web is an absurdly stable platform.

It certainly used to be that we had to do all sorts of weird vendor hacks because nobody agreed on anything; supporting IE6 and 7 was a nightmare, and BlackBerry's browser was awful. But those days are largely behind us, unless you're doing some cutting-edge, Chrome-only, early-days proposed stuff, or a browser-specific extension, or something else that isn't a polished standard.

Even with timezone changes, you're better off using the system's information with Intl.DateTimeFormat.

• skydhash 16 hours ago

I don't know where the fear of breaking changes in deps comes from; most good projects try to keep their APIs stable, even fast-evolving platforms like the Android and iOS SDKs.

• awakeasleep 16 hours ago

It comes from trying to use Python apps you found on GitHub before uv tool install was a thing

• zelphirkalt 14 hours ago

In the Python ecosystem making software with reproducibility in mind was a thing before the advent of uv. Some earlier options include Pipenv and Poetry. I used Pipenv already some 6y ago to achieve that and later switched to Poetry.

I think devs who didn't care back then won't care in the future either, and will still be running around with a requirements.txt file in 10 years.

• dualvariable 17 hours ago

In companies, though, you often wind up with three-plus massive dependency trees in your software to handle the same problem, because people added the new hotness without deprecating the old stuff. You also find dependencies that are much heavier than necessary for the actual task at hand, because the developer was also solving the problem of needing that dependency on their resume. And then there are the relatively tiny dependencies for fairly solved problems, like left-pad, which don't really require deps at all; there you can accept the maintenance burden, because not everything is an abstraction layer over Chrome.

So if you just need to do something simple, like firing off a compute-heavy background task and getting a result when it is done, you should probably just roll your own implementation on top of the threading API in your language. That will probably be very stable. You don't need a massive background task orchestration framework.
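
For example, in Python the whole "framework" can be the stdlib executor (a sketch; for CPU-bound work under the GIL you'd likely swap in ProcessPoolExecutor):

  from concurrent.futures import ThreadPoolExecutor

  def heavy_task(n: int) -> int:
      # stand-in for the actual compute-heavy work
      return sum(i * i for i in range(n))

  pool = ThreadPoolExecutor(max_workers=4)      # one shared pool for the app
  future = pool.submit(heavy_task, 10_000_000)
  # ... do other work ...
  print(future.result())                        # blocks until the task is done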

People might object that the frameworks will handle edge cases that you've never thought of, but I've actually found in enterprise settings that the small custom implementations -- if you actually keep them small and focused -- can cover more of the edge cases. And the big frameworks often engineer their own brittle edge cases due to concerns that you just don't have.

So anyway, it isn't as simple as "dependencies are bad" or "dependencies are good"; every dependency has a cost/benefit analysis that needs to go along with it. And in an enterprise, I'd argue that if you audit the existing dependencies you will find way too many that should be removed or consolidated, because they were added for speed of initial delivery and greenfielding. When you accumulate too many of those dependencies, the exposure to their supply chains, the need to keep them updated, the need to track CVEs in them, and the need to fix code to use updated versions -- all without the direct ability to bugfix them -- combine to produce an ongoing tax of either continual maintenance or tech debt that will eventually bite you hard.

• jackdoe 14 hours ago

> The problem with this is now you are solely responsible for managing all of the changes

We seem to greatly overestimate the amount of code needed to do something.

For example, there are billions of lines of code between me pressing a key and you seeing what I wrote. But if we were to make a special program that communicates via IPv6 and ICMP, written for a hazard3 pico2350 with a wiz5500 ethernet breakout, the whole thing, including the C compiler to compile your code (which could very well outperform gcc -O3), would be 5-6k lines of code, including RA, even barebones SPI drivers, and a small preemptive OS.

So, it is not unreasonable to manage all of those changes.

• RALaBarge 11 hours ago

I think we are stuck with LLMs. They are already at a place where they can find these issues. They can access RSS feeds. You could cron an agent to check whether you are pwned as frequently as you want, at almost zero cost. And when you do ingest libraries, keep a list of them and their versions; that can help as well.

• solid_fuel 17 hours ago

And of course, you will go over every line of code that Opus produces with the same scrutiny we expect of open source maintainers, right? Right?

I'm going to go publish some MIT-licensed remote access code and get that into Opus's training data.

• fastball 6 hours ago

Correct (and secure) code is possible and readily doable. It is unclear if supply chain attacks can ever be fully mitigated.

• Aperocky 18 hours ago

I am torn, because I like Rust over Go, and Rust is better from an LLM perspective. But the dependency philosophy of Rust is basically a security black hole, whereas Go's is much better.

• kblissett 18 hours ago

I have found Go is an amazing language for LLMs. What do you prefer about Rust?

• Aperocky 18 hours ago

A portion of the context and vibe protection that would otherwise be required is exported to the compiler. In addition, Rust binaries are generally smaller, both in terms of size and footprint.

• Imustaskforhelp 17 hours ago

I sort of agree with you, but for me, I prefer golang because I believe that for most use cases it fits perfectly (I run a $7/yr VPS with 500MB of RAM on Debian and use golang binaries).

Cross-portability and compilation, and its very-few-dependencies/stdlib approach with simplicity - I just really love golang.

I built[0] a cuckoo.org alternative at https://fossbox.cloud which has only one dependency, Gorilla WebSockets, aside from the stdlib.

If I were to rewrite it in rust, I couldn't say the same. Golang's stdlib is that good.

My point is, although I understand Rust can have some advantages in other areas, the advantages of golang outweigh Rust's for me by a very high margin. There is also the factor that I just feel more comfortable reading golang code and picking through it than Rust.

It is my opinion that you can go a much longer way with a garbage collector than people imagine, even on constrained systems. Unless absolutely necessary, worrying about GC feels like premature optimization in many instances, which is worth thinking about.

[0]: More like (vibecoded?), as this is just a single-file main.go which I prompted out of Gemini 3.1 Pro some time ago. It was just a prototype that works surprisingly well, which I made because I was using the cuckoo website with friends but it kept lagging.

• Aperocky 16 hours ago

Well, I have almost the same story: my agent harness is a 5MB Rust binary that runs as a systemd service and occupies 10MB of memory after days. This handles all communications between 100+ agents.

Now, I think Go would come close to these numbers, so in reality there might not be a real difference. But a leak somewhere is far more likely, especially as these are mostly vibe coded (my binary has multiple pieces of functionality).

The biggest advantage that Go has over Rust is the stdlib and an ecosystem that doesn't depend on 100 packages. And maybe that will be the deciding factor in the future, or someone (I'm getting increasingly itchy for it) will need to reinvent the ecosystem to be less like npm.

• mamcx 18 hours ago

Doesn't vendoring basically copy what Go does?

• Aperocky 16 hours ago

You can trust a single big stdlib more than the 100 dependencies that tokio pulls in at any given time.

• RALaBarge 10 hours ago

Yeah, then you can version-lock changes to one thing post-evaluation, or, even easier as noted above, download the stdlib and host it yourself.

• OtherShrezzing 17 hours ago

I think in the relatively near future we’re going to start seeing sophisticated supply chain attacks into language model training data.

It should be feasible to design vulnerabilities which look benign individually in training data, but when composed together in the agent plane & executed in a chain introduce an exploit.

There’s nothing technical really stopping that from existing right now. It’s just that nobody has put the effort in yet.

• lacunary 16 hours ago

The develop-test-refine feedback loop for this kind of attack is so long (or expensive) that it seems likely to limit its real world use. Poison training data, wait months? a year? for the model to come out, see how well it worked, refine... or do you see a faster way to iterate?

• OtherShrezzing 3 hours ago

Continual learning is the next major architectural milestone for the frontier labs. That’d reduce the iteration loop to days instead of years.

If your attacker assumes that all or most software will be generated from language models, the time penalty is worth paying.

• v4nderstruck 17 hours ago

well surely Opus would never introduce vulnerabilities into the code so that sounds like the solution.

• 2ndorderthought 16 hours ago

So true. Whenever I run opus I absolutely do not look at the code at all. That's for luddites.

• gib444 18 hours ago

Your LLM isn't a dependency?

• stronglikedan 17 hours ago

It's a tool for building things. I can build those things equally well with or without it, maybe saving some time with it (arguable), but I'm not dependent on it.

• gib444 7 hours ago

No, I'd posit that the average developer who pulls in hundreds of deps, and now uses LLMs to effectively replace them, can not build things equally well without either.

Of course most of us devs lie to ourselves, out of ego, that pulling in deps is /just/ a time-saving measure, when we know there are some incredibly high quality libraries and frameworks that we don't have the skills or experience to replicate to the same level.

• contingencies 9 hours ago

Love it :) Excellent quippy summary of the zeitgeist. Added to https://github.com/globalcitizen/taoup

• mixedbit 17 hours ago

When I was doing the Fast.AI deep learning course, I was surprised by the number of Python dependencies machine learning projects bring in. Web front-end projects were always considered very heavy on third-party dependencies, but to me the machine learning ecosystem looks much more entangled. In addition, unlike web development, which is considered security critical and has over the years accumulated a lot of wisdom and good security-related practices, machine learning development looks much more ad hoc, with many common software engineering practices not applied.

For example, at that time, one way to distribute machine learning models was via Python pickles, which are executable objects with no restrictions built in. Models in this format could do anything on the computer where the model was imported. Such an early 'wild west' ecosystem can definitely make security compromises easier and the resulting supply chain attacks more common.
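
To make it concrete: unpickling will call whatever a `__reduce__` hook returns, so loading a model file is equivalent to running a script:

  import os
  import pickle

  class NotAModel:
      def __reduce__(self):
          # unpickling calls os.system with this argument
          return (os.system, ("echo arbitrary code ran at load time",))

  payload = pickle.dumps(NotAModel())
  pickle.loads(payload)  # load == execute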

• zelphirkalt 14 hours ago

There are many people in that ecosystem who are not primarily software engineers. Some just learned some coding along the way. Some are mathematicians. Some are devs who are AI-drunk or something. Some have the mindset of "code doesn't matter any longer; if it works, it works". For many, proper dependency management is just a chore that they don't want to care about. These things come together in various ML projects, even though ML projects should be amongst those most focused on reproducibility.

• mkeeter 19 hours ago

A repository search shows 2.2K repos with the text "A Mini Shai-Hulud has Appeared", all created within the past day:

https://github.com/search?q=A%20Mini%20Shai-Hulud%20has%20Ap...

• rhdunn 19 hours ago

The repository names all look like two terms/words from Dune (Harkonnen, mentat, ornithopter, etc.) followed by a number. This would indicate that an account (possibly a GitHub auth/actions token) has been compromised and then used to create the repositories.

• ramon156 3 hours ago

https://github.com/tinin46

this account seems to store a lot of keys; not sure what they're for

• avaer 15 hours ago

Why can't GitHub get on the case and just block any repo where the README matches the regex? I thought they'd have learned their lesson the last time it happened.

This malware isn't even trying. Then again it's Microsoft so they're not even trying either.

• eddythompson80 14 hours ago

6 minutes later an HN submission "GitHub blocks your account if you mention X in the README" with a top comment "This is absurd, are they just doing regex matching to check for malware?"

• bbor 8 hours ago

1. This happened less than 24 hours ago.

2. This is just one of the four techniques the worm uses to phone home.

• sgskinner 11 hours ago

“Some people, when confronted with a problem, think ‘I know, I’ll use regular expressions.’ Now they have two problems.”

• spate141 19 hours ago

what's this all about?

• progbits 19 hours ago

Malware uploading the credentials it managed to steal

• foo12bar 19 hours ago

FTFA

> The attack steals credentials, authentication tokens, environment variables, and cloud secrets, while also attempting to poison GitHub repositories.

• CodeAndCuffs 19 hours ago

That doesn't really explain why a bunch of GitHub repos were created as well, though.

If I remember correctly from Shai-Hulud 2, the attacker exfiltrated creds by posting them in public GitHub repos with minor, easily reversible encoding. I believe it was double base64 last time.

I'm assuming the logic there is that every security researcher and company is going to pull and scan those creds for their own stuff and their clients' stuff, so the attacker is just 1 of N people downloading them, as opposed to trying to send them to their own machine directly.
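
If it really was double base64, "recovering" the loot is a one-liner:

  import base64

  secret = b"AKIAEXAMPLEKEY123456"                    # what the worm grabs
  blob = base64.b64encode(base64.b64encode(secret))   # what lands in the repo
  assert base64.b64decode(base64.b64decode(blob)) == secret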

• arsome 18 hours ago

I think it's more about convenience and bypassing filters: developers are already logged in to GitHub, already have access to create repos and publish code, and firewalls will allow it. Even fancy HIDS systems will think the git push is rather normal.

If they have a clue, the attacker still will not download that without using a botnet tunnel or Tor at a minimum.

Note though that these credentials aren't even encrypted using some lightweight ECC to prevent others from capturing them; they're posted in cleartext. Embarrassment might be part of the point.

• bbor 8 hours ago

With HN etiquette in mind, I must make an exception: this is a case where skimming the first parts of the article would help a lot!

The public repo path is just one of four parallel paths, with the goal of getting around any barriers:

  The exfiltration component shares its design with the "Mini Shai-Hulud" mechanism from their last campaign, using four parallel channels so stolen data gets out even if individual paths are blocked.
• auraham 17 hours ago

This week I was wondering whether using uv for managing Python versions is a good idea.

From their website [1]

> Python does not publish official distributable binaries. As such, uv uses distributions from the Astral python-build-standalone project. See the Python distributions documentation for more details.

It points to this GitHub repo https://github.com/astral-sh/python-build-standalone which mentions this other link https://gregoryszorc.com/docs/python-build-standalone/main/r...

If I understand correctly, the source code for building Python is not fetched directly from python.org. I'm not so sure how secure that is.

I have the same concern about asdf [2]. However, it uses pyenv [3], which, I think, feels more official.

Can someone clarify this? Which tool is better/more secure for installing python: uv or asdf?

[1] https://docs.astral.sh/uv/guides/install-python/

[2] https://github.com/asdf-community/asdf-python

[3] https://github.com/pyenv/pyenv/tree/master/plugins/python-bu...

• woodruffw 16 hours ago

> If I understand correctly, the source code for building Python is not fetched directly from python.org. Not so sure how secure is that.

python-build-standalone fetches CPython sources directly from python.org[1]. I don't even know where else we would get them from!

[1]: https://github.com/astral-sh/python-build-standalone/blob/a2...

• auraham 16 hours ago

Thanks for pointing that out.

• bbor 7 hours ago

I'm really not worried about `uv` and `cpython` -- their processes are robust, their response times fast, and (now) their funding significant

I'm worried about, say, `mdformat` (a widely used formatter mostly maintained by one person in their spare time), not to mention some super-specific dependency that hasn't been updated in years and is 3 levels deep in your dep tree. I really don't want to pin & manually approve every single update for an app that's under active development, but it's beginning to look like that's mandatory for any serious app.

In the meantime, I've gotta go get my API keys out of my unencrypted `.env` files! Getting burned on a large, consumer-facing webapp would be embarrassing but logical, but losing hundreds to thousands of dollars because of some indirect dependency of some silly one-off demo repo that just happens to be on the same host and system as my `.env`s... oof.

Anyone know if OAI or Anthropic will refund you if you get your keys stolen like this? Or is it user error?

• throawayonthe 17 hours ago

I mean... uv is already a binary you run on your computer to manage Python binaries, packages (and any binaries those bring), system-wide tools, etc.; how much does it change whether they build the Python binaries or someone else does?

• auraham 16 hours ago

Both uv and asdf can be compiled from source. I prefer that way.

• nrengan 16 hours ago

Most of my pip installs come from Claude Code suggesting them now and me just hitting enter. The model was trained months ago, so it has no clue what got compromised this week. We've built the worst possible filter for "is this package safe right now".

• throwatdem12311 12 hours ago

Stop blaming the LLM for your laziness and lack of due diligence.

• zarzavat 8 hours ago

Indeed, I also use LLMs to suggest dependencies but:

- I ask the LLM for multiple options

- I tell it what I need and what I don't need

- I then look at the packages it has suggested. Sometimes LLMs suggest unmaintained packages with 5 downloads a month just because it came at the top of a web search.

- if it's not a very well known project, I look at the code, I have received vibecoded dependency suggestions before that don't even function

LLMs are useful resources for "getting the pulse of the ecosystem", but just pressing enter is crazy.

• throwatdem12311 8 hours ago

exactly

• moritzwarhier 16 hours ago

What filter?

You say you rely on CC to suggest software to install from the internet, and then you install it.

I haven't heard anyone suggest CC or any LLM as a "filter" for "is this package safe right now", and it seems like a very bad heuristic to me, not least for the reason you gave.

• nrengan 15 hours ago

Well, people weren't checking CVEs before pip install before CC either; CC just scaled the habit to a larger audience at a faster cadence. The blast radius for day-zero compromises is what changed.
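
(To be fair, the CVE half has tooling now; PyPA's pip-audit checks an environment or a requirements file against known advisories. It just can't catch a release compromised this morning, which is exactly the blast radius problem.)

  pip-audit                      # audit the installed environment
  pip-audit -r requirements.txt  # audit a requirements file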

• vovavili an hour ago

This is easily circumvented by not pressing Enter.

• nulltrace 13 hours ago

Stale training data is part of it. But even a current model can't tell what setup.py is going to run on your box. Nothing actually inspects the package before it executes. You'd want something that pulls the metadata and checks what hooks are in there before anything runs.
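
Even a crude version of that is scriptable; a toy sketch that just lists which files in an already-downloaded sdist can carry install-time hooks:

  import sys
  import tarfile

  # files that can run or configure code at install time in an sdist
  HOOK_FILES = {"setup.py", "setup.cfg", "pyproject.toml"}

  with tarfile.open(sys.argv[1]) as sdist:  # e.g. somepkg-1.2.3.tar.gz
      for name in sdist.getnames():
          if name.split("/")[-1] in HOOK_FILES:
              print("install-time hook candidate:", name)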

• ashishbijlani 13 hours ago

Built Packj [1] to do exactly this.

1. Packj (https://github.com/ossillate-inc/packj) detects malicious PyPI/NPM/Ruby/PHP/etc. dependencies using behavioral analysis. It uses static and dynamic code analysis to scan for indicators of compromise (e.g., spawning of a shell, use of SSH keys, network communication, use of decode+eval, etc.). It also checks several metadata attributes to detect bad actors (e.g., typosquatting).

• BrenBarn 16 hours ago

By "the worst possible filter" do you mean "hitting enter when claude tells you to"?

• throwawayqqq11 16 hours ago

"Sandbox this project before you make no mistakes."

• ramon156 2 hours ago

I was actually about to run karpathy's autoresearch tool. Seemed like that one was safe.

https://github.com/karpathy/autoresearch

• throwatdem12311 12 hours ago

Claude Code updates almost every day, sometimes multiple times.

One of these days Anthropic is going to be compromised and we’re all gonna be f*cked.

• woodson 10 hours ago

Not if one is running it in a non-privileged vm/container with restricted network access. But everything is YOLO these days.

• CoastalCoder 2 hours ago

Forgive the tangent, but I'm just starting to learn about using AI for coding, and getting a safe sandbox is one of my next steps.

Any suggestions for a vm/container setup that works on a Linux host, provides the safety net you describe, and is still capable enough to try out all these things that people are talking about?

• iqihs 5 hours ago

already priced into Polymarket too i bet..

• gcapu 18 hours ago

On GitHub, I saw this message from April 20, and I’m a bit confused.

"deependujha hi @thebaptiste, thanks for inquiring. Release of 2.6.2 is blocked due to some internal reasons. Will notify once release is made. "

I'd hate it if they knew of the problem that long ago and didn't warn until now. If someone has more info and can clarify I'd be thankful.

https://github.com/Lightning-AI/pytorch-lightning/issues/216...

• andymcsherry 14 hours ago

Andy from Lightning here. The malicious packages were published today at 12:45 PM UTC to PyPi. Before that, there were no affected distributions, and we were unaware of any leak. The original release on Github did not contain the issue, but we have taken it down to prevent any confusion.

• mil22 17 hours ago
• gcapu 17 hours ago

I appreciate the tip, but your response has nothing to do with my question

• achandra03 19 hours ago

Bless the Maker and His water.

• upupupandaway 18 hours ago

Not a security guy here. How did the dependency get compromised, exactly? Did they submit a PR into the main repo at github and it was approved by the maintainers? Or just host compromised versions in other mirrors?

• andymcsherry 18 hours ago

Andy from Lightning here. The malicious code was not submitted to the main repo at Github. It appears our PyPi credentials were leaked and compromised packages were published directly there for versions 2.6.2 and 2.6.3

• lostmsu 16 hours ago

I vaguely remember PyPI starting to require 2FA, at least for logins, about a year and a half ago.

If they haven't yet, they should require a second factor for publishing as well.

• caycep 19 hours ago

Just to clarify: it's not PyTorch, it's the library from this Lightning AI company?

• mort96 18 hours ago

Oh shit I had assumed PyTorch Lightning was affiliated with PyTorch. Not a great name for an unaffiliated third party thing.

• lostmsu 19 hours ago

Yes

• JPKab 15 hours ago

Yep. Lightning is a blatant, shameless rip-off of Jeremy Howard's FastAI library, BTW.

• brahman81 18 hours ago

Thanks to the community for reporting the security issues with PyTorch Lightning 2.6.2 and 2.6.3 - we're actively looking into it.

In the meantime, please use 2.6.1 until we publish 2.6.4.

For more details: https://github.com/Lightning-AI/pytorch-lightning/security/a...

• bandrami 11 hours ago

It's crazy to me how, just a year or so after xz, people were willing to say "sure, I'll take this giant black box so unauditable that even its creators don't really know what's in it, and run all my data through it".

• CoastalCoder an hour ago

I'm guessing it ultimately comes down to the legal / financial / career incentives.

My impression is that the market currently rewards visible software functionality with little concern for invisible risk.

If we flipped the script, and investors were personally, criminally, and civilly liable for computer breaches, I imagine this problem would disappear almost overnight.

• notatallshaw 18 hours ago

> Running pip install lightning is all that is needed to activate

FYI, pip added cooldowns in 26.1:

  * https://discuss.python.org/t/announcement-pip-26-1-release/107108
  * https://ichard26.github.io/blog/2026/04/whats-new-in-pip-26.1/
To use:

  * CLI: pip install --uploaded-prior-to=P1D ...
  * Env Var: PIP_UPLOADED_PRIOR_TO=P1D pip install ...
  * Config: pip config set global.uploaded-prior-to P1D
• riteshnoronha16 10 hours ago

Even if your package manager does not support it, you can implement cooldowns across ecosystems if you generate SBOMs: https://www.interlynk.io/resources/cooldowns-with-sboms

• 0fflineuser 18 hours ago

The nixpkgs package from unstable seems to be infected, as it's 2.6.2: https://search.nixos.org/packages?channel=unstable&include_h...

• minkowski 18 hours ago

Nixpkgs uses the GitHub source, not the PyPI dist, for lightning; unclear to me from the advisory whether this should also be considered compromised.

• andymcsherry 18 hours ago

Andy from Lightning here. Thanks for pointing that out, we are updating the CVE. Only the versions from PyPi were affected. The malicious code was not checked into the GitHub repository

• deforciant 18 hours ago

GitHub is fine; the package was only pushed to PyPI directly.

• c16 5 hours ago

How can we prevent this? Can we not start forcing MFA for git pushes? Use your fingerprint, MFA, or yubikey - reject unsigned commits on public repos?

• ks2048 18 hours ago

I'm curious what they do with various kinds of credentials if they get access.

I can see trying to steal crypto, but what do they do if they get some AWS credentials? Try to run some crypto mining instances? Try to use your account for other types of crimes? Or is it mainly trying to steal data and then ask for ransoms?

• bigfluffydonkey 18 hours ago

It's always crypto. A client got some AWS credentials stolen, and without anyone checking the account, the hacker managed to spin up big EC2 instances across many regions. The bill after a month, as I recall, was around $100K. Since the activity was clearly fraudulent, the bill was eventually forgiven. So remember to lock down your AWS keys' permissions...

• ajb 13 hours ago

When that happened to a former employer AWS was calling us within a day. Worth making sure a real phone number is on there, as that's how they contact you for anything serious (and also if your finance dept decided to change the credit card without telling anyone)

• 9dev 15 hours ago

That, and also: enable the various monitoring and audit features in AWS now, starting with CloudTrail. Nothing worse than being affected by this attack and AWS not having any audit trail available.

• throwa356262 19 hours ago
• csvance 18 hours ago

The decision to run all of my experiments in a monorepo with a single uv.lock continues to be validated. I usually only update it a few times a year. It was pinned at 2.6.1 for lightning \o/

• debarshri 4 hours ago

Anthropic should release Mythos to the general public.

• cushychicken 14 hours ago

I was one of probably eight people who played the Emperor: Battle for Dune RTS game, and I always think of the Fremen character sound bite whenever I see the Old Man of the Desert’s true name invoked:

”…for Shai-Hulud!!!”

• fnoef 17 hours ago

Looks like coding is in a downward spiral towards complete chaos

• SupLockDef 16 hours ago

When I was a kid, we were told to be cautious with third-party dependencies: that code can do anything, and it's a risk to evaluate.

The new generation of YOLO NPM scripters simply don't evaluate the risks. They will even fight back, telling you that it's the way things are done.

In reality, this is the warning we learned back then coming true: the result of mindlessly importing third-party dependencies without thinking.

In other words, the risks were always there; the new "modern way", let's put it that way, just doesn't put in the effort anymore.

• zelphirkalt 4 hours ago

That, plus not being willing to write a few functions oneself, which one could easily do and then not have to add a dependency. But it is also a result of trying to do everything quickly, quickly! and of being pushed to do that.

The more one knows about computer programming, algorithms, data structures, and how things are usually implemented in general, the better one can avoid unnecessary dependencies. It needs the right environment to execute on that, though.

• andrekandre 9 hours ago

  > the result of mindlessly importing third-party dependencies without thinking

tbf, most tech-related corporate environments don't want you to think, just do (KPIs, MBOs, OKRs et al), and this is one of the results
• raverbashing 34 minutes ago

Ironically, the hardest part might be meeting the actual requirements to run the packages, since Python dependencies have been so braindead lately.

• sieve 17 hours ago

I find this constant churn in the software world tiresome. I get it if there is a security update, or if you are building something new: it takes time and a series of updates to reach feature parity at 1.0. But most software is not like that. All these online registries make the problem worse; any random tool installation pulls in 300 different dependencies.

This is why I have been building, for my own usecases, a new language + compiler + vm that is completely source based. The compiler does not understand linking. You must vendor every single dependency you use, including the standard library, so that it makes its way into the bytecode. The register VM itself is a few thousand lines of freestanding C. Any competent programmer can audit it over a weekend.

v1 deliberately keeps FFI (outside of a bounded set of Linux syscalls) out of the current spec, as libc has a habit of infecting everything it touches and I want to keep Vm0 freestanding. The last time I compiled the VM, it produced a 70KB binary and supported a loader with structural verification, the entire instruction set using a threaded interpreter, a simple Cheney+MS GC, concurrency via an Erlang-style M:N scheduler working on a single thread, and 20-odd marshaled functions.

Most software in the world does not need anything more than this. Everyone acts as if they are building the next Google.

• ashishb 16 hours ago

Always run third party code inside a sandbox

• lysace 16 hours ago

Is there some string to recursively grep for to know if you have been infected?

• andymcsherry 14 hours ago

Andy from Lightning here. The malicious file that gets installed has this signature:

  router_runtime.js

  SHA256 5f5852b5f604369945118937b058e49064612ac69826e0adadca39a357dfb5b1
  SHA1 f1b3e7b3eec3294c4d6b5f87854a52471f03997f
  MD5 40d0f21b64ec8fb3a7a1959897252e09
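
A quick way to sweep a home directory against that hash (a sketch; add error handling for unreadable paths as needed):

  import hashlib
  import pathlib

  BAD = "5f5852b5f604369945118937b058e49064612ac69826e0adadca39a357dfb5b1"

  for path in pathlib.Path.home().rglob("router_runtime.js"):
      # any match against the SHA256 above means the machine is affected
      if hashlib.sha256(path.read_bytes()).hexdigest() == BAD:
          print("match:", path)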
• lysace 14 hours ago

Thanks!

• 0xbadcafebee 19 hours ago

something something Safety Requires A Building Code something thing

• csvance 18 hours ago

Shai-Hulud dug my 100 ft trench. Should be OSHA compliant right?

• silverwind 16 hours ago

Maybe now people can stop blaming npm and realize none of these unreviewed package ecosystems are safe.

• rvz 19 hours ago

Shai-Hulud strikes again and continues to turn innocent packages into zombies.

Think twice before pulling in a package, and most importantly, always pin your dependencies.

• pixel_popping 18 hours ago

Yeah, pin the malware :p

• rvz 17 hours ago

Nope. Those on pinned versions don't get the malware.

The attacker has to publish the infected package first; then anyone who hasn't pinned their dependencies can pick it up. With a simple `pip install -U` of an unpinned dependency, they will get the compromised version.
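
Pinning the version still trusts whatever artifact sits behind it, though; pip's hash-checking mode pins the actual bytes. A line like this in requirements.txt (placeholder hash):

  lightning==2.6.1 --hash=sha256:<hash-of-the-artifact-you-vetted>

With hashes present, pip refuses anything that doesn't match, and `--require-hashes` makes the mode mandatory.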

• ramon156 2 hours ago

I think it was a jab at the statement "if I pin the dep, I am safe". How do you know your current code is not compromised? No one reads all the code they run anyway.

• doublerabbit 13 hours ago

Am I the only one who thought that using GitHub links as a dependency source is not a wise thing to do?

Do folks not understand that by doing so, you're enabling modules to maliciously write themselves into your code?

• spate141 19 hours ago

ah shit, here we go again

• 12_throw_away 19 hours ago

this is fine, we are definitely a perfectly normal industry that knows what it is doing

• androiddrew 14 hours ago

Another exploit Mythos didn't find. Isn't the god machine kind of failing us?

• ElectricalUnion 13 hours ago

Forgot to do the various maintenance rituals and prayers of function, so now the machine spirit's disposition is poor.