
The current trajectory of large language model (LLM) development is going to flame out. As models grow larger, they demand exponentially more data, energy, and computing power. Yet despite the massive investment, the performance gains from this scaling are already showing diminishing returns. Training a state-of-the-art AI model can consume megawatts of electricity, while merely running one often requires hundreds of watts per user session. Compare this with the human brain, which chugs along on a measly 20 watts, barely enough to power a dim light bulb!
