

These distractions are DANGEROUS. Humans thought this was a good idea and went ahead without thinking about what it entails.
Executives should be forced to drive home after work, so that they can understand what they are demanding.



Considering that each chaebol owns many industries, that money goes right back into the pockets of Samsung. The chaebols effectively include company stores, with their own versions of KFC, Wendy’s, Kroger, Wal-Mart, 7-11, and so forth.
It is the never-ending greed of the rich that makes them dissatisfied with merely getting most of the money. I suspect that even if they got everything, the executives would demand the flesh of the workers - because 100% isn’t enough, it must be MORE.



All sorts: the Epstein Class, corporations like Blackrock or Blackwater, the Heritage Foundation, cults, corrupt government officials, grifters, and so forth. They might not be physically in our rooms, but their influence colors our everyday life, in ways great and small, often beyond our perception.
The fact of the matter is that there are many forms of power and methods of applying them - AI is no different. Like any tool, it doesn’t care how it is used or abused. Humans have to decide whether they wield the power of the tool, and to what end.
A shovel can dig gardens and mass graves alike. A good person tries to prevent whatever causes the latter outcome, and encourage the former.


I want both nuclear power and AI to be commonplace.
Where the latter is concerned, it should be decentralized by law: individual households can own a home server and, in turn, rent or loan their compute to organizations. The reason for this is to limit the power of corporations and force them to abide by the will of ordinary people, rather than letting them hoard technological power to fuck over the government and citizens. The same applies to robots capable of replacing human labor.
We should not reject AI or automation; instead, we should seek to ensure that they can’t be used against the interests of the public good. Mindless rejection just ensures that bad actors will eventually have sole mastery over these resources.


You do realize that you can use an AI model for many mundane things? Accounting, coding, scheduling. That leaves humans free to do human things like socializing and learning. The reason Musk and company are so powerful is that they can use their wealth to delegate tasks away from themselves. Time is a resource, and the wealthy are able to save much of it by not having to do the things that the ordinary person does.


I have 128 GB of DDR4 RAM, a 4090, and a 3060. While certainly not weak, my computer is some generations behind. People, real people, can run a model inside their homes. Provided you limit the context and pick a midrange quantization, you can run a Qwen3.6 35b on a midrange gaming PC.
Given time, we will someday run DOOM Eternal in our pockets, and be able to talk with the demons.
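The "fits on a midrange PC" claim comes down to simple arithmetic: weight size ≈ parameter count × bits per weight ÷ 8. A minimal sketch of that estimate (the function name and the 4.5-bit figure for a midrange quant are my own illustrative assumptions, not official numbers):

```python
# Back-of-the-envelope estimate of a quantized model's weight footprint.
# Parameter counts and bits-per-weight below are illustrative, not official specs.

def weight_footprint_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone, ignoring context/KV cache."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

# A 35B model at a midrange ~4.5-bit quantization:
print(f"{weight_footprint_gb(35, 4.5):.1f} GB")   # prints "19.7 GB"
```

Context (the KV cache) adds memory on top of this, which is why limiting context matters so much on smaller rigs.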


AI isn’t the problem; it is just an excuse to abuse and gaslight people. If AI didn’t exist, some other card would be played.
Instead of destroying the looms, we should take them over and make our own products. AI can be incredibly useful, and might allow cottage industries and smaller communities to become strong enough to contest the powers above us. The big constraints are the affordability of local hardware and the development of sufficiently powerful models.
Things are moving quickly, especially in the local AI space. Two years ago, fitting a 70b onto my hardware was difficult: it was limited to 4k context, could take an hour to produce output, really sucked at calculating numbers, and was censored. Now a 122b can be uncensored, allow for 256k context, output a lengthy response in under two minutes, and be much smarter.
What I am saying is that we shouldn’t reject the power of AI. We should use it ourselves and become the equals of the elite. If we foolishly abandon power, the wealthy will just continue bullying us.


No, the problem with keeping people in office is that they get to establish strong networks of interests. By disrupting this and adding social uncertainty from unfamiliar people, we make it harder for corruption to become baked into society. Corruption is very much a social behavior that relies upon trust - the trust that the other guy won’t snitch on you if the horse-trading is profitable.
We make it harder to establish that trust among thieves by swapping people out often.


Term limits are not the solution; they are part of a solution. Term limits alone wouldn’t work without other parts of the political process being reformed. For example, First Past the Post voting makes it much harder for independent candidates to get a fair shot.
The United States needs huge reforms across the board, because many of our processes were built 250 years ago.


Nah. Term limits help prevent the creation and perpetuation of “good old boy” clubs. Quite honestly, it is better to have an inexperienced but well-meaning rando than an expert who makes a habit of lining their pockets with their experience.


The lesson I am learning: the elderly should have the congressional keys taken away from them, before they can drive the nation into stupid.


Even if the US were trustworthy, this would still be a prudent move. Good relations between nations are not forever, and it reduces the opportunities for third parties like Palantir to exfiltrate data.


How much you wanna bet that Trump will somehow cause the entire deposit to catch fire?


Always was. 🧑‍🚀 🔫 👩‍🚀


I know what Israel is: An enemy to people in general. No different from the Nazis.


Speed depends on how much of the model is in VRAM, and on whether the model is dense or MoE. The RAM’s benefit is more about being able to run the model in the first place. In any case, a dense Qwen3.6 27b would take up about 27-33 GB of memory, plus whatever context size you set.
The upcoming implementation of MTP will increase the size of models, but in exchange, they will also run faster. From the looks of it, about a 30% boost for dense models, and a bit less for Mixture-of-Experts varieties.
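The "plus whatever context size you set" part can be estimated too: the KV cache grows linearly with context length. A rough sketch, where the layer count, KV-head count, and head dimension are placeholder assumptions rather than any real model’s spec:

```python
# Sketch: estimating how much extra memory the context (KV cache) adds.
# Architecture numbers used below are placeholders, not real model specs.

def kv_cache_gb(n_layers: int, context_len: int, n_kv_heads: int,
                head_dim: int, bytes_per_elem: int = 2) -> float:
    """K and V tensors: 2 * layers * tokens * kv_heads * head_dim * element size."""
    total = 2 * n_layers * context_len * n_kv_heads * head_dim * bytes_per_elem
    return total / 1e9  # decimal gigabytes

# Hypothetical 27B-class dense model with grouped-query attention, fp16 cache:
print(f"{kv_cache_gb(n_layers=60, context_len=32768, n_kv_heads=8, head_dim=128):.1f} GB")
```

The point of the sketch is the scaling: doubling your context setting roughly doubles this number, independent of the weight size.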


I like AI, and encourage its adoption by people. However, I am 100% certain that AI hasn’t prevented school shootings. In fact, I will go as far to say that the Trump administration has solely used it to harm people, schoolchildren included. We got bloodied backpacks proving that.
Kash stinks.


You can use something like KoboldCPP on Linux, which lets you combine RAM and VRAM to run a model. O’course, it’s not as fast as pure VRAM or the Mac approach, but it is an option. I use my 128 GB of RAM with some GPUs for running models.
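The RAM+VRAM split works by putting some of the model’s layers on the GPU and running the rest from system memory. A toy sketch of the sizing logic, with made-up model sizes (real per-layer sizes vary; this just illustrates the greedy fit):

```python
# Sketch of the split-memory idea: put as many layers in VRAM as fit,
# and run the rest from system RAM. All sizes here are illustrative assumptions.

def gpu_layers_that_fit(model_gb: float, n_layers: int,
                        vram_gb: float, reserve_gb: float = 2.0) -> int:
    """Greedy estimate, treating layers as roughly equal-sized slices of the model."""
    per_layer_gb = model_gb / n_layers
    usable = max(vram_gb - reserve_gb, 0.0)  # leave headroom for context/buffers
    return min(n_layers, int(usable // per_layer_gb))

# e.g. a ~40 GB model with 80 layers on a 24 GB card, 2 GB held in reserve:
print(gpu_layers_that_fit(40.0, 80, 24.0))   # prints 44 (remaining 36 run from RAM)
```

In practice you’d pass a number like this to your runner’s GPU-offload setting and adjust up or down until it stops running out of VRAM.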


Presumably, the same guy who botched Elon’s junk. It must grate, seeing this doctor whenever visiting the Orange House.