Moore's law was originally a doubling every two years and has basically kept pace, although it may have slowed to every three years in just the last few years. If the speed of AI progress continues at this trend for half a century, and then at merely two-thirds this speed, then we're hitting AGI for sure.
I can't find this analog SF graph you're talking about and don't see how it's related to this prediction. We knew about brute facts of physics that would prevent FTL travel back then and know of no fact of physics that would stop AI from gaining human-level general intelligence. You think we're at the looking-at-cars-and-predicting-FTL level, when we seem much more at the looking-at-horses-and-predicting-cars-given-knowledge-of-engines level. Was it possible that engines were just too heavy and a functional car wasn't really possible? Sure, that could have happened. And horses still outclass cars at maneuvering in some cases. But we did get the car, and cars do dominate horses in most ways.
We had that one anti-work guy around a long while back, any chance they're related? I don't remember much about that guy really but the philosophy seems similar.
You write like you're an AI bull, but your actual case seems bearish (at least compared to the AI 2027 or the Situational Awareness crowd).
I was responding to a particularly bearish comment and didn't need to prove anything so speculative. If someone thinks current-level AI is cool but useless, I don't need to prove that it's going to hit AGI in 2027 to show that they don't have an accurate view of things.
I think this is true too, in a decade. The white-collar job market will look quite different and the way we interact with software will be meaningfully different, but like the internet and the smartphone I think the world will still look recognizably similar. I don't think we'll be sipping cocktails on our own personal planet or all dead from unaligned super intelligence any time soon.
well yes, that world is predicated on what I think is a very unlikely complete halt in progress.
It's sensitive to context and prompting. When having it write bash scripts, have you considered just dumping the man pages into the context? Don't bother actually formatting them; just dump anything that could possibly be relevant into the prompt.
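A minimal sketch of what that might look like, assuming you collect the context into a file to paste into the prompt; the command list and file name are illustrative, not any particular tool's workflow:

```shell
# Hypothetical sketch: gather the plain-text man pages for whatever commands
# the script will use, concatenated into one file to dump into the prompt.
# The commands listed here are just examples.
ctx=$(mktemp)
for cmd in grep sed; do
  # col -b strips the backspace overstriking man uses for bold/underline
  man "$cmd" 2>/dev/null | col -b >> "$ctx"
done
echo "context file: $ctx"
```

No cleanup beyond `col -b` is usually needed; models cope fine with raw man-page text.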
There wasn't ever a mechanism by which cars would start improving themselves recursively once they were able to break 100 mph. There were very good laws-of-physics reasons in the '60s to assume we couldn't, even in theory, get to FTL. No such reasons exist today. You're not fighting the prediction that cars will be able to go FTL; you're fighting the prediction that maglev trains would ever be built.
Ok, taking all this in good faith then I think the only real shot at overcoming deprivation is by pushing forward. Continue expanding productivity through capital investment. Make more and more things too cheap to meter. Ownership isn't the source of deprivation really, only the shape it takes, it's scarcity that would need to be defeated. In practice, at least in the west, we've basically defeated scarcity on things like foodstuffs. Our poor suffer from obesity and not really hunger. Our poor mostly don't lack for running water, clothes on their backs, even shelter for most of them although I do have particular changes I'd like to see on this subject.
The chronic homeless wither not because society is unable to house them but because our sense of individual freedom won't allow us to commit those who can't function without aid. This example muddies the issue. The deprivation here might appear to be proximately caused by ownership of homes; trivially, if homeless people could just go in and occupy anyone's home, then they would be cured of their homelessness, but this wouldn't really solve the underlying issue. I don't know how you could prevent self-imposed deprivation, or at least how you could do so without forfeiting freedom.
Cool! Then notice that an ability is not a right, and let's keep them straight. I'm talking about a preemptive principled right to universally deprive. I can deprive every single person in the whole world of a $100 bill by burning it to ash. Totally my right as an owner (ignoring the complication that money is not something we "own," strictly speaking), but impossible if I ain't got no matches. Right vs. ability.
Alright, then I revise what I said to "My conclusion is that the right to deprive is probably necessary for any social system that scales past around the Dunbar number and depending on how you operationalize "deprive" maybe far below that number."
Is this the world you want your loved ones and great- and great-great offspring to live in? Is this or something resembling it as good as you want it to get? My answers to those are resounding NO FUCKING WAY! Settling for better-than-worse to avoid the possibility that you might break something by attempting good-as-we-want has never made sense to me.
I do want things to improve. I observe the history of society and see that as we build out new technology and capital infrastructure we increase abundance and things get better. I would like to separate the concept of "things being better" into things made better systemically and things made better by material progress. We don't need to change the system for things to be made better by material progress. It's genuinely incredible how much better things have been made by material progress. I don't have to worry about infections. I can spend a Wednesday evening relaxing in a comfortable chair, listening to tunes on high-quality wireless headphones, eating good food in a large air-conditioned house, responding to people on the internet. I am the envy of kings of old. I'm more skeptical about things being made better by radical, systemic changes.
Still, I would like things to improve. I'm never sure if I should call myself a liberal or a conservative. I'm freedom-loving and optimistic. I think if we mostly leave people alone and minimally adjust the system, things will simply get better over time. So I oppose rash and poorly thought-out changes to the system. You could say I conserve the system. I'm not opposed to all change; in fact, I fiercely support some changes and updates as the material conditions change. But I find radicalism off-putting, ungrateful, pessimistic, and shortsighted. You're not only risking what good we have, you're risking the good that the current system will produce if we only allow it to. You may see ownership as a rotten board of a rotting house, but I see it as a vital component of a prosperous and growing society. So from my perspective it really is on you to explain why and how we should get rid of it, or I'm going to default to declining. If that's conservative, then I am a conservative. If it's madness, then I am mad.
It's been known since the 80s that the worst thing you can do in a brainstorming session is criticize the ideas that arise instead of accepting and exploring them. (de Bono's Serious Creativity is great on this point.)
I'll note that this is the culture war thread; we're here to discuss the culture war. The default valence anyone will approach any underspecified idea with is that there are culture war/political implications to what you propose. I know you directly said in the OP that you were looking to brainstorm, and it can be exhausting to have to overly signal that you're not advancing any particular objective, but your OP would have been much better received if you had put some effort into making it clear that you didn't have an axe to grind.
Have you actually used the latest tooling? What tasks have you actually had it try? This seems incredibly unlikely to me.
If there were lots of natural creatures casually traveling around at light speed through mere evolution, those predictions would have been much better founded. It seems like quite the unfounded prediction, having witnessed LLMs rapidly surpass all predictions with the pace not appearing to slow down at all, to assume it's going to stop right now. Which kind of must be your assumption if you think we aren't going to hit AGI. It rather seems like you're declaring that automobiles will never compete with horses because of how clunky they are. We're at the horse-vs-car stage, where cars aren't quite as maneuverable or fast as horses and maybe will just be a fad.
Sounds like this should be easy then as we have seen some people who are smarter than others.
Speaking of work, what is/was your line of work? It may just be an engineer's mindset, but when I hear a critique of some fundamental part or tool I'm using, I have two concerns.
- is this critique true?
- if it is true, what is the alternative, and is it better than the downside being put forward?
Part 2 is pretty important, because if no alternative is actually better than the tool itself, then step 1 is pointless. If there is not an actual alternative to ownership, then why should I care about your critique of it? It's like putting forward a critique of how much trouble it causes that humans must excrete waste. You can say tons of bad things about our need to excrete waste: it smells, we must do it at inopportune times, and its processing requires much effort. But as this practice cannot be eliminated, we must make peace with it, and the infrastructure and sewers must be built, damn the cost.
Ownership means people must be deprived of some things. It's not alone in that downside. The need to breathe oxygen and the inability to survive at extreme levels of pressure deprive every human of a safe tour of the Titanic wreckage. As humans, being deprived of things is just something we have to accept unless we can find a better alternative. We will probably never overcome the deprivation of our ability to walk on the surface of the sun, whatever one might call the surface of a giant nuclear explosion.
I think a lot of the frustration you're seeing in response here is that the ball seems to be in your court on this topic but you refuse to acknowledge that and instead insist that the ball is in our court. You're proposing some pretty radical interpretations of society and then refusing to elaborate in anything but vagueness.
That you and I mean the same thing by "ownership".
I'm perfectly willing to accept your definition of ownership for the sake of conversation. It's just a word. If we get to a point where I don't think you're using it in a consistent way, or you're trying to garner strength from a connotation ownership has that isn't present in your definition, I'll let it be known that we differ.
That in a world where ownership has been abolished, there will still be factories and factory workers.
Sure, this is a pretty important thing. If you're proposing we collapse all of society and return to monkey or whatever I'd like to say straight up that I have no interest in giving up modern conveniences. I think society as a whole is pretty great and produces many wonders. If this is what you are proposing it would save us all a lot of time if you came out and said it. Then you could defend that position and maybe say something interesting. But it's pretty unsatisfying to have to guess at what you're even talking about.
You're welcome to list the assumptions I'm making.
I would greatly prefer you to list these. That I don't actually know what assumptions you're making is the problem here. You seem to think it's some kind of virtue that you're minimally engaging. It's not. It makes discussion practically impossible. The totality of what I know about your position is that you believe ownership to be unjust and, now, that you also think work is bad.
You should read Bob Black's awesome little book, The Abolition of Work to stretch your mind a bit, if you haven't already read it.
Just read it. It just seems like more unworkable fancy. His view of pre-industrial society is rose-tinted, and his proposal for an alternative, which I'll at least credit him with putting forth, is pure fantasy. I understand it's satisfying to say you don't like having to work for a living, and this kind of thing can feel cathartic to imagine, ideally with friends while passing around a joint in your early twenties, but it's just nonsense. No, we are not going to be able to spontaneously organize society such that the waste gets handled joyously by small children awarded medals for doing a good job. No, we are not going to leave it up to people's whims to accomplish necessary jobs like providing us with food or maintaining our buildings and infrastructure. No, war will not be abolished because of this slick new idea where we all just chill out; war over resources is older than humanity, and the monkeys and apes do it.
If your solution to some problem relies on "If everyone would just…" then you do not have a solution. Everyone is not going to just. At no time in the history of the universe has everyone just, and they're not going to start now.
Beyond even the unfeasibility of his solutions, I find something spiritually dismal about them. This yearning for a dead past and uninterest in further progress. I find it frankly pathetic. It is the attitude of a stoner with arrested development. A society of Bob Blacks would never explore the stars, would not have sent a contingent to the moon, would not have even ever come down from the trees. I welcome him and those who think like him to find their fellows, move into some still-remaining stretch of wilderness, and live life as they wish.
Thanks for your thorough reply!
Yes and no. Clearly, things are better than even three years ago with the original release of ChatGPT. But, the economic and practical impact is unimpressive. If you subtract out the speculative investment parts, it's almost certainly negative economically.
And look - I love all things tech. I have been a raving enthusiastic nutjob about self-driving cars and VR and - yes - AI for a long time. But, for that very reason, I try to see soberly what actual impact it has. How am I living differently? Am I outsourcing much code or personal email or technical design work to AI? No. Are some friends writing nontrivial code with AI? They say so, and I bet it's somewhat true, but they're not earning more, or having more free time off, or learning more, or getting promoted.
I think you're a little blinkered here. It takes more than a couple of years to retool the whole economy with new tech. It was arguably a decade or more after ARPANET before the internet started transforming life as we know it. LLMs are actually moving at a breakneck pace in comparison. I work at a mega bank and just attended a town hall where every topic of discussion was about how important it is to implement LLMs in every process. I'm personally working to integrate it into our department's workflow, and every single person I work with now uses it every day. Even at this level of engagement it's going to be months to years of cutting through the red tape and setting up pipelines before our analyst workflows can use the tech directly. There is definitely value in it, and it's going to be integrated into everything people do going forward, even if you can't have it all rolled out instantly. We have dozens of people whose whole job is to go through huge documents and extract information related to risk/taxes/legal/etc., key it in, and then do analysis on whether these factors are in line with our other investments. LLMs, even if they don't progress one tiny bit further, will be transformative for this role, and there are millions of roles like this throughout the economy.
I think that is the crux of our disagreement: I hear you saying "AI does amazing things people thought it would not be able to do," which I agree with. This is not orthogonal to, but also not super related to, my point: claims that AI progress will continue to drastically greater heights (AGI, ASI) are largely (but not entirely) baseless optimism.
Along with these amazing things it comes with a ripple of it getting steadily better at everything else. There's a real sense in which it's just getting better at everything. It started out decent at some areas of code, maybe it could write sql scripts ok but you'd need to double check it. Now it can handle any code snippet you throw at it and reliably solve bugs one shot on files with fewer than a thousand lines. The trajectory is quick and the tooling around it is improving at a rate that soon I expect to be able to just write a jira ticket and reasonably expect the code agent to solve the problem.
Nothing has ever surpassed human level abilities. That gives me a strong prior against anything surpassing human level abilities. Granted, AI is better at SAT problems than many people, but that's not super shocking (Moravec's Paradox).
Certainly this is untrue. Calculators trivially surpass human capabilities in some ways. Nothing has surpassed humans in every single aspect. There is a box of things that AI can currently do better than most humans, and a smaller box within that of things it can do better than all humans. These boxes are both steadily growing. Once something is inside that box it's inside it forever; humans will never retake the ground of best PDF scraper per unit of energy. Soon, if it's not already the case, humanity will never retake the ground of best SQL script writer. If the scaffolding can be built and the problems made legible, this box will expand and expand and expand. And as it expands you get further agglomeration effects. If it can just write SQL scripts, then it can just write SQL scripts. If it's able to manage a server and can write SQL scripts, now it can create a SQL server instance and actually build something. If it gains other capabilities, these all complement each other and bring out other emergent capabilities.
The number of people, in my techphillic and affluent social circle, willing to pay even $1 to use AI remains very low.
If people around you aren't paying for it then they're not getting the really cutting edge impressive features. The free models are way behind the paid versions.
It has been at a level I describe as "cool and impressive, but useless" forever.
AGI maybe not, but useless? You're absolutely wrong here. With zero advancement at all in capabilities or inference cost reductions what we have now, today, is going to change the world as much as the internet and smart phones. Unquestionably.
No, and that's exactly the point! AI 2027 says well surely it will plateau many doublings past where it is today. I say that's baseless speculation. Not impossible, just not a sober, well-founded prediction. I'll freely admit p > 0.1% that within a decade I'm saying "wow I sure was super wrong about the big picture. All hail our AI overlords." But at even odds, I'd love to take some bets.
Come up with something testable and I am game.
Absolutely not. Deep research is a useful tool for specific tasks, but it cannot produce an actual research paper. Its results are likely worthless to anyone except the person asking the question who has the correct context.
This clears the bar of most Americans.
If you build a bigger rocket and point it at the moon, it will get incrementally closer to the moon. But you will never reach it.
If you have some of the smartest people in the world and a functionally unlimited budget, you can actually use the information you gain from launching those rockets to learn what you need to do to get to the moon. That is what actually happened, after all, so I really don't see how this metaphor is working for you. The AI labs are not just training bigger and bigger models without adjusting their process. We've only had chain-of-thought models for six months, and there is surely more juice to squeeze out of optimizing that kind of scaffolding.
This is like claiming Moore's law can't get us to the next generation of chips because we don't yet know exactly how to build them. OK, great, but we've been making these advancements at a breakneck pace for a while now, and the doubters have been proven wrong at basically whatever rate they were willing to lay down claims.
Speaking of claims, you've decided not to answer my questions. That's fine, continue with whatever discussion format you like, but I'd be really interested in you actually making a prediction about where exactly you think AI progress will stall out. What is the capability level you think it will get to and then not surpass?
Simply scaling existing methods, while potentially achieving impressive results, cannot achieve AGI.
Why do you believe this? Is it an article of faith?
It seems like we absolutely do know what lies ahead on the path to AGI, and it's incrementally getting better at accomplishing cognitive tasks. We have proof that it's possible, too, because humans have general intelligence and accomplish this with far fewer units of energy. You can, at this very moment, if you're willing to pay for the extremely premium version, go on ChatGPT and have it produce a better research paper on most topics than, being extremely generous to humanity here, 50% of Americans could given three months, and it'll do it before you're back from getting coffee. A few years ago it could barely maintain a conversation, and a few years before that it was little better than text completion.
This is rather like having that LW conversation after we'd already put men into orbit. Like, you understand that we did actually eventually land on the moon, right? I know it's taking the metaphor perhaps too seriously, but that story ends up with Alfonso being right in the end. We can, in fact, build spaceships that land on the moon and even return. We in fact did so.
Now we have some of the greatest minds on earth dedicated to building AGI, many of them seem to think we're actually going to be able to accomplish it and people with skin in the game are putting world historical amounts of wealth behind accomplishing this goal.
FTL is a question of fundamental possibility; by contrast, with AI, there is no good reason to think we can create AI sufficient to replace OpenAI-grade researchers within foreseeable timelines/tech. Junior SWEs, maybe, but it's not even clear they're on average positive-value beyond the investment in their future.
You're just asserting this without providing reasoning, despite it being the entire crux of your post. I know it's not reasonable to expect you to prove a negative, but you could have at least demonstrated some engagement with the arguments that those of us who think it's very possible near-term have put forward. You can at least put into some words why you think AI capabilities will plateau somewhere before OpenAI-grade researcher. How about we find out where we are relative to each other on some concrete claims, and we can see where we disagree on them.
Do you agree that capabilities have progressed a lot in the last few years at a relatively stable and high pace?
Do you agree that it's blown past most of the predictions by skeptics, often repeatedly and shortly after the predictions have been made?
Are there even in principle reasons to believe it will plateau before surpassing human level abilities in most non-physical tasks?
Are there convincing signs that it's plateauing at all?
If it does plateau is there reason to believe at what ability level it will plateau?
I think if we agree on all of these, then we should agree on whether to expect AGI in the nearish term. I'm not committed to 2027, but I'd be surprised if things weren't already very strange by 2030.
I don't understand how anyone can in good faith believe that even with an arbitrary amount of effort and funding, AGI, let alone ASI, is coming in the next few years. Any projection out decades is almost definitionally in the realm of speculative science-fiction here.
Then it's good the 2027 claim isn't projecting out decades.
Alright, since no one else is, I'll defend inheritance. It's not about the rights of the heir; it's about the right of the deceased to decide where their fruits go. Defending meritocracy, especially from a libertarian angle, doesn't commit you to preventing a person from doing with their earthly possessions whatever they want in the last moments of their lives, any more than it commits you to finding the person who would be the best CEO of Amazon and installing him against his will and the will of the board.
Is the act of giving your wealth to someone who hasn't earned it meritocracy-maxing? Probably not. Is having a system of ownership that incentivizes those with the most merit to earn as much as they can, because they love their kid and want to pass on wealth to them, merit-maxing? Maybe, arguably. But it's also the pro-liberty thing to do, and libertarians are perfectly reasonable in coming down on the side of allowing inheritance.
It sounds like grazing rights are exclusive, just the ownership is held commonly by the townsfolk and the excluded members are non-townsfolk. A passing cowboy would be deprived.
Circling back around because I do enjoy a little bit of what are essentially economic thought experiments. In your world where we do away with ownership what is the model of production for semi-complex goods? Presumably we'd still have like pencils and paper in the world you envision. Who is working in the pencil factory and why? Who is working with sanitation and ensuring human waste is properly processed, the hands on parts of the job in particular?
My point at this point, which I think is quite clear, is that ownership is essentially and definitionally the right to deprive others.
This is why proposing an alternative is important. Because I really don't think you can have a system free of deprivation. For any finite item, say my nail gun, its use necessitates depriving someone else of its use, at least for the duration of my use. You can certainly create systems that minimize deprivation, but its existence is a brute fact of the universe. And I'd go so far as to argue that our systems of free exchange and property rights actually do a pretty good job of minimizing deprivation in practice by enabling growth.
In fact, the alternative is sticking us in the nose, which makes the fact that most people act clueless about it (whether they are or not) all the more ironic. One minute (not 10, JarJarJedi) is all it would take for a relatively intelligent person doing nothing more than looking for the logical complement to deprivation to realize what a very familiar alternative is.
I'm afraid it is not sticking me in particular in the nose, and I would appreciate a more explicit spelling-out. If you want to say communism or whatever, you can just come out and say it. We entertain much more fringe positions here from time to time, and even if there are those who jeer, rightly or wrongly, you'll usually find some interlocutors willing to approach in good faith so long as you're clear and not too unpleasant about it.
I really don't care how thousands of years of use has convinced us that ownership is useful or what "problems" it "solves" -- problems conceived of in the same paradigm where ownership was conceived, characterized by thousands of years of staunch neglect and refusal that it's all about deprivation. "Usefulness" is beside the point. War is universally considered useful, too. How is that relevant to the fact that it's obscene, horrific, and destructive?
This is a really unsatisfying answer to people who have to actually live in any of these proposed worlds. It actually matters quite a bit if you don't have an alternative because we rely on ownership as a foundation to this very complex world full of wonders that we have built.
Have you ever considered the fact that ownership is the right to deprive? You might spend a little time ruminating on that.
Yes, I have thought quite a bit about this kind of thing. My conclusion is that the ability to deprive is probably necessary for any social system that scales past around the Dunbar number and depending on how you operationalize "deprive" maybe far below that number.
Oh, really? No, not at all. How does the fact that there aren't enough lifeboats on the Titanic we're sailing, or the fact that I can't tell you where there's one with room for you, have any bearing on the fact that the ship is going under? No one owes you a solution. Are you just going to stand there until someone gives you directions or leads you by the hand? It's up to you if you want to use that as an excuse to refuse considering facts that are right in all our faces.
I don't see us as sinking in any meaningful way. Society is more prosperous than at any time in history. So yes, I will need some kind of assurance that your plan to meddle with these fundamental axioms of society isn't going to be really, really terrible before I sign on. It could be like slavery, where we really are better off without it. Or it could be like the need to consume calories and expel waste, which we really just need to make peace with.
What could possibly be an alternative to predicating entire societies on the principle of deprivation? No idea?
Genuinely just coming up with childish noble-savage myths about how Native Americans lived in '90s-era cartoons. Why are you so resistant to actually describing what you're after?
[1] https://www.umass.edu/political-science/about/reports/2025-8
[4] https://www.umass.edu/political-science/about/reports/january-16-2024
> Be me
> Load entire thread into a text to speech app
> Surely no one would just dump incredibly long naked links into the motte dot org.
> go upstairs to fold laundry
> oh a naked link, that's fine, how long can it be
> literal minutes later go downstairs to make this comment.
Can you put a little more effort into formulating your point here? This really just seems like a bunch of Russell conjugations. You take issue with the concept of ownership and then go on to describe consequences of this concept in unflattering terms. Ownership is a useful concept for many reasons, principally because it solves tragedy-of-the-commons problems once society scales up enough that free riding becomes a problem. You really need to propose an alternative to ownership as a concept, and not just leave it hanging out there, if you want this to go anywhere. It's very difficult to actually build any organization without the concept of ownership without it being incredibly brittle. Not just ownership of physical goods, but ownership in decision-making.
It's a curious problem I think. I am against most of that stuff being taught in school but the whole "teach the controversy" thing must have some limits. What would my enemies do with this veto? I'm not so sure the opt out is the correct thing to demand, the battlefield should surely be the curriculum itself.
Yeah, almost certainly a resolution problem, models are trained to work with pretty specific width/height ratios and if you throw them off things get ugly.
Their per capita GDP is still much lower; it's not hard to notice that these are catch-up effects. China didn't even contain the most productive Chinese people. Do you think DeepSeek is still ahead?
I'm going to need a source for China out-innovating us. Certainly they benefit from ignoring our copyright, and I'm not of the camp that thinks they don't innovate at all. But out-innovate us? And China does have IP law.
While working in corporate IT, when you have a basically working system, if someone came up and informed you that the silicon in all the electronics you use is susceptible to solar radiation that can occasionally make calculations incorrect, and that you should consider alternatives to silicon, what would you do? Maybe if you had time to kill you could kick the idea around with them. The fellow may even be right about the silicon being susceptible to solar radiation; you vaguely remember that something like that can occasionally cause a bit to flip here or there. But really, how seriously are you going to take this warning? It's not really been a big problem before, and you even had some redundancies set up, so even if in a freak accident it mattered, it'd probably be fine. There are some experimental alternatives to silicon (germanium, graphene, cubic boron), but it's not even clear whether any of them solve the original problem, and you manage tons of electronics. You realistically cannot even source a single germanium chip, let alone replace your servers. You express skepticism and they accuse you of being negligent.

That's kind of what it feels like to see you morally load this conversation by calling capitalism psychopathy with a makeover. It just kind of comes off as silly and frivolous. Maybe there is an alternative, and maybe we can talk about those alternatives, but I live in downtown Chicago. I'm looking around at these skyscrapers and the millions of people moving about keeping everything running, and it may actually be easier to switch every electronic in the city to graphene than to get this working without ownership as we know it.
I'm happy to talk about this, but don't call me a psychopath for being skeptical.
We've been offered many spoons, some we have later verified were filled with dog shit.
I think you're mistaken about the dynamics here. There are tons of courts. If discussing this with you is tedious, I can go up or down a thread and participate in the forum's 800th discussion on whether Trump is good or bad, the 480th thread on whether LGBTQ2S+ acceptance has not gone too far enough, or even spicy new topics like the India/Pakistan conflict. This topic is of special interest to you because it's been a brain worm for you for years; it's of special interest to us because we do actually appreciate the opportunity to engage with new views. If the engagement is not forthcoming, if the ball stays in your court, we can and will move on. As we were counting assumptions earlier: the belief that your perspective will win out is an assumption you're making, and it's on you to convince us of it.
Depends what your standard is for collapse. I'd argue Maoist China collapsed in a way. The Weimar Republic probably counts. The Soviet Union might count. Usually a society is able to survive and change course after the implementation of bad ideas; see the Trump tariffs.
I don't know if you've gone beyond anarchism, but I don't know much about your views.