domain:parrhesia.substack.com
A response to Freddie deBoer on AI hype
Bulverism is a waste of everyone's time
Freddie deBoer has a new edition of the article he writes about AI. Not, you’ll note, a new article about AI: my use of the definite article was quite intentional. For years, Freddie has been writing exactly one article about AI, repeating the same points he always makes more or less verbatim, repeatedly assuring his readers that nothing ever happens and there’s nothing to see here. Freddie’s AI article always consists of two discordant components inelegantly and incongruously kludged together:
- sober-minded appeals to AI maximalists to temper their most breathless claims about the capabilities of this technology by carefully pointing out shortcomings therein
- childish, juvenile insults directed at anyone who is even marginally more excited about the potential of this technology than he is, coupled with armchair psychoanalysis of the neuroses undergirding said excitement
What I find most frustrating about each repetition of Freddie’s AI article is that I agree with him on many of the particulars. While Nick Bostrom’s Superintelligence is, without exception, the most frightening book I’ve ever read in my life, and I do believe that our species will eventually invent artificial general intelligence — I nevertheless think the timeline for that event is quite a bit further out than the AI utopians and doomers would have us believe, and I think a lot of the hype around large language models (LLMs) in particular is unwarranted. And to lay my credentials on the table: I’m saying this as someone who doesn’t work in the tech industry, who doesn’t have a background in computer science, who hasn’t been following the developments in the AI space as closely as many have (presumably including Freddie), and who (contrary to the occasional accusation my commenters have leveled at me) has never used generative AI to compose text for this newsletter and never intends to.
I’m not here to take Freddie to task on his needlessly confrontational demeanour (something he rather hypocritically decries in his interlocutors), or attempt to put manners on him. If he can’t resist the temptation to pepper his well-articulated criticisms of reckless AI hypemongering with spiteful schoolyard zingers, that’s his business. But his article (just like every instance in the series preceding it) contains many examples of a particular species of fallacious reasoning I find incredibly irksome, regardless of the context in which it is used. I believe his arguments would have a vastly better reception among the AI maximalists he claims to want to persuade if he could only exercise a modicum of discipline and refrain from engaging in this specific category of argument.
Quick question: what’s the balance in your checking account?
If you’re a remotely sensible individual, it should be immediately obvious that there are a very limited number of ways in which you can find the information to answer this question accurately:
1. Dropping into the nearest branch of your bank and asking them to confirm your balance (or phoning them).
2. Logging into your bank account on your browser and checking the balance (or doing so via your banking app).
3. Perhaps you did either #1 or #2 a few minutes before I asked the question, and can recite the balance from memory.
Now, suppose that you answered the question to the best of your knowledge, claiming that the balance of your checking account is, say, €2,000. Imagine that, in response, I rolled my eyes and scoffed that there’s no way your bank balance could possibly be €2,000, and the only reason that you’re claiming that that’s the real figure is because you’re embarrassed about your reckless spending habits. You would presumably retort that it’s very rude for me to accuse you of lying, that you were accurately reciting your bank balance to the best of your knowledge, and furthermore how dare I suggest that you’re bad with money when in fact you’re one of the most fiscally responsible people in your entire social circle—
Wait. Stop. Can you see what a tremendous waste of time this line of discussion is for both of us?
Either your bank balance is €2,000, or it isn’t. The only ways to find out what it is are the three methods outlined above. If I have good reason to believe that the claimed figure is inaccurate (say, because I was looking over your shoulder when you were checking your banking app; or because you recently claimed to be short of money and asked me for financial assistance), then I should come out and argue that. But as amusing as it might be for me to practise armchair psychoanalysis about how the only reason you’re claiming that the balance is €2,000 is because of this or that complex or neurosis, it won’t bring me one iota closer to finding out what the real figure is. It accomplishes nothing.
This particular species of fallacious argument is called Bulverism, and refers to any instance in which, rather than debating the truth or falsity of a specific claim, an interlocutor assumes that the claim is false and expounds on the underlying motivations of the person who advanced it. The checking account balance example above is not original to me, but comes from C.S. Lewis, who coined the term:
You must show that a man is wrong before you start explaining why he is wrong. The modern method is to assume without discussion that he is wrong and then distract his attention from this (the only real issue) by busily explaining how he became so silly.
As Lewis notes, if I have definitively demonstrated that the claim is wrong — that there’s no possible way your bank balance really is €2,000 — it may be of interest to consider the psychological factors that resulted in you claiming otherwise. Maybe you really were lying to me because you’re embarrassed about your fiscal irresponsibility; maybe you were mistakenly looking at the balance of your savings account rather than your checking account; maybe you have undiagnosed myopia and you misread a 3 as a 2. But until I’ve established that you are wrong, it’s a colossal waste of my time and yours to expound at length on the state of mind that led you to erroneously conclude that the balance is €2,000 when it’s really something else.
In the eight decades since Lewis coined the term, the popularity of this fallacious argumentative strategy shows no signs of abating; it is routinely employed by people at every point on the political spectrum against everyone else. You’ll have evolutionists claiming that the only reason people endorse young-Earth creationism is because the idea of humans evolving from animals makes them uncomfortable; creationists claiming that the only reason evolutionists endorse evolution is because they’ve fallen for the epistemic trap of Scientism™ and can’t accept that not everything can be deduced from observation alone; climate-change deniers claiming that the only reason environmentalists claim that climate change is happening is because they want to instate global communism; environmentalists claiming that the only reason people deny that climate change is happening is because they’re shills for petrochemical companies. And of course, identity politics of all stripes (in particular standpoint epistemology and other ways of knowing) is Bulverism with a V8 engine: is there any debate strategy less productive than “you’re only saying that because you’re a privileged cishet white male”? It’s all wonderfully amusing — what could be more fun than confecting psychological just-so stories about your ideological opponents in order to insult them with a thin veneer of cod-academic therapyspeak?
But it’s also, ultimately, a waste of time. The only way to find out the balance of your checking account is to check the balance on your checking account — idle speculation on the psychological factors that caused you to claim that the balance was X when it was really Y are futile until it has been established that it really is Y rather than X. And so it goes with all claims of truth or falsity. Hypothetically, it could be literally true that 100% of the people who endorse evolution have fallen for the epistemic trap of Scientism™ and so on and so forth. Even if that was the case, that wouldn’t tell us a thing about whether evolution is literally true.
To give Freddie credit where it’s due, the various iterations of his AI article do not consist solely of him assuming that AI maximalists are wrong and speculating on the psychological factors that caused them to be so. He does attempt, with no small amount of rigour, to demonstrate that they are wrong on the facts: pointing out major shortcomings in the current state of the LLM art; citing specific examples of AI predictions which conspicuously failed to come to pass; comparing the recent impact of LLMs on human society with other hugely influential technologies (electricity, indoor plumbing, antibiotics etc.) in order to make the case that LLMs have been nowhere near as influential on our society as the maximalists would like to believe. This is what a sensible debate about the merits of LLMs and projections about their future capabilities should look like.
But poor Freddie just can’t help himself, so in addition to all of this sensible sober-minded analysis, he insists on wasting his readers’ time with endless interminable paragraphs of armchair psychoanalysis about how the AI maximalists came to arrive at their deluded worldviews:
What [Scott] Alexander and [Yascha] Mounk are saying, what the endlessly enraged throngs on LessWrong and Reddit are saying, ultimately what Thompson and Klein and Roose and Newton and so many others are saying in more sober tones, is not really about AI at all. Their line on all of this isn’t about technology, if you can follow it to the root. They’re saying, instead, take this weight from off of me. Let me live in a different world than this one. Set me free, free from this mundane life of pointless meetings, student loan payments, commuting home through the traffic, remembering to cancel that one streaming service after you finish watching a show, email unsubscribe buttons that don’t work, your cousin sending you hustle culture memes, gritty coffee, forced updates to your phone’s software that make it slower for no discernible benefit, trying and failing to get concert tickets, trying to come up with zingers to impress your coworkers on Slack…. And, you know, disease, aging, infirmity, death.
Am I disagreeing with any of the above? Not at all: whenever anyone is making breathless claims about the potential near-future impacts of some new technology, I have to assume there’s some amount of wishful thinking or motivated reasoning at play.
No: what I’m saying to Freddie is that his analysis, even if true, doesn’t fucking matter. It’s irrelevant. It could well be the case that 100% of the AI maximalists are only breathlessly touting the immediate future of AI on human society because they’re too scared to confront the reality of a world characterised by boredom, drudgery, infirmity and mortality. But even if that was the case, that wouldn’t tell us one single solitary thing about whether this or that AI prediction is likely to come to pass or not. The only way to answer that question to our satisfaction is to soberly and dispassionately look at the state of the evidence, the facts on the ground, resisting the temptation to get caught up in hype or reflexive dismissal. If it ultimately turns out that LLMs are a blind alley, there will be plenty of time to gloat about the psychological factors that caused the AI maximalists to believe otherwise. Doing so before it has been conclusively shown that LLMs are a blind alley is a waste of words.
Freddie, I plead with you: stay on topic. I’m sure it feels good to call everyone who’s more excited than you about AI an emotionally stunted manchild afraid to confront the real world, but it’s not a productive contribution to the debate. Resist the temptation to psychoanalyse people you disagree with, something you’ve complained about people doing to you (in the form of suggesting that your latest article is so off the wall that it could only be the product of a manic episode) on many occasions. The only way to check the balance of someone’s checking account is to check the balance on their checking account. Anything else is a waste of everyone’s time.
Technically sure. But not really. He is a supervisor of an oil rig. You would still have managers and capital owners in this world. It still fits with the described knowledge worker atrophy.
Consider also that he got that position by gambling, not as a career path or through credentialism. Again lending credence to the theory that knowledge work is dead and everything is a blue collar larp.
But the opposite, right? She is aware of her sexual value. So she doesn't squander it on a 19 year old. He has no money.
In reality, perhaps this is what’s happening. But I don’t think it’s what’s happening in the ephebophile’s fantasy that causes him to be attracted to the 19 year old, no. Golddigging is an unattractive trait in a partner even if you are the beneficiary. One would prefer to think that the free-spirited young thing with few sexual hangups is exactly that, rather than secretly calculating.
Or maybe he is just predisposed to noticing a particular type of bad thing in his life.
In other words, he's a racist.
Lando is white collar.
The original English translation you posted below is incomprehensible.
Eh, it was comprehensible enough, the most mistranslated part was "reacted really disgusting me" vs what I assume was meant to be "reacted really disgusted with me" - and the true meaning can be error-corrected from context. The AIsloppy editing destroyed more meaning, originality, dare I say soul than the lack of English skills of the author.
Interestingly @RandomRanger cited a video in another thread that's an unintentional example of this. It's an Avatar compilation video titled "Hardest RDA Edit" where 'hard' is used to mean based/awesome/woah. My browser mistranslated that to "[Most Difficult] RDA Edit", i.e. 最も難しい RDA 編集.
If GPT is given both the title and the summary (which Youtube could do internally with their API) it gives the much better translations "Max strength RDA edit" 史上最強RDA編集 or "Most villainous RDA edit" 最凶RDA編集. In general I find GPT much better on language problems than they are on almost any other task, and miles better than standard machine translation.
今、天国に色気分だわ
@4bpp sorry for double-dipping, but since I've got you here do you know why わ is used? Obviously it's usually feminine, and I understand that the male usage is from the archaic patterns where it's broadly an emphasiser like ぞ and therefore used by archaic / cool characters to express emphasis. Is that what's going on here? It doesn't quite seem to fit.
I do feel like it's insane how much content is now AI driven.
Even random innocuous social media blurbs have em-dashes when it's like 'You could have written that your restaurant is open for longer hours'. I understand using AI to marshal your thoughts or if you're wanting to do longer-form writing, but there are plenty of messages where I feel like it'd just be quicker and easier not to open ChatGPT and provide a prompt.
I didn't spot that tbh. After a decade I still can't quite get all the nuances of how に should be used, especially when it's used as part of more sophisticated/niche grammar structures. N1 is still a little ways off...
I do notice that none of the translations got the nuance of 「キモい!」と反応してくれて right.
Moreover, she said “You’re degenerate!!!” for me.
The use of くれて to imply this was a sort of mutually positive interaction changes the entire tone of the passage, so it's kind of bad GPT misses it. Though I feel like I'm putting far too much thought into the ramblings of a perv on the internet.
EDIT: Sorry, replied to wrong post.
And this particular condition is not characteristic of the whole law either, and as such characterizing the broader law in terms of this particular condition is wilfully misrepresenting the broader law.
Or, to put in other terms, it is missing the forest for a tree. It can indeed be a joo-tree in the forest, but it is not a joo-tree forest. Talking about how the forest is the result of malign joo influence is willfully misrepresenting the forest.
Not least because, and part of that broader context being obfuscated, the joo-tree is coincidentally planted in a specific grove beside the anti-DEI-tree, and the anti-illegal-immigration (ALL) tree, all at the direction of the hated forest-lord. This grove is now being publicized to audiences with people who would like to cut down joo-trees, anti-DEI-trees, and ALL-trees even before their hatred of the forest-lord is considered.
That's bait, and SS fell for it as much or more than the intended targets.
It's as if Kaczynski was using AI agents and hypersonic missiles, starting a VC-backed startup for the cause of destroying technology.
'The Master's tools can absolutely dismantle the Master's house,' said Kaczynski, watching from his penthouse as smoke rose on the horizon. 'With great efficacy.'
enshittify
This verb implies a movement from a good state to a bad one; the language was previously not shit. Except, the people using LLMs in this way already can't communicate. The original English translation you posted below is incomprehensible. You suggest
the English they do write will be worse
but I can't see how anyone would suggest the AI translation is worse than the original. It might screw up some of the meaning, but that comes with the tradeoff of being more readable.
Or are you just using this example to push your point that native speakers are going to degrade the quality of their communication? This seems far more to reinforce the argument that smart users of LLMs will use them to leap forward, while poor users will get left behind. As I write this post I am using the Grammarly add-on; it's a useful spelling and grammar checker. It will also pop up "writing improvements". Almost without exception, these improvements are shit, and they've been shit long before ChatGPT came along. However, it hasn't changed the way I write, because I am capable of judging the quality of its suggestions. Do you think that Grammarly has been degrading the quality of English for years because some users implement everything it says?
It's the same story with translation. 15 years ago, a non-native speaker might go to babelfish.com and pump out something completely useless. 10 years ago, they would have switched to Google translate, and got something better, but still missing a ton of meaning. 5 years ago, DeepL was the standard, but still a long way off human translation. Now it's LLMs. When learning any language, one of the first lessons a student learns is not to blindly trust any machine translation.
Sounds like quibbling over priorities. They also said taking out Saddam would aid in regime-changing Iran.
'Don't do [course of action] unless you're going to do it the right way' may be dismissed as quibbling over priorities, but it is still a caution against the [course of action], not even an indirect instigation of it.
On a material-level, if you are going to invade both countries as the neocons intended, then your second sentence is objectively true. It would be far easier for the US to launch from Kuwait and Saudi Arabia into Iraq than into Iran (you could drive), and then from Iraq into Iran (you could drive), than to launch an amphibious invasion of Iran.
Also, was what you're mentioning said in public, or in private? Because if it was the latter, you can't blame the public for not knowing what was deliberately kept from them.
When the results of private discussions are later publicized, and have been public for nearly two decades now, it is a distinction without a difference. Someone can claim the later public revelations were lies, or self-serving after-the-fact deflections, but absent that we can absolutely blame people for not knowing a historical record exists.
And maybe I'm typical minding, but if it was anything like the times I've blindly trusted a woman who told me she was on birth control, the truth of the matter is that in the moment I didn't give a single shit if she might get pregnant.
Would you have had unprotected sex with her had she stated she was not on birth control? If no, then clearly you did in fact give a shit to some extent if she might get pregnant.
If you would still have done so, then yes - I'm not sure it's appropriate for you to be typical-minding.
I am with amadan here
So I'm under no impression that Amadan will ever agree with me (or that many of the people advocating this will ever agree with me, really), which is why I declined to pursue the point too much, but okay let's examine the core of this moral evaluation for a bit. If it is really the case that a child not only has the right to provision, but has the right to provision from both biological parents - if depriving the child of this is so unacceptable that freedoms should be curtailed to pursue that objective - then the following should also be a logical corollary of this belief:
1: A woman should not avail herself of the services of a sperm bank, as it results in the production of a child without the father involved. Single women should be barred from using a sperm bank under any circumstances, and if they do they should be aggressively socially shamed for intentionally producing a child who will grow up in that deprived state. After all, the statistics on children raised by single mothers speak for themselves. Same thing for men and surrogacy.
2: It should be against the law for a woman to leave the biological father off the birth certificate, or to fail to inform him of the existence of a child. She should be required to identify the father and get him involved in supporting the child either by choice or by force. A woman who does not do so is being horribly negligent and selfish and should be castigated.
3: Women should have no access to safe haven abandonment (or adoption, for that matter) under any circumstances, possibly even extremely coercive ones. Under this moral framework that is even worse than paternal surrender, as it results in the unilateral abandonment of a child and alienation from both biological parents, and is a complete and total infringement of the child's rights, cutting it off from the support of even one parent and possibly consigning it to become a ward of the state.
Of course, none of these things are currently the case. Are you willing to assent to all the above, and state that anybody who makes the above choices in contravention of these dictums is being capricious and immoral? If so, I would say you're perfectly consistent. Understandable, have a nice day. If not, it stands to reason that children do not in fact have the inherent right to the support of both biological parents, and that it's permissible for a child to end up without this supposed right for many reasons, including "she just wanted to be a single mother", and "she just didn't want her child". In practice I don't actually think most people believe that a child has an inherent and inalienable right to support from both biological parents, they certainly don't prioritise it above all else. They are perfectly willing to infringe on this principle especially if they can be convinced that it gives women more choice.
If it is perfectly moral for a single woman to use a sperm bank and produce a child out of wedlock which will not be entitled to any support from the father, by extension it should be perfectly moral for a man to surrender responsibility for a child before birth; after all it produces the very same outcome if the woman decides to keep it. This especially applies if he was duped into becoming a father through false representations, regardless of whether or not he was "thinking with his dick". But I don't think most people who advocate this position have really thought through its moral ramifications.
Personally in a situation like this I'd try to get custody of the kid and bring it home with me.
In theory I agree that would be good (I would not want a child of mine in the custody of a woman who would do something like that), in practice that's not going to be easy.
Right, this translation gets closer to the original in some ways by not reproducing the additions and deletions in the original proposal, but also loses some of the colour. Notably, none of the three translations really quite reproduces the heroin-addled vibe of the original (this was perfect, I am in a state of absolute bliss, I took a dose, and then I got another dose!! and soon I'll get yet another dose, I can't wait!!). I wonder if this sort of pathology has been thoroughly RLHFed out of ChatGPT, or whether one could elicit it with the right prompt.
(The "sexy heaven" thing in yours came from a typo @phailyoor introduced - it's 天国にいる気分 on paper, not the enigmatic 天国に色気分 for which that interpretation would be a fair guess.)
If the goal is just discrimination, why single out Israel specifically? It’s an odd flex considering that there are other trade partners that would qualify under anti discrimination rules (India, Japan, Korea, Latin America, etc.) but they don’t get the same protections. If I passed a law in North Dakota that said “no money goes to Asian countries,” it’s perfectly fine. If I do the same with South Asia, again, fine. It’s only when North Dakota says “we aren’t buying from Israel,” that anything happens.
I think he must have tried to iterate on his original translation. The direct translation is more accurate:
Today’s stream was perfect! When I commented, “Step on me, please!” my oshi, Haachama, actually responded with “You’re gross!” And then she even followed up with “You’re way too much of a perv!” It was insane!! I feel like I’m in sexy heaven right now. This is honestly the most peaceful moment of my life.
And the thing I’m most hyped for is Haachama’s birthday live on Sunday, August 10th at 9PM!! I seriously want to support her with everything I’ve got. Just imagining that day feels like I’m drinking her bathwater.
Though I agree with @phailyoor that a lot of self-expression is lost here compared to his original attempted translation.
Sort of. Broadly, I believe that young people weren’t in serious danger, so withholding the vaccine from them for a while was fine; they weren’t really being deprived per se. Whereas your white guy over 45 was still in some need of a vaccine, and depriving him is therefore a problem.
If the disparity was massive enough I imagine I’d bite that bullet and give the vaccines to the young black people first out of obvious necessity.
My understanding is that the disparities weren’t that wide and that in the cultural moment professionals were sort of overjoyed to find a reason to demonstrate their anti-racist credentials by giving black people preference in a matter of life and death. Which obviously affects my perception.
Wait, so what was the process there? Was ChatGPT given the Japanese text and asked to generate its own translation for comparison, or asked to improve/iterate on his? In general, I agree with your critique of non-native speakers using AI for text massaging (the feeling of something not quite coherent being said in superficially polished prose by an AI broadcast announcer voice with occasionally inappropriate diction is pretty grating), but in this particular case, it seems to me that the AI translation is in fact superior and somewhat more true to the original, which may be because unlike in the "Indians making slop for Tiktok/Youtube shorts" case, it had access to a literate source text. Specifically, for example, there is in fact nothing to the effect of "I could die there" in the JP text. The author must have spontaneously come up with it while writing his own proposed translation.
In general, the text we are looking at is close to a pessimal case for both AI translation and translation by someone who learned formulaic English at school, because the original is dense with subculture references and memes that are not just invoked as keywords but determine the syntax and narrative form as well. It's like trying to translate a 4chan greentext, or a doge image.
Under the lens of the Civil Rights Act, a company saying "We won't do business with Israeli nationals" (note the number of dual-citizenships and US citizens residing in Israel, which is more than in Canada) is a pretty transparent violation.
[...] But in this particular case, "will not buy from Israel-linked companies" is pretty strongly associated with attempts to discriminate against persons of Israeli origin. I think this case is maybe winnable, but you'd likely need to be squeaky clean on the persons (not corporate) level.
Discriminating against Israeli citizens in the US seems bad from a civil rights perspective, yes.
Discriminating against Israeli companies or products seems much less problematic, especially if it is just spending decisions. Both states and companies should be free to chose with which companies they do business. If Texas prefers to arm its police force with weapons produced in Texas, that seems the kind of decision a state should be able to make. If Google decides that it hates South Korea and refuses to buy any computer components produced there, that is something for the market to solve.
I think that the use of financial incentives is pretty disingenuous, because it allows the feds to say "we did not violate your rights, you could just opt out of FEMA or not take tax credits".
If federal funds come with strings attached on how to spend that money specifically, that seems fine. "If you buy emergency shelters with your FEMA grant, you may not discriminate against Israeli companies" - "None of the Medicaid funds may be spent on medical marijuana" - "5% of the Medicaid funds are earmarked for abortion services. If you cannot provide these, you do not get the 5%."
But my understanding is that this is not what is happening here. Instead, it is "follow our rules generally, or you don't get money", which I find bad.
Saying something about the military implicitly distinguishes between military things and non-military things.
But what are the actual clauses with actual words that go on to make further distinctions between things? Without that, your hypothetical isn't analogous.
I don't know. You could make arguments for either one.
Then at least try. Because right now, you're not even trying, and it's becoming ever clearer that it's because you can't. Because the Air Force just isn't authorized. It doesn't fit.
Don't worry, though. There's an easy fix. It's why we have Article V. Literally everybody really wants to have a legal Air Force. You want it so bad that you're tying yourself in knots trying to imagine that you already have it, when you clearly can't even come up with a half-hearted argument for it. Nobody is going to get in the way of passing an amendment to finally make it constitutional.
Original English:
Today’s stream was stunning! I commented “踏んでくだSTEP ON ME!!!” and My Oshi HAACHAMA reacted really disgusting me. Moreover, she said “You’re degenerate!!!” for me. OMG I’m in a heaven right now. It was a peaceful and an amazing moment in my life.
And also, the most excited thing is There’s HAACHAMA’s birthday live soon. It’s in August 10th 21.00JST. I could die there. I can’t wait to watch that!! I really wanna support her live too. Imagine that live I feel like drinking her bath water.
Original Japanese:
今日の配信は完璧でした!俺は”踏んでくだステップオンミー!”とコメントすると、推しのはあちゃまが「キモい!」と反応してくれて
さらに、「あんた変態すぎ!」と言ってくれたんだ、やばすぎ!! 今、天国にいる気分だわ。 これは、俺の人生で最高に平和な瞬間だよ。
そしてさらに、1番楽しみで仕方ないことははあちゃまの誕生日ライブが8月10日(日)21:00にあるってこと!!全力で応援したいと思ってるんだ。 その日をライブを想像すると、彼女の風呂水を飲んでる気持ちになるよ。
ChatGPT output:
Today’s stream was absolutely mind-blowing!
I typed “踏んでくだSTEP ON ME!!!” in the comments, and my oshi, Haachama, actually reacted. Not only that —em—dash— she called me “degenerate!!!” directly OMG. I’m in heaven. That one moment was pure peace and bliss. Probably one of the best highlights of my life.
And the most exciting part is still ahead. Haachama’s birthday live is coming up on August 10th at 21:00 JST. I seriously can’t wait. I really, truly want to support her with everything I’ve got. Just thinking about that live … feels like I could drink her bathwater.
This is purely your opinion and, given the price people are willing to pay per sqft, one that millions of people do not share.
NYC apartments have something McMansions can never have: location, location, location. This is the ultimate amenity.
You may not value it, which is fine, but that doesn't mean it isn't valuable.