@self_made_human's banner p

self_made_human

amaratvaṃ prāpnuhi, athavā yatamāno mṛtyum āpnuhi ("attain immortality, or die trying")

14 followers   follows 0 users  
joined 2022 September 05 05:31:00 UTC

I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken for granted for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

A friend to everyone is a friend to no one.

User ID: 454

Sin and cos. And I use them a lot.

SOHCAHTOA coming in clutch for an entirely different kind of Indian.

Shame I missed the halcyon days. I showed up maybe a year or two after the migration to the subreddit.

Et tu, Brute? I keep imagining there's some mysterious phase change where repetition makes it stick.

I had a very awkward referral once, for a patient with a TCA overdose. I looked at it, knew what it was, but when the person taking the referral asked me to describe it, all I could manage was "uh... Those T waves look tented?"

I did at some point succeed at "building" the world's shittiest ECG; at least it made an appropriately squiggly-looking line (relying on the oscilloscope for 98% of the work, of course). I'm pretty sure that experience has only left me more mystified about what an ECG is supposed to do.

The heart goes through sequential contraction and relaxation phases, with the upper atria and lower ventricles being out of phase. This is governed by electrical waves propagating roughly top down. Since we're talking about a chemical process (ions crossing membranes), there's a noticeable conduction delay.

Roughly speaking, it kicks off near the top of the heart, and has a "highway" of rapid conduction down the middle. There's increased latency the further you go.

We place multiple electrodes on the limbs and chest:

  • The leads placed on the chest measure changes in voltage propagating perpendicular to the skin (front and lateral).

  • The axial leads measure the projection of the heart's electrical axis onto the vector connecting the leads, going roughly left to right and top to bottom.

You draw a chart. Leads V1 and V2 focus on the anterior-right of the heart, V3 and V4 sit a bit lower and right above it, so you get the anterior picture, and V5 and V6 show you what's going on at the sides. The limb leads help figure out the inferior bit.

Once we have established a baseline, we look at a patient's ECG for deviations from the norm. Too much or too little voltage, or an unusual delay between phases, can all point to cardiac pathology, and we can localize based on which views are aberrant. For example, in a heart attack, the leads reading anteriorly will, badum-tss, be the ones most out of whack if the damage is on the anterior aspect of the heart (anterior myocardium/muscles), and so on. And those delays in conduction point towards something wrong with the inbuilt cardiac pacemakers or that highway I mentioned.

In effect, an ECG isn't just a single image, it's closer to tomography. The additional leads provide clear advantages over just attaching a potentiometer to someone's toes and fingers.
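
If a concrete toy helps, here's the projection idea in a few lines of Python. The lead angles are the standard hexaxial ones; the dipole vector is made up purely for illustration, so read nothing clinical into the numbers.

```python
import numpy as np

# Toy illustration of why different ECG leads "see" the same beat differently:
# model the heart's net electrical activity at one instant as a single dipole
# vector in the frontal plane, and each limb lead as recording the projection
# (dot product) of that vector onto its own axis.

# Standard hexaxial lead directions in the frontal plane, in degrees
# (Lead I defined as 0°, positive angles pointing inferiorly by convention).
LEAD_ANGLES = {"I": 0, "II": 60, "III": 120, "aVF": 90}

def lead_voltage(dipole, lead):
    """Project the cardiac dipole vector onto the chosen lead's axis."""
    theta = np.deg2rad(LEAD_ANGLES[lead])
    axis = np.array([np.cos(theta), np.sin(theta)])
    return float(np.dot(dipole, axis))

# A made-up dipole at roughly +60° (a normal-ish electrical axis), ~1 mV long.
dipole = np.array([np.cos(np.deg2rad(60)), np.sin(np.deg2rad(60))])

for lead in LEAD_ANGLES:
    print(f"Lead {lead}: {lead_voltage(dipole, lead):+.2f} mV")
```

Lead II comes out the most positive for a +60° axis, which is exactly what you see on a normal trace; the chest leads just extend the same trick out of the frontal plane.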

Of course, it gets much more complicated in practice. Especially when a patient has multiple heart conditions at once, I start sweating when I have to interpret those even when I'm fully up to speed. And it's all the worse in psychiatry, because you can't rely on the patients to be particularly cooperative. And it hurts when you pull off the adhesive on the cups and it takes chest hair with it.

If you're looking for a 'picture' to hold in your head, this 3Blue1Brown video is a classic. Surprisingly appropriate for a huge range of mathematical sophistication.

But Pagliacci, I've tried clown therapy :(

3B1B is excellent, and his video on the FT is my go-to. It's just that I forget the details beyond "you can decompose arbitrary analog signals into a sum of sine waves".
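
For what it's worth, that one-sentence summary fits in a few lines of numpy; the frequencies and amplitudes below are arbitrary, the point is just watching the transform recover them:

```python
import numpy as np

# Build a signal out of two known sine waves plus a little noise, then recover
# the component frequencies and amplitudes with a discrete Fourier transform.
fs = 1000                        # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)      # one second of samples
signal = 1.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
signal += 0.1 * np.random.randn(t.size)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
amplitudes = 2 * np.abs(spectrum) / t.size

# The two tallest peaks land at ~5 Hz and ~50 Hz, with amplitudes ~1.0 and ~0.5.
for i in sorted(np.argsort(amplitudes)[-2:]):
    print(f"{freqs[i]:.1f} Hz, amplitude ≈ {amplitudes[i]:.2f}")
```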

Reply: "The concept is called XYZ and it works by X, Y, and Z." Entirely a hallucination when you then go to search for XYZ.

Which model? Hallucinations have become quite rare on the SOTA models, especially the ones with internet search enabled. It's not like they never happen, but I'm surprised that they're happening "all the time".

Does anyone have their own equivalent of a personal "antimeme", a concept you familiarize yourself with (potentially with difficulty) and then inevitably forget unless you make an intentional effort to look it up?

In no particular order:

  • I often have to look up whether I need an x86 or x64 executable when I need to download a program (a one-liner like the sketch after this list settles it, not that I ever remember to run it)
  • ECGs. Fucking ECGs. I get good at understanding them when I absolutely have to (before exams), but guess what, by the time the next one rolls around, it's all out of my head.
  • Fourier transforms (how they actually work, not just the broad conceptual strokes)
  • And many more, all of which stubbornly refuse to come to mind, because of course they do.
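
The sketch promised in the first bullet, for whatever it's worth (Python because it's what I have open; any OS tool gives the same answer):

```python
import platform

# "AMD64" / "x86_64" means grab the x64 build; "x86" / "i686" means 32-bit;
# "arm64" / "aarch64" means you want the ARM build instead.
print(platform.machine())
```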

If I could pay $20 for an upgrade to business class, you bet I would.

It's very interesting to see you be even more bloodthirsty and drama-pilled than Count.

I'm a basic bitch who goes to Starbucks twice a year and thinks the coffee was nice. I feel like a deaf person walking into a concert.

You are starting to write like a guy who reads LLM output and thinks "Yeah, that's good writing!"

I disagree! I do not think that the majority of LLM output is worth reading. That is not the same as LLMs being incapable of good writing. Getting something decent out of them takes effort. Not some kind of overcomplicated prompt engineering nonsense, but more effort than bad actors take.

To illustrate, I can truthfully claim that Xianxia as a genre is sloppy trash (most of it is) while simultaneously arguing that Reverend Insanity is peak fiction. The selection process is what allows for a recommendation.

are absolute crap in terms of writing style (* cough * Reverend Insanity * cough * )

As you can see, we have irreconcilable differences. Pistols at dawn?

Note that I am not saying you're doing the same thing, just... I think you know you're outsourcing too much to AI, and now you're getting pissy when people point it out.

I really can't win. If I stay quiet and ignore things: avoidant behavior. If I just say that, yeah, I've used AI, that is a no-contest. If I actually take a stand, then suddenly the lady doth protest too much. Nah, this lady has principles, and is willing to argue them.

AI detectors are themselves not that reliable, since the ability to detect AI writing is a moving target, so posting "An AI detector said my writing is 100% human" is probably not that convincing to most people. (Just as many people have had the displeasure of seeing something they know they wrote themselves tagged as "almost certainly AI" by an AI detector.)

I have heard claims that Pangram is better than most. For example, it's batting 100% here, admittedly for a single sample. To the extent that people have used AI detectors on me in an attempt to shore up their argument that I'm using AI (in a post where I allude to the fact I'm using it), I feel entitled to use them myself. If it works, then you should believe in my probable innocence; if you believe it doesn't work, then you had no reason to consider me guilty beyond what I've already confessed.

I'm not even against using a LLM to refine your writing. I wish I had so I wouldn't have made that annoying set of typos.

Funny story. Do you know why I made that effort?

A guilty pleasure of mine is to copy and paste entire pages of my profile into an LLM and ask for a summary/user profile (without telling them I'm the user in question). When I first started, maybe a year or so back, I noticed that the models would regularly call me acerbic and prone to cutting humor, even when they happily acknowledged the positives.

I thought about it, and decided, huh, it might be worth an effort to intentionally tone it down myself. If it's not obvious, I adore Scott, and he is probably so mild-mannered that his toddlers walk all over him.

(Oh, wait.)

So I decided, hey, it's worth trying to be nicer, even though I do not suffer fools gladly. Or perhaps I'm getting old, and realizing that yelling at people on the internet is of little utility and only raises my blood pressure.

For your pleasure:

https://youtube.com/watch?v=-h7BOxN-qRc

(The longer version was sadly hit by a guided copyright strike)

But no, it's a totally legit songbook work, going from an Adirondacks campfire song to a Broadway show in 1927 to a Fred Astaire film. But the joy of discovering that was ruined, because I was too busy worrying whether I was a dope falling for AI slop.

Sigh. I suppose this strengthens my usual point:

Stop worrying about "AI slop". You often won't be able to tell if it's AI, and it will, inevitably, become less sloppish. Dare I say, even good. Sure, you want to be able to tell if your Tinder date is catfishing you, or if the email or invoice you've received is real. It's of utmost importance to know if there's an epidemic of obese, suicidal women throwing boulders on bridges in China.

But for everything else? A drawing is a drawing, dawg; a song either sounds good or it doesn't. A work of fiction is no better or worse if meat machines or silicon wrote it (for the same assortment of letters in the same order).

This way lies zen, and the ability to happily partake in the post-scarcity supply side of the attention economy.

Of course, if you do genuinely value human authorship for reasons such as "soul" or the inability of machines to feel emotion, then good luck. You're going to need it.

I also acknowledge I like your writing, I think it's some of the most consistent and interesting posting here. I also think you are a much better writer than me, so if that's your standard for receiving feedback feel free to just ignore the rest.

My apologies. I was very annoyed, for what I hope were understandable reasons. I'm happy to accept feedback when it's not framed as a personal attack alongside, IMO, very poor justification. I'm happy to hear what you have to say!

All that being said. It is uncanny, I have more than once in the last week been interacting with ChatGPT and thought "This could just as well be a Mechanical Turk and @self_made_human is on the other side." It's not just the use of bullet points, it's your tone, word choice, argument structure. It's not just the use of markdown, it's the extremely machine-like choice of formatting.

Hmm.

The thing is, markdown is cool and incredibly powerful. LLM chatbots like ChatGPT (the ones that aren't base models) are under heavy selection pressure to conform to human preferences. That means convergence towards certain norms, because the average user or RLHF monkey prefers them! Headlines, emphasis, bullet points, em-dashes: they're all useful. They make text more legible and help it flow better.

In other words, I've come to appreciate the benefits of writing in a certain structure. I personally prefer it, I think the majority do too (by revealed preference), and it strikes some people as AI-like. The last bit is an unfortunate side effect.

(I would say a bigger influence is Scott. I'm a fanboy, and his advice is solid)

I'm not sure what you mean by a change in tone or word choice, though I make an intentional effort to be less acerbic these days.

However:

I do use AI, sometimes! I've never tried to hide it, or deny its influence when anyone asks. That does not mean that any of my posts are written by AI. I use LLMs for research, fact checking, proof-reading and editorial purposes.

That usually entails writing a draft, then submitting it to an LLM for advice or critique, which I may or may not use.

I think this is entirely above board, and I champion its use. It is categorically not the same as throwing a prompt into a box and then getting the AI to do the heavy lifting. The AI is an editor, not a ghostwriter.

Do you honestly think your writing style has not changed at all over the course of three years? I think it would be extraordinarily unlikely that someone's writing style does not change at all over the course of years in their 20s. If you acknowledge your style has changed, is your claim that its direction is away from LLM style?

Precisely the opposite. My style has changed, for what I think is the better. I'd hope so, given that I must have written something like 1-5 million words in between, including a novel. It has also become more LLM-like, but that is because I like some of the things LLMs do, not because I've been replaced by one. Case in point, I've never had anyone accuse me of including unsourced or inaccurate information, even when they're criticizing my style, because it's a point of pride that I always review anything an LLM tells me.

When I said:

I've always written like this. You're welcome to trawl my profile back to the days when LLMs were largely useless, and you'll find the same results.

I mean that that specific comment had zero AI in it, and is of a style that strikes me as self_made_human from a few years back, as raw as it gets. It was quickly jotted off, with none of the usual revisions or editing passes I make a point of doing manually. It is as me as it gets, and wouldn't be out of place three years back. It lacks the effort and polish I aspire to today.

Hell, I was doubly mad because I made an intentional effort not to succumb to just asking him to check ChatGPT (which would have given him excellent advice on a topic as done to death as this one), since he clearly wanted a more personal touch. I didn't even ask ChatGPT to write boilerplate that I could have theoretically co-opted as my own. I saw the comment, noticed, hey, I'm actually studying NICE guidance on initiating and managing antidepressant usage, and decided to just scribble down my understanding of best practice. I am, after all, mostly a shrink, even if I've got more shrinking to do.

So, here I am, providing what I hope is accurate and helpful advice, the old-fashioned way, and someone comes along and starts shit. I might be a moderator, but I have my limits. Anyone calling me a "slopmonger" can fuck right off. As this current example of discourse demonstrates, I am more than happy to be civil and take pains to explain myself if the other person extends me the same courtesy. I appreciate that you have.

https://www.themotte.org/post/2368/culture-war-roundup-for-the-week/354239?context=8#context

Here is a thread outlining my stance towards prior accusations of AI usage, where I am perfectly happy to acknowledge that I have used it (when I've actually used it). You'll notice that I've spent a great deal of time explaining the same thing to jkf in good faith, in an attempt to convince him of the merits of my stance. That hasn't worked, and I am offended by new accusations when the evidence on display is very clearly not AI. It's like someone going around with a loudspeaker telling people I'm a sex offender, when the rap was for public urination while drunk. Even if it was technically correct (it wasn't here), I have little energy to spare to have this argument again.

Alternatively, this:

https://www.themotte.org/post/2368/culture-war-roundup-for-the-week/354252?context=8#context

This strikes me as quite distasteful. It strikes me as someone being upset they got some criticism, then deciding to use their mod powers to make an ad hominem attack rather than ignoring or addressing the criticism. If you really don't care what the lesser writers here think of your style, why bother to dig through the mod log?

I don't think opening the moderation log is an abuse of mod power in any meaningful sense. Moderation actions are public, anyone can see them on the sidebar. The panel only shows me the ones linked to a specific user. I didn't slap him with a ban, or start a fight. Moderators are only human, mea culpa.

If he's going to call me a slopmonger, when I think I've got more than enough evidence of engagement (presumably high quality, though everyone is at liberty to form their own opinions, I'm not your dad, I think), then I feel within my rights to point out that he has almost nothing to his name, and what he does have is negative. It's genuinely impressive to have been here so long and still achieve so little. Both lurking moar and engaging less are valid options.

And I hope that I have demonstrated, to your satisfaction, that I am usually open to criticism, and have, in fact, had this same conversation with him in the past.

Bruh. Let me summon @Throwaway05 :

  • Do you think it's possible to make recommendations or suggestions about antidepressant usage without heavily stressing the importance of a full physical to rule out medical causes for low mood?

  • Do you think the information given by OP was sufficient to make a clinical recommendation beyond the most universally applicable points?

I strongly suspect he's going to back me up there. Since the facts aren't really in dispute, all that's left is finding a certain string of letters to convey the message. I see nothing "AI" about my choice of phrasing, that's just... normal writing. It's oodles less formal than what I might go for in an actual effortpost, because it was smashed out in 5 minutes in the middle of a study session.

To the extent that the 'best' "AI detectors" don't think it's AI at all, I'm very curious to know what stylistic tells you imagine you see, and to see an effort to compare them with my earliest writing. I'm not going to bother; I've already put in more than sufficient effort, and I am generally honest about using LLMs, if and when I do use them.

Dude, I've been on here... I don't remember actually, but a long time before I saw you show up.

I'll save you the bother. We've both been on themotte.org since September 2022. I've been a user of /r/TheMotte since just after it split off from the CWR thread on /r/SSC.

And in the span of 3 years, the only notable events in your mod log are two warnings. Not a single AAQC, and people stumble into those by accident. I'll welcome your criticism about my writing style when you write something to impress me first. Or even impress anyone, I don't select the nominees, those are largely on the basis of popular opinion. It takes as little as one person hitting report.

When someone like @Amadan or @2rafa or @phailyoor or.... criticizes my writing style or my very limited use of AI (in this case, exactly zero), I listen. When I didn't even use the damn thing, I'm not going to care very much about your unfounded concerns. If you don't like the self_made_human house style, you're entirely at liberty to not read it.

If Bryan isn't a drooling senile mess at 120, then he's probably benefited from some kind of drug that rejuvenates the brain and restores neuroplasticity too. Taking LSD or shrooms helps with that today, even if it's not going to cure dementia.

Wow.

I guess we have to expand the taxonomy of LLM psychosis, to account for people so paranoid/blind that they see AI the moment someone bothers to use markdown formatting. If bullet points are all it takes to set you off, then one to the brain is probably the best possible cure.

I've always written like this. You're welcome to trawl my profile back to the days when LLMs were largely useless, and you'll find the same results.

And, for what it's worth, that comment was hastily typed out while in the midst of studying actual notes on antidepressant prescription according to UK guidance. You just can't win.

Guess what? The LLMs have read the same literature. There isn't much room to put some kind of unique human spin on the basics of choosing and switching between antidepressants. If ChatGPT had written it for me, it would have been thrice as long, and probably more comprehensive. In which case, I am flattered to be mistaken for it.

How do AI artists deal with preserving character details from image to image? It seems to me this is even more important for furry art (various fur patterns must be harder to reproduce correctly than "black hair, pixie cut").

Nano Banana or GPT Image are perfectly capable of ingesting reference images of entirely novel characters, and then placing them idiomatically in an entirely new context. It's as simple as uploading the image(s) and asking it to transfer the character over. In the old days of 2023, you'd have to futz around fine-tuning Stable Diffusion to get far worse results.
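
If you'd rather script it than use the chat UI, the workflow looks roughly like this with Google's genai Python SDK, as I understand it. Treat the model name and the response handling as assumptions to check against the current docs; this is a sketch of the idea, not gospel.

```python
from google import genai
from PIL import Image

# Assumptions: the google-genai SDK, an API key, and a model name
# ("gemini-2.5-flash-image", i.e. Nano Banana) that may have changed by the
# time you read this. The reference image path is obviously hypothetical.
client = genai.Client(api_key="YOUR_API_KEY")
reference = Image.open("my_character_reference.png")

response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=[
        "Keep this exact character (markings, fur pattern, outfit) and draw "
        "them ordering coffee at a rainy street stall.",
        reference,
    ],
)

# Assumption: generated images come back as inline-data parts on the response.
for part in response.candidates[0].content.parts:
    if getattr(part, "inline_data", None) is not None:
        with open("character_in_new_scene.png", "wb") as f:
            f.write(part.inline_data.data)
```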

I've done my time with Stable Diffusion, from the closed alpha to a local instance running on my pc.

Dedicated image models, or at least pure diffusion ones, are dead. Nano Banana does just about everything I need. If I were anal about the drop in resolution, I'd find a pirate copy of Photoshop and stitch it together myself; I'm sure you can work around it by feeding crops into NB and trusting they'll align.

All of the fancy pose tools like ControlNet are obsolete. You can just throw style and pose references at the LLM and it'll figure it out.

I suppose they might have niche utility when creating a large, highly detailed composition, but the pain is genuinely not worth it unless you absolutely must have that.

I wanted to write a post about some of these events, specifically the change in attitude among titans of industry like Linus Torvalds and Terence Tao. I'm no programmer, but I like to peer over their shoulders, and I know enough to find it profoundly disorienting to see the creator of Linux, a man whose reputation for code quality involves tearing strips off people for minor whitespace violations, admit to vibe-coding with an LLM.

Torvalds and Tao are as close to gods as you can get in their respective fields. If they're deriving clear utility from using AI in their spheres, then anyone who claims that the tools are useless really ought to acknowledge the severe Skill Issue on display. It's one thing for a concept artist on Twitter to complain about the soul of art. It is quite another for a Fields Medalist to shrug and say, "Actually, this machine is helpful."

Fortunately, people who actually claim that LLMs are entirely useless are becoming rare these days. The goalposts have shifted with such velocity that they've undergone a redshift. We've moved rapidly from "it can't do the thing" to "it does the thing, but it's derivative slop" to "it does the thing expertly, but it uses too much water." The detractors have been more than replaced by those who latch onto both actual issues (electricity use, at least until the grid expands) and utter non-issues to justify their aesthetic distaste.

But I'm tired, boss.

I'm sick of winning, or at least of being right. There's little satisfaction to be had about predicting the sharks in the water when I'm treading that same water with the rest of you. I look at the examples in the OP, like the cancelled light novel or the fake pop star, and I don't see a resistance holding the line. I see a series of retreating actions. Not even particularly dignified ones.

First they ignore you, then they laugh at you, then they fight you, then you win.

Ah, the irony of me being about to misattribute this quote to Gandhi, only to be corrected by the dumb bot Google uses for search results. And AI supposedly spreads misinformation. It turns out that the "stochastic parrot" is sometimes better at fact-checking than the human memory.

Unfortunately, having a lower Brier score, while good for the ego, doesn't significantly ameliorate my anxiety regarding my own job, career, and general future. Predicting the avalanche doesn't stop the snow. And who knows, maybe things will plateau at a level that is somehow not catastrophic for human employability or control over the future. We might well be approaching the former today, and certain fields are fucked already. Just ask the translators, or the concept artists at Larian who are now "polishing" placeholder assets that never quite get replaced (and some of the bigger companies, like Activision, use AI wherever they can get away with it, and don't seem to particularly give a fuck when caught out). Unfortunately, wishing my detractors were correct isn't the same as making them correct. Their track record is worse than mine.

The TEGAKI example is... chef's kiss. Behold! I present a site dedicated to "Hand-drawn only," a digital fortress for the human spirit, explicitly banning generative AI. And how is this fortress built? With Cursor, Claude, and CodeRabbit.

(Everyone wants to automate every job that's not their own, and perhaps even that if nobody else notices. Guess what, chucklefuck? Everyone else feels the same, and that includes your boss.)

To the question "To which tribe shall the gift of AI fall?", the answer is "Mu." The tribes may rally around flags of "AI" and "Anti-AI," but that doesn't actually tell you whether they're using it. It only tells you whether they admit it. We're in a situation where the anti-AI platform is built by AI, presumably because the human developers wanted to save time so they could build their anti-AI platform faster. This is the Moloch trap in a nutshell, clamped around your nuts. You can hate the tool, but if the tool lets your competitor (or your own development team) move twice as fast, you will use the tool.

We are currently in the frog-boiling phase of AI adoption. Even normies get use out of the tools, and if they happen to live under a rock, they have it shoved down their throats. It's on YouTube, it's consuming TikTok and Instagram, it's on the damn news every other day. It's in your homework, it's in the emails you receive, it's you double checking your prescription and asking ChatGPT to explain the funny magic words because your doctor (me, hypothetically) was too busy typing notes into an Epic system designed by sadists to explain the side effects of Sertraline in detail.

To the extent that it is helpful, and not misleading, to imagine the story of the world to have a genre: science fiction won. We spent decades arguing about whether strong AI was possible, whether computers could be creative, whether the Chinese Room argument held water. The universe looked at our philosophical debates and dropped a several trillion parameter model on our heads.

The only question left is the sub-genre.

Are we heading for the outcome where we become solar-punks with a Dyson swarm, leveraging our new alien intelligences to fix the climate and solve the Riemann Hypothesis? Or are we barrelling toward a cyberpunk dystopia with a Dyson swarm, where the rich have Omni-sapients in their pockets while the rest of us scrape by in the ruins of the creative economy, generating training data for a credit? Or perhaps we are the lucky denizens of a Fully Automated Luxury Space Commune with optional homosexuality (but mandatory Dyson swarms)?

(I've left out the very real possibility of human extinction. Don't worry, the swarm didn't go anywhere.)

The TEGAKI example suggests the middle path is most likely, at least for a few years (and the "middle" would have been ridiculous scifi a decade back). A world where we loudly proclaim our purity while quietly outsourcing the heavy lifting to the machine. We'll ban AI art while using AI to build the ban-hammer. We'll mock the "slop" while reading AI summaries of the news. We'll claim superiority over the machine right up until the moment it politely corrects our Gandhi quotes and writes the Linux kernel better than we can.

I used to think my willingness to embrace these tools gave me an edge, a way to stay ahead of the curve. Now I suspect it just means I'll be the first one to realize when the curve has become a vertical wall.

Thanks!

I feel like someone might have answered this already, but I'm too lazy to look it up:

As someone who is curious about Gundam, where do I start?

I've always raised an eyebrow at this advice. Speaking for myself, I've never felt that photography distracted me from being "in the moment." If I'm visiting a nice place, I'm going to whip out my phone, take as many photos as I please, and then use my Mk. 1 human eyeballs. I don't perceive events entirely through a viewfinder.

And I notice that my memory of events is significantly enhanced by photos. I have forgotten a ton of things until I've seen a picture that either brought back memories or let me reconstruct them.

You would have to have a very pathological attachment to a camera for taking photos at the frequency of a normal 21st century human to be detrimental.

You need a psychiatrist. I am only two-thirds of one, but fortunately for you, I've got exams and that means actually reading some of the papers.

(Please see an actual psychiatrist)

The choice of initial antidepressant is often a tossup between adherence to official guidelines, clinical judgements based on activity profile and potential side effects, and a dialogue with the patient.

In short? It is usually not very helpful to worry too hard about the first drug. They're roughly equally effective (and where one is superior, it's by a very slim margin). But in certain situations:

  • Can't sleep? Lost appetite? Mirtazapine
  • Too sleepy? Already gaining weight? Absolutely not mirtazapine, consider bupropion or vortioxetine
  • Afraid of sexual side effects? Bupropion or vortioxetine again, mirtazapine too
  • Tried an SSRI and it didn't help? It's better to try a different class of antidepressant instead of just another SSRI, and so on.

(But before the meds, a physical checkup is mandatory, as are investigations to rule out medical causes. You're going to feel depressed if your thyroid isn't working, or if you've got Cushing's.)

  • Antidepressants work. They beat placebo, but not by a massive margin.
  • Effects are synergistic with therapy.

Unfortunately, you haven't given me enough information to make an informed choice. I'd need to know about the severity of your depression, graded based on symptoms, lifestyle, overall health and a bunch of other things. Hopefully your actual doctor will do their due diligence.

I would be the last person to claim that conscientiousness is unimportant. ADHD sucks.

But I can take a pill to improve my conscientiousness, and I can't take one that moves my IQ in a positive direction. So it is not nearly as harsh a constraint.

Just ask them for sources? You can also share output between multiple models and see their points of agreement or contention.

I know that OpenRouter lets you use multiple models in parallel, but I suspect a proper parallel orchestration framework is most likely to be found in programs like OpenCode, Aider, Antigravity etc.
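
If you'd rather roll it yourself than reach for one of those programs, the minimal version against OpenRouter's OpenAI-compatible endpoint looks something like this; the model IDs are examples, swap in whatever's currently listed:

```python
import asyncio
from openai import AsyncOpenAI

# OpenRouter speaks the OpenAI chat-completions dialect, so the stock openai
# client works if you point it at their base URL. Model IDs are examples only.
client = AsyncOpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",  # placeholder
)

MODELS = ["openai/gpt-4o", "anthropic/claude-sonnet-4", "google/gemini-2.5-pro"]

async def ask(model: str, question: str) -> str:
    resp = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

async def main() -> None:
    question = "What are the first-line treatments for moderate depression? Cite sources."
    answers = await asyncio.gather(*(ask(m, question) for m in MODELS))
    for model, answer in zip(MODELS, answers):
        print(f"--- {model} ---\n{answer}\n")

asyncio.run(main())
```

From there, comparing points of agreement or contention is just a matter of pasting the outputs side by side, or feeding them back to one of the models and asking it to reconcile them.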