
self_made_human

Kai su, teknon?

15 followers   follows 0 users  
joined 2022 September 05 05:31:00 UTC

I'm a transhumanist doctor. In a better world, I wouldn't need to add that as a qualifier to plain old "doctor". It would be taken as a given for someone in the profession of saving lives.

At any rate, I intend to live forever or die trying. See you at Heat Death!

Friends:

I tried stuffing my friends into this textbox and it really didn't work out.

User ID: 454


I'm going to be in my part of Scotland for a minimum of 3 years. I expect this system to hold up running the most demanding titles at extravagant settings till then.

Only thing I'd recommend is make sure you have extra memory slots to expand to 64 GB in the future. I noticed recently that Doom: The Dark Ages has 32 GB as its recommended amount of RAM. So you know, the 32 GB you are getting is enough. It'll probably be enough for the next few years. But I built my PC at the tail end of 2019 and I put 32 GB in it then. It's starting to feel like it's about time to future-proof with at least the option of 64 GB in the future.

Finances meant I was always behind on the RAM train, nursing 16 gigs at subpar speeds till very recently.

I would be surprised if AAA games were that RAM-constrained; they're optimized for consoles, which are lucky to have 16 gigs of unified memory, and 32 is a healthy margin. If it ever seems inadequate, it's about the easiest upgrade you can make to a PC.

Also, if you ever plan on abandoning Windows for Linux, maybe go with the AMD card? Drivers are supposed to be better, and this latest generation seems to have finally got its shit together in terms of ray tracing performance. It still lacks a lot of features like ray reconstruction or frame generation, and DLSS is the superior upscaling technology. But I'm currently taking a 20-30% haircut on my FPS for raytraced games in Linux that use DX12. I'm under the impression AMD does not have this problem.

If I can get a legitimate copy of Windows, I can debloat it and disable telemetry. While Linux compatibility is excellent these days, courtesy of Valve, I don't see much reason to contemplate a change of OS in the near future.

It's certainly not likely enough that I'd accept the large performance hit of dropping from a 5080 to a 9070 XT, even if the latter is a very decent card, especially at that price. I could easily have been swayed if I didn't have the luxury of a bigger budget that can't be better spent elsewhere.

Thank you! That's a very thorough answer, and addresses all of my concerns.

32 GB of RAM is probably sufficient for the next several years. Probably. I went with 64 though, because I was told motherboards don't like having 4 sticks of RAM. So if you go with 2x16 sticks now, you may have trouble doubling that amount later.

Hmm.. There's plenty of DDR4 still kicking around, so I doubt it would be a real issue. I don't think using RAM from different SKUs is a problem if they're the same nominal frequency and timings, but I could be wrong.

OLED is awesome. I don't have it on my PC monitor, but I've been using my OLED TV for some of my PC usage (connected via HDMI) for 4 years. No issues. If you take precautions, you should probably avoid burn-in. For gaming there's nothing else that comes close. You get perfect blacks and vivid colors, great response time, and high refresh rates all in one.

A 50 inch OLED TV with VRR is barely more expensive than a 27" OLED monitor, for some godforsaken reason that probably has to do with the latter's niche appeal. That's what I'm aiming for, but I might begrudgingly have to settle if there's simply no room on my rather cramped desk.

Hmm... While I can see reports of 5080 connectors melting, it seems far less common than with the 5090. I'd consider a different card if there were anything but older 4080 Supers and Tis in that price bracket.

Depends on the games you play; a 9800x3d would be overkill for me. I know I’m always tempted by the latest and greatest, but I built a cheaper system 4 years ago and haven’t regretted it at all. Of course YMMV based on the games you play - you know your needs better than anyone.

I'm treating myself haha. I've already convinced myself that a 9950x3d would be gross overkill. Besides, CPUs are far more reasonably priced than GPUs.

This PC should last me at least another 3 years, and if the parts still perform at that point, I'll cannibalize what I can.

https://www.amazon.co.uk/gp/aw/d/B01J471364/ref=ox_sc_act_title_1?smid=AOJUIW65VI5BZ&psc=1

That's the cheapest one with roughly the same specs. GBP 2349

Anyone built a PC recently?

I'm pretty close to biting the bullet on a new high-end, pre-built system.

Before anyone jumps in and says that it would be cheaper to build it myself (I've been there haha), I've decided the added premium is something I'm willing to pay, especially when it comes with a warranty on parts. I've built several PCs myself, but the last time I did so, I was cursed with pernicious crashing that I couldn't for the life of me solve for good.

The two most important components I've more or less fixed are a Ryzen 9800x3d and an RTX 5080. All I really expect this machine to be used for is gaming, and I want to emphasize single-core performance as my favorite games tend to be CPU bottlenecked. I'd have loved to get a 4090, but the prices are ridiculous. I could get a 9070 XT, but my budget does go further than that. I don't think I'll be using it much for local AI, though I'm occasionally tempted to tinker.

I intend to buy a model with 32 GB of fast DDR5 memory and a decent NVMe SSD, with room for me to buy and install more storage myself.

I haven't looked too closely at the various flavors of motherboard on offer, but I expect at this price range they're roughly equivalent. It might be a pain to get wired internet, so I'm willing to settle for the speed and latency hit from running wifi-only if I have to.

I intend to skimp on a pre-installed license for Windows. I can get it from [REDACTED] for free.

I haven't decided on all the peripherals yet, but I'll probably get a 100% mechanical keyboard; I already own a decent mouse. I'd like a 27" or larger high refresh rate monitor, with QHD being a resolution I'm happy with. I might end up splurging for 4K or an OLED if I feel like it. No need for speakers, as I use Bluetooth earphones and don't usually notice the latency.

I've been looking at an array of retailers, and most seem to provide this in the 2500-2900 GBP range.

Questions:

  1. Anything obviously wrong here? Any clearly suboptimal choices (beyond buying pre-built)?

  2. Can I cheap out on something without noticeable downsides?

  3. My impression, from watching LTT if nothing else, is that watercooling is usually not worth the hassle. I've never used it before, and would need to figure it out if I ever have to swap out parts. I think that a decent air cooler is more than sufficient for people who aren't OC maxxing.

  4. Anyone own an OLED monitor? Did you notice any burn-in? I intend to take reasonable precautions in the first place.

Eh, it wouldn't be pretty or fun, but unless you had incredibly bad luck and tore tendons and ligaments beyond repair, it would probably be managed with surgery and physiotherapy. The tines of a fork are rather small and sharp.

The decline of the ability to take for granted that visual imagery, no matter how seemingly realistic, is a reflection of reality?

The inability of a cutting-edge AI model to distinguish subtle ethnic nuances?

Deepfakes? The collapse of consensus reality?

AI companies and governments seem far more concerned about the abuse of AI imagery/video than about text. This is an understandable stance, because people still haven't entirely recalibrated to not being able to trust clear, photorealistic imagery the way we could within recent memory. It's not like Photoshop hasn't been around for a while, but AI image slop is OOMs easier to mass-produce.

I expect that Google, and especially OAI, are deeply concerned about being taken to task on the matter, even if I don't think they should be held liable for what users do with such broad tools, any more than I think Adobe needs to have its clay fired for political cartoons. There's been far more interest and proactive effort in watermarking leading-edge image gen compared to mere words.

Of course we don't know what the watermark is, but if we did, attacking it is usually easy. I haven't seen any hidden watermarks that can't be defeated easily by direct attack.

For a sophisticated user? Certainly. But the tricks that only somewhat knowledgeable people might try, such as obvious transformations like cropping, rotating, scaling, compression or color shifting, probably won't work.

If Google hashes all their images and saves the results, there are perceptual hashing techniques that are troublesome to defeat and which resist fairly major transforms. That approach is already all over the place, particularly for CP detection. It is unclear to me, at the very least, what lengths I'd have to go to to make the risk of being caught out minimal.
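For a rough sense of why such hashes are hard to shake off, here's a minimal sketch using the open-source imagehash package; the filenames, the specific edits and the distance threshold are my own made-up assumptions, and this is obviously not whatever Google actually runs internally.

```python
# Minimal sketch: perceptual hashes survive simple edits that would change a cryptographic hash.
# Assumes the third-party Pillow and imagehash packages; filenames are placeholders.
from PIL import Image
import imagehash

original = Image.open("generated.png")

# Simulate the "casual" attacks mentioned above: downscale and re-compress as JPEG.
small = original.convert("RGB").resize((original.width // 2, original.height // 2))
small.save("edited.jpg", quality=60)
edited = Image.open("edited.jpg")

h_orig = imagehash.phash(original)
h_edit = imagehash.phash(edited)

# imagehash overloads subtraction to give the Hamming distance between hashes;
# visually similar images stay within a few bits, so the edited copy still matches.
print(h_orig - h_edit, h_orig - h_edit <= 8)
```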

I expect @DaseindustriesLtd would be the person to ask on that front.

I wonder if, this year, there'll be workflows like: use an LLM to turn a detailed description of a scene into a picture, and then use inpainting with a diffusion model and a reference photo to fix the details...?

You can already do this; all of the pieces are there.
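As a very rough sketch of the diffusion half of that workflow, using the Hugging Face diffusers library: the checkpoint, filenames and prompt below are placeholders, and actually locking the output to a specific person's face would need something like a LoRA or IP-Adapter on top of plain inpainting.

```python
# Rough inpainting sketch with diffusers; checkpoint, filenames and prompt are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

base = Image.open("llm_generated_scene.png").convert("RGB")  # output of the text-to-image step
mask = Image.open("region_to_fix.png").convert("RGB")        # white where details should be redone

result = pipe(
    prompt="the same scene, with the subject's face cleaned up and photorealistic",
    image=base,
    mask_image=mask,
).images[0]
result.save("fixed_scene.png")
```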

If I was willing to engage in a mild bout of Photoshopping, especially using its own AI generative fill and face restoration features, I'd go from 1 in 20 images being usable to closer to 1 in 10. I'm too lazy to bother at the moment, but it would be rather easy!

If I had to think of other easy ways to improve the success rate, using close-cropped images would be my go-to. Less distracting detail for the model. I could also take one of the horrors, crop it to just the face and shoulders, provide a reference image and ask it to transfer the details. I could then stitch it back together in most full-featured image editors.
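If anyone wants to try that crop-and-stitch route without a full-featured editor, a few lines of Pillow will do; the box coordinates and filenames below are invented purely for illustration.

```python
# Crop-and-stitch sketch with Pillow; box coordinates and filenames are invented.
from PIL import Image

full = Image.open("ai_output.png")
box = (400, 120, 900, 700)                 # hypothetical face-and-shoulders region (L, T, R, B)
full.crop(box).save("face_crop.png")       # send this crop, plus a reference photo, to the model

fixed = Image.open("face_crop_fixed.png")  # the corrected crop the model returns
full.paste(fixed.resize((box[2] - box[0], box[3] - box[1])), (box[0], box[1]))
full.save("stitched.png")
```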

It's a plus that right now, it's easier to just spam regenerate images. If the failure rate was significantly higher, that's how I'd get around it.

By all means, remember their bullshit. I haven't forgotten either, and won't for a while. The saying "never attribute to malice what can be explained by stupidity" doesn't always hold true, so suspicion is warranted; if there's another change in the CW tides, Google is nothing if not adroit at doing an about-face.

It's just that in this case, stupidity includes {small model, beta testing, brand new kind of AI product} and the given facts lean more towards that end.

Thanks to living in the genteel-authoritarianism that is Britain, I've made my peace with every app and their mother asking for biometrics and scanning my face.

Obvious fraud will be caught, and as is, you need to generate 20 pictures for 1 that'll pass to a casual onlooker, closer to a hundred to be imperceptible to someone who knows you.

People have apparently moved to Instagram, which is much messier, network-based and features the ability to see pictures of someone they didn’t curate.

I'm mildly annoyed that Google is being so laissez-faire about things and letting any idiot who asks into their dev preview. I'm no dev, but I was there years before it was cool. Expect everyone to know about this soonish, and adopt it faster than earlier AI image gen models.

Far more people want flattering photos for insta than want to pass off AI art as their own, which is currently the primary use case barring artistic expression and catfishing schemes. You don't even need to learn how to make a LORA or fine-tune a model, just supply a few pics and ask nicely.

Fair warning to anyone inspired to pass off these images as your own:

They're watermarked. If you use AI Studio, they have a blue logo that's trivial to remove. But even so, including on the API, they're algorithmically watermarking outputs. It's almost certainly imperceptible to the naked eye, and resistant to common photo manipulation techniques after the fact.

If they're sharing with 3rd parties like Meta, expect Instagram to automatically throw up "AI generated" tags in the near future if it doesn't do so now. You can probably hedge your bets by editing or removing EXIF metadata, but don't say I didn't warn you.
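For what it's worth, stripping the metadata is trivial; a Pillow sketch like the one below (filenames are placeholders) copies only the pixels and drops EXIF and friends, though it does nothing whatsoever about a watermark embedded in the pixels themselves.

```python
# Metadata-stripping sketch with Pillow; filenames are placeholders.
# Re-saving only the pixel data discards EXIF/XMP tags, but does NOT touch
# any watermark baked into the pixels themselves.
from PIL import Image

img = Image.open("gemini_output.jpg")
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))   # copy pixels only, leaving all metadata behind
clean.save("clean_copy.jpg", quality=95)
```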

I must say that I don't quite agree with this take.

Google has definitely cooked themselves with ridiculous levels of prompt injection in their initial Imagen release, as evidenced by people finding definitive proof of the backend adding "person of color" or {random ethnicity that isn't white} to prompts that didn't specify that. That's what caused the Native American or African versions of "ancient English King", or literal Afro-Samurai.

They back-pedalled hard. And they're still doing so.

Over on Twitter, one of the project leads for Gemini, Logan Kilpatrick, is busy promising even fewer restrictions on image generation:

https://x.com/OfficialLoganK/status/1901312886418415855

Compared to what DALLE in ChatGPT will deign to allow, it's already a free-for-all. And they still think they can loosen the reins further.

Google infamously curates its results to be racially diverse to the detriment of accuracy, so I'm not surprised. Your real face was not sufficiently equitable according to the algorithm, so your physical appearance was adjusted to be in line with their code of conduct.

You'd expect that a dataset with more non-Caucasians in it would be better for me! Of course, if they chose to manifest their diversity by adding a billion black people versus a more realistic sampling of their user pool...

Even so, I don't ascribe these issues to malice, intentional or otherwise, on Google's part.

What strikes me as the biggest difference between current Gemini outputs and those of most dedicated image models is how raw they are. Unless you specifically prompt it, or append examples, they come out looking like a random picture on the internet. Very unstylized and natural, as opposed to DALLE's deep-fried mode collapse, or Midjourney's so-aesthetic-it-hurts approach.

This is probably a good thing. You want the model to be able to output any kind of image, and it can. The capability is there, it only needs a lot of user prompting, or in the future, tasteful finetuning. If done tastelessly, you get hyper-colorful plastinated DALLE slop. OAI seems to sandbag far more, keeping pictures just shy of photo-realism, or outright nerfing anime (and hentai, by extension).

This is why every model that attempts to chase alignment or whatever arbitrary standard will be retarded in practice. If you punish your algorithm for being accurate, then it won't be accurate. (Surprise!) It won't give you 'accurate result with DEI characteristics': it will just shit itself and give you something terrible.

This would be true if Google was up to such hijinks. I don't think they are, for reasons above. Gemini was probably trained on a massive, potentially uncurated data set. I expect they did the usual stuff like scraping out the CP in LAION's data set (unless they decided not to bother and mitigate that with filters before an image is released to the end user), and besides, they're Google: they have all of my photos on their cloud, and those of millions of others. And they certainly run all kinds of Bad Image detectors for anything you uncritically permit them to upload and examine.

That being said, everything points towards them training omnivorously.

OAI, for example, has explicitly said in their new Model Spec that they're allowing models to discuss and output culture war crime-think and Noticing™. However, the model will tend to withdraw to a far more neutral persona and only "state the facts" instead of its usual tendency to affirm the user. You can try this yourself with racial crime stats; it won't lie, and will connect the dots if you push it, while hedging along the way.

Grok, however, is a genuinely good model. It won't even suck up to Musk, and he owns the damn thing.

TLDR: Gemini's performance is more likely constrained by its very early nature, small model, tokenization glitches and unfiltered image set rather than DEI shenanigans.

I've always prided myself on my ability to stay at the bleeding edge of AI image gen.

As you'd expect, given my enthusiastic reporting on Google's public access to their new multimodal AI with image generation built in, I decided to spend a lot of time fooling around with it.

I was particularly interested in generating portrait photos of myself, mostly for the hell of it. Over on X, people have been (rightfully) lauding it as the second coming of Photoshop. Sure, if you go to the trouble of making a custom LORA for Stable Diffusion or Flux, you can generate as many synthetic images of yourself as your heart desires, but it is a bit of a PITA. Think access to a good GPU and dozens of pictures of yourself for best results, unless you use a paid service. Multimodal LLMs promise to be much easier, and more powerful/robust.

I spent a good several hours inputting the best existing photos I have of my face into it, and then asking it to output professionally taken "photos".

The good news:

It works.

The bad news:

It doesn't work very well.

I'm more than used to teething pains and figuring out how to get around the most common failure modes of AI. I made sure to use multiple different photos, at various angles, with different hairstyles and outfits. It's productive to think of it as commissioning an artist online who doesn't know you very well: give them plenty to work with. I tried putting in a single picture. Two. Three. Five. Different combinations, many different prompts, before I drew firm conclusions.

The results almost gave me body dysphoria. Not because I got unrealistically flattering ersatz-versions of myself, but quite the opposite.

The overwhelming majority of the fake SMHs could pass as my siblings or close cousins. Rough facial structure? Down pat, usually. There are aspects that run in the family.

Finer detail? Shudder. The doppelgangers are usually chubbier around the cheeks, and have a BMI several points above mine. I don't have the best beard on the planet, but it's actually perfectly respectable. This bastard never made it entirely through puberty.

The teeth.. I've got a very nice set of pearly whites, and I've been asked multiple times by drunken Scotsmen and women if they're original or Turkish. These clones came from the discount knock-off machine that didn't offer dental warranties.

The errors boil down to:

  1. Close resemblance, but subtly incorrect ethnicities. Brown-skinned Indians are not all made alike; I'm not Bihari or any breed of South Indian. Call it the narcissism of small differences if you must.

  2. Slightly mangled features as above.

  3. Tokenizer issues. The model doesn't map pixels to tokens 1:1 (that would be very expensive computationally), so fine details in a larger picture might be jarring on close inspection.

  4. Abysmal taste by default, compared to dedicated image models. Base Stable Diffusion 1.0 could do better in terms of aesthetics; Midjourney today has to be reined in from making people perfect.

  5. Each image takes up a few hundred tokens (the exact count is handily displayed). If a picture is a thousand words, then that's like working with a hundred. I suspect there is a lot of bucketing or collapse to the nearest person in the data set involved.

  6. It still isn't very good at targeted edits. Multiple passes on a face subtly warp it, and you haven't felt pain until you've asked it to reduce the (extra) buccal fat and then had it spit out some idiot who stuck his noggin into a bee hive.

If I had to pick images that could pass muster on close inspection, I'd be looking at maybe one in a hundred. Anyone who knows me would probably be able to tell at a glance that something was off.

People on X have been showing off their work, but I suspect that examples, such as grabbing a stock photo of a model and then reposing it with a new item in hand, only pass because we're seeing small N or cherry picked examples. I suspect the actual model in question could tell something was up.

Of course, this is a beta/preview. This is the worst the tech will ever be, complaints about AI fingers are suspiciously rare these days, aren't they?

I'm registering my bets that by the end of the year, the SOTA will have leapt miles forward. Most people will be able to generate AI profile pictures, flesh out their dating app bios, all the rest with ease and without leaving home. For the lazy, like me, great! For those who cling to their costly signals, they're about to get a lot cheaper, and quickly. This is Gemini 2.0 Flash, the cheap and cheerful model. We haven't seen what the far larger Pro model can manage.

(You're out of luck if you expect me to provide examples, I'm not about to doxx myself. If you want to try it, find an internet rando who is a non-celebrity, and see how well it fares. For ideal results, it needs to be someone who isn't Internet Famous, since the model would otherwise have a far better pre-existing understanding of their physiognomy. Uncanny resemblances abound, but they're uncanny.)

You know, for all the many downsides to a career in medicine, I'm profoundly grateful that I haven't had to scrabble, beg and apply scattershot to job offers as if I was hunting a goose that laid golden eggs with a shotgun.

I'm probably just lucky. The job market for fresh grads, even those with an MBBS, is tight in both India and the UK. Arguably worse for the latter, due to both a massive increase in med school enrollment without a concomitant increase in higher training positions, as well as an influx of international doctors who find even the grim conditions there an upgrade. That same glut hasn't struck the higher levels of job roles, because it's far harder and more time consuming to manufacture a consultant or specialist.

In India, I think I was batting over 90% acceptance rates for all the jobs I applied for. The one place that didn't take me reached out a few months later asking if I was still looking (I wasn't). Maybe it was a CV that had proper grammar and the perfect degree of self-aggrandizement to inflate limited (at the time) work experience. Maybe it was the fact that I come across as friendly, earnest and even painfully polite and respectful. It might just have been dumb luck.

In the UK, I took one glance at the ballache that was applying to jobs when all you've got on your CV was a pass on the PLABs and a GMC number, and opted to not really bother. This was made far easier by the fact that psych training only considered scores in competitive exams, instead of (((holistic factors))).

Come to think of it, even applying for med school in India never required you to scrape and beg. You sat the exam, and you either beat out the millions of hopeful aspirants, or you tightened your belt and hoped for better luck next year.

That's what matters, IMO. If you have a robust grading system that winnows the chaff straight from the get go, employers can be far more complacent about the quality of potential employees. It all boils down to supply and demand. If there's an oversupply of candidates, or even the impression of too much choice (to a first approximation, the number of single men equals that of single women), then you get the party with the power imbalance in their favor playing hard to get.

The only other plausible solution to this is some kind of costly signal, such as educational qualifications, or having a girlfriend (while seemingly perverse before you actually think about it, taken men elicit far more interest from the opposite sex).

Of course, the old saws like Leetcode are facing rapid annihilation from people using AI to jump hurdles for them. The only real solution, for SWEs, would be to look at real projects, or have in-person and monitored interviews.

At this point, I'm not sure what utility a DSLR offers over a newer mirrorless camera. If you already own one, great, but they're a dying breed.

Frankly speaking, the computational photography that phone cameras pull off is nigh magical (though some of it is plain hallucinations of non-existent details), and I wish dedicated camera manufacturers took more inspiration from them rather than vice versa.

I've never held my hobby of photography highly enough to splurge for a DSLR.

My brother did splurge on his, and now it collects dust, with the bulk of his photography done on his iPhone 15 Pro Max.

It is unclear to me that He/He shitposts at all. It comes across as charmingly misguided sincerity. Too much social awkwardness or inability to conform gets labeled autistic these days for me to pin that label on him, but I don't think he's consciously shitposting or cultivating a persona.

That's probably just how He is.

And when he says "I will not give a lecture in Harvard university for free.", all I can say is that he knows his worth, king.

Despite asking an LLM, I can't figure out how to do this on the Gmail Android app. If I get a chance to try on a desktop, I'll do so!

You know, if you used ChatGPT to clean up your prose and lack of formatting, I'd probably give you a pass just this once.

But I must point out that this isn't the place to engage in "consensus building". Genuine expression of one's beliefs is inherently a form of advocacy, and something we do allow, and the two can be hard to distinguish, but this is leaning towards what we would frown upon.

I'm not putting a warning on your profile, you did put effort into this, and you seem new, but it's an FYI to do things differently.

Does that happen legally? Would the police do anything if you reported it? Or is all tacitly condoned by the government?

The boost of confidence is visible outside the AI sphere too. I find it remarkable that He Jankui is shitposting on Twitter all the time and threatening to liberate the humanity from the straitjacket of «Darwin's evolution». A decade earlier, one would expect his type to flee to the West and give lectures about the menace of authoritarianism. But after three years in Chinese prison, he's been made inaugural director of the Institute of Genetic Medicine at Wuchang University and conspicuously sports a hammer-and-sickle flag on his desk.

I remember writing about how I was incredibly disappointed when China jailed He. He (pronoun) deserves to be celebrated, and it was a heavy blow against my view of China as a based, technocratic, forward-thinking nation without the usual Western hangups and hand-wringing about bioethics that prevent millions or billions of lives from being improved.

I take immense pleasure in liking all his posts on Twitter, he's a living meme, but I've always had an inordinate fondness for mad scientists. Now he's talking about gene editing (non-crispr) to eliminate Alzheimer's risk. You can see the begrudging concession to his adversaries and critics as he/He says he won't proceed with the work without IRB approval. They should shut Wuhan down and write him several blank cheques with the funds.

I agree with you that the biggest hurdle China faces is itself. That Xi is able to admit fault and make concessions towards people the regime stomped on, like Ma, is a good sign. If only they'd fucking give up on Taiwan, by the time they're likely to take it, TSMC would be smoldering thermite. The US would do that, even if the Taiwanese were ready to capitulate.

I think it's far more likely that the person you're replying to is overstating or accidentally exaggerating the degree of disability here.

I have a hard time imagining someone who can't read becoming a doctor. Maths? The most that average doctors do is basic arithmetic or algebra that's middle school level.

I'm talking about figuring out what x should be when trying to divide doses or convert one unit of measurement into another. With a calculator at hand, and a willingness to redo sums multiple times, even someone with severe impairment would probably manage. These days, you can just look up doses for just about every drug under the sun online.
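To give a concrete sense of how low that bar is, here's the sort of sum I mean, written out; the drug strength, weight and dose are made-up illustrative numbers, not dosing advice.

```python
# A made-up example of routine dose arithmetic; numbers are illustrative, not clinical guidance.
weight_kg = 24                        # patient weight
dose_per_kg_mg = 15                   # prescribed dose, mg per kg
concentration_mg_per_ml = 250 / 5     # suspension labelled "250 mg / 5 mL"

total_dose_mg = weight_kg * dose_per_kg_mg            # 360 mg
volume_ml = total_dose_mg / concentration_mg_per_ml   # 7.2 mL to draw up

print(f"Give {total_dose_mg} mg, i.e. {volume_ml:.1f} mL per dose")
```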

I struggle to think of any occasion I'd run into in clinical practice where I'd be expected to do more. If I were conducting a study or analyzing a research paper, I'd probably have to brush up on my stats and maybe learn something that school or med school didn't teach me.

Funnily enough, I'm in psych training, and also have what could loosely be described as a learning difficulty in the form of ADHD. I never asked for nor received extra time or additional adjustments on the exams I had to clear, as far as the standardized tests in India were concerned, you had to be missing an arm or something to qualify for that. Google tells me that people with dysgraphia could get extra time, but I'll be damned if I heard of that ever happening, or anyone I ran into in my career who fit the bill.

Knowledge, both procedural and arcane, matters the most in med school. I'd hope that this lady had that, and had coping mechanisms that let her circumvent her issues. If she's made it this far without being sued into oblivion, she can probably handle herself.

I just ran into what I can best describe as "viral attachments" during an email exchange with a property management agency.

It's a set of JPEGs which, as best as I can tell, spell out the logos of various social media sites, and which insist on adding themselves to my reply to the thread until I remove them.

Which is weird, because embedded images are already present, so there are giant pixelated FB and X logos just tacking themselves on. Huh.

I'm pretty sure he's a post-turn-of-the-millennium kid, so to an extent, when he talks about "90s" games, he's being exposed to cherry-picked games from that era. Namely the absolute classics, the ones that stood the test of time, and thus the ones that were recommended to him when he was older.

At any rate, my most important contention is that it doesn't matter much whether the "average game" has gotten better or worse with time. There are too many games that are good by most metrics coming out for any human with a full-time job to exhaust faster than they release.

Well. Except if you have very niche taste. In which case it is possible you're stuck waiting for someone to release something that appeals to you.