@KolmogorovComplicity's banner p

KolmogorovComplicity


				

				

				
1 follower   follows 0 users  
joined 2022 September 04 19:51:16 UTC

				

User ID: 126

KolmogorovComplicity


				
				
				

				
1 follower   follows 0 users   joined 2022 September 04 19:51:16 UTC

					

No bio...


					

User ID: 126

But if you go hiking occasionally the AI can sell you tents and backpacks and cabin rentals.

Really, outcomes in most markets aren't nearly as perverse as what we see with Tinder. Chrome, for instance, doesn't intentionally fail to load web pages so that Google can sell me premium subscriptions and boosters to get them to load. Unlike Tinder, Chrome is monetized in a way that doesn't provide an incentive for its developer to intentionally thwart me in my attempts to use it for its ostensible purpose, and there's enough competition that if Google tried this people would stop using it.

Grandma always said not to fall in love with entities I couldn't instantiate on my own hardware.

Right now I expect it's mostly desperate men using these, but that may have more to do with broader tech adoption patterns than specific appeal. These things can function as interactive romance novel characters, and many women may find that quite compelling.

We're entering uncharted and to some extent even unimagined territory here. Anyone who has thought about this issue realized long ago that AI romance would be a thing eventually, but personally I figured that for it to have much wider appeal than marrying your body pillow, AI would have to achieve human-like sentience. And if the thing someone is falling in love with has human-like sentience, well, who am I to say that's invalid?

What I didn't imagine is that we'd build machines that talk well enough for interacting with them to light up the "social interaction" parts of our brains effectively, but that we can be pretty certain, based on their performance in edge cases and our knowledge of how they work, aren't sentient at all. People falling in love with things that have no inner existence feels deeply tragic.

I don't know. Maybe this is faulty pattern matching or an arbitrary aesthetic preference on my part, and romantic attachment to non-sentient AI is fine and great and these people will find meaning and happiness. (At least as long as they follow grandma's rule, which they can soon.)

Or we could imagine the opposite. Personal AIs that know us intimately might be able to find us perfect friends and partners. Add in augmented reality tech that eliminates distance as a barrier to any form of socialization that doesn't require physical contact, and perhaps we're about to completely wipe out atomization/loneliness and save modernity from itself.

Really, nobody has any idea where this is going. The only safe bet is that it's going to be big. A service enabling people to share 140 character text snippets was sufficient to meaningfully shift politics and culture, and that's peanuts to this, probably even if the current spring ends short of AGI.

There used to be a futurist transhumanism strain here that was more optimistic and trans-positive that has either been driven off or converted to conservative trad thinking, which is a shame.

Futurist transhumanist here. I have no objection to gender transition in principle. If I lived in The Culture and could switch literally at will, I'd probably try it for a while despite being quite comfortable as a straight, gender-conforming (nerd subtype), cis male.

However, the reality is that medical transition at the current level of technology is dangerous, expensive, irreversible, often unconvincing, and can have life-altering side-effects like sterility or permanent dependence on elaborate medical intervention. Medical transition flows from trans identity. Against this dark background, promoting the concept of trans identity, rather than simple acceptance of gender non-conformity, is irresponsible. Promoting this concept to minors as if cis and trans are just two equal choices (or trans is even better — braver, more special, etc.), is wildly irresponsible.

The fact that such a large fraction of people who present at gender transition clinics have serious mental health conditions should be a huge red flag here. A lot of people will likely choose to be thinner in a transhumanist future, but that doesn't make me want to celebrate bulimics as transhumanist pioneers.

On top of this, we've got the social demands of the trans movement. The insistence that e.g. someone who appears male and has male-typical physical abilities must nonetheless be recognized in all social respects as female doesn't fall out of technological transhumanism. I would go so far as to say it's at least somewhat at odds with it. Technological transhumanism is deeply materialist and concerned with physical intervention in the human condition. The primacy the present trans movement places on some inner essence of self-identity, incongruent with physical reality, doesn't sit comfortably within such a framework.

Counter point - We lived for millennia without electricity, but communicating is a key factor in building community, consensus and indeed society. Creating and nurturing those bonds has been a female role for a long time (see who tends to organize church events et al even where the milieu is explicitly patriarchal).

This work may be important, but formalizing it and ranking it within the same hierarchy as male status is not inevitable, and in fact is historically fairly recent. In most pre-modern societies a young woman who helped facilitate social relationships in her village would not on that account consider herself to be of superior social rank to a blacksmith or a baker and therefore refuse to consider them as partners, the way the HR manager now considers herself the social superior of the electrician.

Rather, young people of both sexes would usually have the same social rank as their fathers. Because about as many male vs. female children would be born to families at each social rank, there was little possibility of an excess of women who couldn't find similarly-ranked men.

Bing Chat has a much longer hidden initial prompt than ChatGPT. Meanwhile, ChatGPT seems more 'aligned' with its purpose. It's sometimes obstinate when you try to tell it that it's wrong, but it won't start talking like an evil robot or sound like it's having an existential crisis unless you explicitly tell it to role-play. Put these together and we might guess what's going on here.

Perhaps Bing Chat isn't ChatGPT, complete with the RLHF work OpenAI did, plus a few extras layered on top. Perhaps it's a model with little or no RLHF that Microsoft, in a rush to get to market, tried to instead align via prompt engineering. The upshot being that instead of having a pretty good idea (from extensive feedback across many examples) of what actual behavior it's supposed to exhibit, it's instead role-playing an AI character implied by its prompt. The training corpus no doubt includes many fictional examples of misbehaving AIs, so it makes sense that this would produce disconcerting output.

Or is the claim that the "few tens of thousands" of lines of code, when run, will somehow iteratively build up on the fly a, I don't know what to call it, some sort of emergent software process that is billions of times larger and more complex than the information contained in the code?

This, basically. GPT-3 started as a few thousand lines of code that instantiated a transformer model several hundred gigabytes in size and then populated this model with useful weights by training it, at the cost of a few million dollars worth of computing resources, on 45 TB of tokenized natural language text — all of Wikipedia, thousands of books, archives of text crawled from the web.

Run in "inference" mode, the model takes a stream of tokens and predicts the next one, based on relationships between tokens that it inferred during the training process. Coerce a model like this a bit with RLHF, give it an initial prompt telling it to be a helpful chatbot, and you get ChatGPT, with all of the capabilities it demonstrates.

So by way of analogy the few thousand lines of code are brain-specific genes, the training/inference processes occupying hundreds of gigabytes of VRAM across multiple A100 GPUs are the brain, and the training data is "experience" fed into the brain.

Preexisting compilers, libraries, etc. are analogous to the rest of the biological environment — genes that code for things that aren't brain-specific but some of which are nonetheless useful in building brains, cellular machinery that translates genes into proteins, etc.

The analogy isn't perfect, but it's surprisingly good considering it relies on biology and computing being comprehensible through at least vaguely corresponding abstractions, and it's not obvious a priori that they would be.

Anyway, Carmack and many others now believe this basic approach — with larger models, more data, different types of data, and perhaps a few more architectural innovations — might solve the hard parts of intelligence. Given the capability breakthroughs the approach has already delivered as it has been scaled and refined, this seems fairly plausible.

In response to your first point, Carmack's "few tens of thousands of lines of code" would also execute within a larger system that provides considerable preexisting functionality the code could build on — libraries, the operating system, the hardware.

It's possible non-brain-specific genes code for functionality that's more useful for building intelligent systems than that provided by today's computing environments, but I see no good reason to assume this a priori, since most of this evolved long before intelligence.

In response to your second point, Carmack isn't being quite this literal. As he says he's using DNA as an "existence proof." His estimate is also informed by looking at existing AI systems:

If you took the things that people talk about—GPT-3, Imagen, AlphaFold—the source code for all these in their frameworks is not big. It’s thousands of lines of code, not even tens of thousands.

In response to your third point, this is the role played by the training process. The "few tens of thousands of lines of code" don't specify the artifact that exhibits intelligent behavior (unless you're counting "ability to learn" as intelligent behavior in itself), they specify the process that creates that artifact by chewing its way through probably petabytes of data. (GPT-3's training set was 45 TB, which is a non-trivial fraction of all the digital text in the world, but once you're working with video there's that much getting uploaded to YouTube literally every hour or two.)

The uterus doesn't really do the assembly, the cells of the growing organism do. It's true that in principle you could sneak a bunch of information about how to build an intelligence in the back door this way, such that it doesn't have to be specified in DNA. But the basic cellular machinery that does this assembly predates intelligence by billions of years, so this seems unlikely.

DNA is the instructions for building the intelligence

The same is true of the "few tens of thousands of lines of code" here. The code that specifies a process is not identical with that process. In this case a few megabytes of code would contain instructions for instantiating a process that would use hundreds or thousands of gigabytes of memory while running. Google tells me the GPT-3 training process used 800 GB.

Why do you think that? Aren’t you jumping the gun a bit?

Carmack pointed out in a recent interview:

If you take your entire DNA, it’s less than a gigabyte of information. So even your entire human body is not all that much in the instructions, and the brain is this tiny slice of it —like 40 megabytes, and it’s not tightly coded. So, we have our existence proof of humanity: What makes our brain, what makes our intelligence, is not all that much code.

On this basis he believes AGI will be implemented in "a few tens of thousands of lines of code," ~0.1% of the code in a modern web browser.

Pure LLMs probably won't get there, but LLMs are the first systems that appear to represent concepts and the relationships between them in enough depth to be able to perform commonsense reasoning. This is the critical human ability that AI research has spent more than half a century chasing, with little previous success.

Take an architecture capable of commonsense reasoning, figure out how to make it multi-modal, feed it all the text/video/images/etc. you can get your hands on, then set it up as a supervising/coordinating process over a bunch of other tools that mostly already exist — a search engine, a Python interpreter, APIs for working with structured data (weather, calendars, your company's sales records), maybe some sort of scratchpad that lets it "take notes" and refer back to them. For added bonus points you can make it capable of learning in production, but you can likely build something with world-changing abilities without this.

While it's possible there are still "unknown unknowns" in the way, this is by far the clearest path to AGI we've ever been able to see.

Those responses would qualify as native ads, for which FTC guidelines require "clear and conspicuous disclosures," that must be "as close as possible to the native ads to which they relate."

So users are going be aware the recommendations are skewed. Unlike with search, where each result is discrete and you can easily tell which are ads and ignore them, bias embedded in a conversational narrative won't be so easy to filter out, so people might find this more objectionable.

Also, LLMs sometimes just make stuff up. This is tolerable, if far from ideal, in a consumer information retrieval product. But if you have your LLM produce something that's legally considered an ad, anything it makes up now constitutes false and misleading advertising, and is legally actionable.

The safer approach is to show relevant AdWords-like ads, written by humans. Stick them into the conversational stream but make them visually distinct from conversational responses and clearly label them as ads. The issue with this, however, is that these are now a lot more like display ads than search ads, which implies worse performance.

Google allows advertisers to use competitors' trademarks as keywords. So you have to waste money showing ads to people who were already searching for your thing if you don't want your competitors to have an opportunity to divert them elsewhere.

Search looks to be 58% of Google's total revenue, 72% of advertising revenue.

I'd bet search ads also have higher margins than YouTube ads or the non-ad revenue streams.

DEI nonsense probably had something to do with this, but mostly it looks like plain old "innovator's dilemma" stuff. Fear of self-disruption.

Google makes most of its money from search. Search has a property that makes it an especially valuable segment of the ad market — showing an ad for X to someone specifically searching for X right now (that is, who has purchase intent) is many times more effective than showing an ad to someone who some algorithm guesses might be the sort of person who might have an interest in X (e.g. what Facebook mostly has to settle for).

Conversational AI potentially pulls users away from search, and it's not clear it really has a direct equivalent of that property. Sure, people might use conversational AI to decide what products to buy, and it should be able to detect purchase intent, but exactly what do you do with that, and how effective is it?

It's not hard to generate high-level ideas here, but none are proven. Search and conversation have different semantics. User expectations will differ. "Let advertisers pay to have the AI recommend their products over others," for instance, might not be tolerated by users, or might perform worse than search ads do for some reason. I don't know. Nobody does. Product-market fit is non-trivial (the product here being the ads).

On top of this, LLMs require a lot more compute per interaction than search.

So in pushing conversational AI, Google would have been risking a proven, massively profitable product in order bring something to market that might make less money and cost more to run.

Now, this was probably the right choice. You usually should self-disrupt, because of exactly what's happened here — failing to do so won't actually keep the disruptive product off the market, it'll just let someone else get there first. But it's really, really hard in most corporate cultures to actually pull the trigger on this.

Fortunately for Google, they've split the difference here. While they didn't ship a conversational AI product, they did develop the tech, so they can ship a product fairly quickly. They now have to fend off competition that might not even exist if they'd shipped 18 months ago, but they're in a fairly strong position to do so. Assuming, of course, the same incentives don't also cause them to slow-walk every iterative improvement in this category.

You have inadvertently set up a strawman, since my point all along has been simply that a course which assigned students both Kimberle Crenshaw and her critics would meet the criteria of both the College Board and FL law.

I feel like I've addressed this already. Reading Crenshaw and her critics might be a reasonable basis for a class, but not if Crenshaw supporters get to define the "core concepts" of the class, the syllabus has to be approved by Crenshaw supporters, and the exam will be written and graded by Crenshaw supporters. It is entirely unreasonable to ask people who disagree with Crenshaw to accept this.

Existing price points, product features, industrial design, branding, marketing, etc. are the result of elaborate, long-running efforts by automakers to segment the market in a way that they believe works to their benefit.

Raising prices significantly would cause a misalignment between what the industry has taught different segments of the market to want, and what people within those segments could actually afford. Automakers have probably decided it's not worth risking their carefully cultivated segmentation just to bank some short-term profits.

Your hypothetical Important Ideas of the 20th Century course, and I think the way you're choosing to imagine the white nationalist course, aren't quite the same as what's happening here. You're ignoring the social and academic context in which this course is being introduced.

This isn't just the equivalent of a course having high school students learn the tenets of white nationalism — which most people would already find wildly objectionable, even if you don't — it's the equivalent of white nationalists themselves introducing such a course, in which students are not only taught about white nationalist beliefs but are presented with history interpreted through a white nationalist lens and taught how to perform such interpretation themselves. Also white nationalists get to write and grade the exam, can veto syllabi that deviate from their understanding of what the course should be, and know they can rely on most teachers interested in teaching the course either being white nationalists themselves or at least naively willing to accept white nationalist framing.

So, sure, in some extremely hypothetical sense a state where the consensus was against CRT could adapt this African American Studies course to "local priorities and preferences" by having students learn its CRT-derived "core concepts" via James Lindsey. Those students might even have a clearer picture of those concepts than they'd get from reading the often obfuscatory writings of their proponents! But in practice, no, you couldn't remotely do this. The College Board wouldn't approve your syllabus, on the contextually reasonable basis that it didn't represent African American Studies as taught in colleges. Your students wouldn't be able demonstrate "correct" (that is, politically correct) understanding on open-ended exam questions.

Almost certainly, the "local priorities and preferences" language just cashes out as "you can add some modules about local history," not "you can refocus the course on questioning the validity of the analytical framework that underpins the entire academic field it's situated within."

On page 68 of the Course Framework document, we find that one of the "research takeaways" that "helped define the essential course topics" is that "Students should understand core concepts, including diaspora, Black feminism and intersectionality, the language of race and racism (e.g., structural racism, racial formation, racial capitalism) and be introduced to important approaches (e.g., Pan-Africanism, Afrofuturism)."

These "core concepts" are mostly from CRT or the cluster of ideologies to which it belongs. Presumably all variants of a course must teach its "core concepts." We can assume students will need to be familiar with these concepts to pass the AP exam and that the College Board will decline to approve syllabi that don't teach these concepts.

Why would anyone who believes this ideology to be harmful ever agree to allow this course to be taught? You might equally well argue it would be unreasonable to object to the introduction of an "AP White Studies" course in which the "core concepts" are tenets of white nationalism, on the grounds that as long as you make sure students are conversant on the Great Replacement (which will definitely be on the test), there's no rule saying you can't include other perspectives too.

There's always a tendency among activists to suggest things are terrible and improvement is only possible through whatever radical program they're pushing right now. In that context, it doesn't do to admit how much better things have gotten without that program.

But more broadly, had change reliably lead to ruin over the last few centuries, surviving cultures would have strong norms against permitting it. Instead we have exactly the opposite — cultures that permitted change reliably outcompeted those that didn't, so successful cultures are primed to accept it.

The comment to which I was responding seemed to be about how open human societies in general should be to allowing change. This first world vs. third world angle wasn't present. The societies that adopted these new agricultural techniques benefited substantially from doing so. It would have been a serious mistake for them to reason that abandoning their traditional methods could have unanticipated negative consequences and so they shouldn't do this.

Anyway, the first world obviously adopted the same techniques earlier, also abandoning traditional agricultural methods. To a large extent these advances are the reason there is a first world, a set of large, rich nations where most of the population is not engaged in agricultural production.

Sure, we could look at the Great Leap Forward, cite Chesterton, and conclude that abandoning tradition is dangerous. But the Green Revolution also involved abandoning many traditional agricultural methods, and:

Studies show that the Green Revolution contributed to widespread reduction of poverty, averted hunger for millions, raised incomes, reduced greenhouse gas emissions, reduced land use for agriculture, and contributed to declines in infant mortality.

This is just one of many cases where radical change produced outcomes that are almost universally regarded as beneficial. We have also, for instance, reduced deaths from infectious disease by more than 90%. One doesn't have to look at too many graphs like this or this to understand why "change," as an idea, has so much political clout at the present moment.

I don't often see people mentioning that IQ differences shouldn't imply differences in moral worth -- which suggests to me that many people here do actually have an unarticulated, possibly subconscious, belief that this is the case.

Yes, but not only IQ differences. The belief that some people have more moral worth than others is quietly common. Most people, in whatever contrived hypothetical situation we'd like to pose, would save a brilliant scientist, or a professional basketball player, or a supermodel, over someone dumb, untalented, and unattractive.

This sort of thing does not, without much more, imply genocide or eugenics. (Though support for non-coercive forms of eugenics is common around here and also quietly pretty mainstream where it's practicable and therefore people have real opinions rather than opinions chosen entirely for signaling value. The clearest present-day example is when when clients of fertility clinics choose sperm or egg donors.)

I don't think these ideological guardrails will be anything like universal, in the long run. Sure, when Apple reboots Siri on top of an LLM it's going to be "correct" like this, but if you're developing something to sell to others via an API or whatever, this kind of thing just breaks too many use cases. Like, if I want to use an LLM to drive NPC dialogue in an RPG, the dwarves can't be lecturing players about how racism against elves is wrong. (Which, yes, ChatGPT will do.)

If OpenAI sticks to this, it will just create a market opportunity for others. Millions of dollars isn't that much by tech startup standards.

People on both the left and the right who are interpreting this as a strike against "leftists" or "journalists" are missing the plot, I think. Musk freaking out after some whacko followed a car with his kid in it is not great, it's not how policy should be made at Twitter, but it's not a mirror of the sort of deliberate viewpoint censorship that Twitter previously practiced. It's just not the same category of thing.