RedRegard

0 followers   follows 0 users
joined 2022 November 09 21:32:36 UTC

User ID: 1832


A method I've seen is for someone to copy the transcript of an existing video, feed it into an AI and ask it to make arbitrary changes, feed the resulting script into an AI voice generator, then use AI plus third worlders on Fiverr to stitch together visuals to go with it, and voilà: a complete video with minimal effort.
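The workflow above can be sketched as a simple pipeline. Everything here is a stand-in stub (none of these functions are a real API); in practice each step would be a call to an LLM, a text-to-speech service, and a video-assembly step or freelancer handoff:

```python
# Toy sketch of the content-farm pipeline described above.
# All three functions are hypothetical stubs for illustration only.

def rewrite_script(transcript: str) -> str:
    # Stand-in for "feed it into an AI and ask it to make arbitrary changes".
    return transcript.replace("original", "rewritten")

def synthesize_voice(script: str) -> bytes:
    # Stand-in for an AI voice generator producing an audio track.
    return script.encode("utf-8")

def assemble_video(audio: bytes) -> dict:
    # Stand-in for stitching visuals to go with the audio.
    return {"audio_bytes": len(audio), "visuals": "stock footage + AI stills"}

def content_farm_pipeline(transcript: str) -> dict:
    # The whole "minimal effort" chain, end to end.
    return assemble_video(synthesize_voice(rewrite_script(transcript)))
```

The point is how little human judgement sits between input and output: each stage just consumes the previous stage's artifact.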

There's also a trend of using AI actors or clones. Since so many videos are just people talking into cameras with minimal movement, an AI-generated actor is totally serviceable. It's an AI script plus an AI voice, delivered by an AI person.

Now the question is, is AI mimicking people or were people already mimicking AI?

It’s all downstream of the choices YouTube makes. YouTube wants to show you videos lengthy enough for ads, so they create incentives, both monetary and exposure-based, for creators to make them, then adjust their algorithm to surface them to you. YouTube controls it all, and the content creators are merely their puppets. YouTube has a monopoly over this sort of thing, and that is how they get away with it. The monopoly is more or less inherent to how these digital platforms operate, with market forces encouraging the centralization of user bases. So really it’s digitized markets that are to blame for all of this; YouTube is just the beast they operate through.

You're making the mistake of thinking it operates as a human does. Humans are constantly forming models of the world and using those models to inform their judgements and actions. While LLMs may develop models during their training, their responses to a prompt are based on probabilistic likelihood calculations. 'The code being bad' is one likely continuation for it to disjointedly expand on, but there are many others. It's more like it's exploring probability space while hugging the median than actually contemplating your question; the calculations it runs through are instantaneous.

*A note on its calculations: the probabilities themselves pertain to the text being output, not necessarily the underlying concepts. So if it says something about 'the code being bad', that might only reflect calculations over the very words these ideas are expressed in, rather than the ideas themselves. An LLM might not have, through its training or anything else, even an approximate understanding of what code or 'bad' are, but instead merely highly elaborate algorithms linking them and other words and word assemblages together.
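The "exploring probability space while hugging the median" point can be illustrated with a toy next-token model. The vocabulary and probabilities here are invented for illustration; a real LLM works over tens of thousands of tokens, but the mechanics are the same: the model holds weights over word pieces, not concepts.

```python
import random

# Invented next-token distribution for the prompt "the code is".
# The model has no notion of "code" or "bad", only these weights.
next_token_probs = {
    "bad": 0.40,
    "fine": 0.25,
    "broken": 0.20,
    "elegant": 0.15,
}

def greedy_pick(probs):
    # "Hugging the median": always emit the single most likely continuation.
    return max(probs, key=probs.get)

def sample_pick(probs, rng):
    # Exploring probability space: draw a continuation in proportion to its
    # weight, so less likely tokens still surface some of the time.
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print(greedy_pick(next_token_probs))       # deterministic: always "bad"
print(sample_pick(next_token_probs, rng))  # varies with the random seed
```

Either way, the output is a draw from a distribution over text, which is why the starting prompt so thoroughly conditions everything that follows.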

So, since it's operating primarily or wholly on a linguistic level, it's impossible to get it to divorce its output from your starting prompt, which sets off the whole probabilistic determinacy cycle.