MathWizard

Good things are good

0 followers   follows 0 users   joined 2022 September 04 21:33:01 UTC

User ID: 164

Thank you for the detailed advice. I'm going to propose with a fake ring and then discuss what the real ring should be in detail before buying one, so there isn't going to be any tricking. But she is also incredibly shy and reluctant about receiving gifts, so it needs to be more about the thought put into it and the sentimental value than mere monetary cost. If I spend too much she will feel guilty about the burden, but if I even suggest something inferior she will be secretly disappointed and then feel guilty about not appreciating something that I did for her and try really hard to pretend she likes it.

Your point about elaborate designs is helpful. I'm still trying to decide how quirky/unique I want to go versus just plain fancy. Like, I've been looking at dragon and cat shaped rings and gemstone patterns, which would be more special and sentimentally unique to her, but the goofiness might detract from the universal beauty standard?

I suppose I can't plan too much ahead of time before I've actually had the discussion with her. But if I just bluntly say "you can have whatever you want" she is probably going to get overwhelmed by the pressure of too many options with no direction.

I am not sure; I have my suspicions. But I'm going to propose with a fake ring and then talk to her about it in detail, and probably go shopping together, since I think sacrificing the spontaneity of the moment for a more accurate and satisfying ring will be worth it in the long run.

I am not even slightly an expert on dating advice in general, but I have two insights that I think are valid:

1: Dating sites are garbage insofar as they are filled with 90% low-effort posts by low-effort people looking for quick hookups with highly attractive people. It would be nice if there were separate dating sites for people who want quick hookups and sites for people looking for long term relationships, but that's not really enforceable. But even if your success rate is 20 times worse online than it is in real life, I found that the explicit permission to engage makes it more than 20 times easier to engage. You're not creeping on people at work or at the gym. This is a place where people explicitly go to meet people romantically; you have permission to talk to them to an extent you're never going to get in person. I must have sent hundreds of messages over the several years I was on these sites, got maybe 40 matches/responses from real humans, 35 of whom were not even slightly my type and never went past a couple back-and-forth messages, 4 reasonable-length conversations that seemed promising but didn't work out, and 1 that was perfect from the moment it started, and we've been happily together for 4 years since then.

And that's the main secret, it only needs to work once. It's largely a numbers game, you need to encounter a bunch of people and it will be a disaster with most of them, and then once it won't. I found online dating way easier to get over the fear of rejection because it was faceless and impersonal. At any moment, they are free to ghost you and never speak to you again, and you can do the same, which means it hurts so much less. But I think this is true to some extent in person as well. If you can manage to encounter enough women that you can ask out without creating major drama, do. Most will say no, and some might say yes, and most of those won't work out long term. But in the end, if it truly works out, you only need one.

2: I find that "Be yourself" is not the best advice for maximizing your chances of getting someone interested in the first place, or getting laid, but it's a good filtering mechanism that saves effort in the long run. Be yourself so that people who don't like who you are will reject you immediately instead of waiting a few dates to find out who you are before rejecting you. I usually opened with nerdy jokes and pickup lines in initial messages. The vast majority of people never responded. And the few that did were heavily selected for the type of people who actually liked them and thought they were clever/cute/funny, so I wasted less time talking to people who dislike nerds.

I'm planning to propose to my girlfriend soon, and am looking for advice on the engagement ring. I'm planning on going with a placeholder for the actual proposal and getting the real ring afterwards so that we can pick something out together that lines up with her preferences. But I'd like some ideas and general knowledge to bring to the table.

My understanding is that natural diamonds have their prices massively inflated by diamond cartels, propaganda, and literal slavery, so am planning to avoid them. I'm not opposed to going with a synthetic diamond, since they're better and cheaper, but maybe the prices are still artificially high due to the propaganda of diamonds overall? I'm not really sure.

Her favorite color is yellow, so I'm thinking a silver ring with a yellow gemstone (diamond or other gem), but there's a bunch of different types of gems even restricting to yellow, and I want one that's going to last long and look fancy without deteriorating over time. My natural inclination is to be a cheapskate about everything, so I want to make sure I'm not just doing mental gymnastics to justify cheaping out on something with significant emotional value. Neither of us is especially social, so we aren't super concerned with how other people would perceive buying a non-diamond ring, but it probably matters a little bit. Ideally I would like to get something that is simultaneously cheaper and more meaningful and more impressive looking than a diamond. What are my best options and tradeoffs to consider? Also, are we better off shopping around at local jewelers so we can see stuff in person, or are they all scams, including the non-diamond gems, such that there is a significantly better quality/price ratio online?

The bad news is that status game sociopaths exist and will blow those with morals and ethics out of the water for some amount of time before their local society decides to exile them.

This relies on there being a local society. My impression is that a significant cause of the modern destruction of dating and friendships is not just the dissolution of rituals, but also the dissolution of local society. If everyone you know is a friend of a friend of a spouse of a cousin, then there are reputational concerns. The good faith actors can vouch for each other and introduce each other to their friends and therefore recognize each other, while the bad faith actors quickly burn through all their social capital and end up as outcasts. If you have forty trustworthy mutual friends who all know each other, then a viable strategy is to only trust people who are vouched for by other people you already trust, and treat any outsiders with suspicion until they jump through a lot of hoops to prove themselves. But if you're in an atomized society where you moved to a new city 2 years ago and all of your neighbors are people who have also moved there within the past couple years, each from a different place, that's not an option. The strategy of only trusting and befriending people who are vouched for is equivalent to having no friends. You have to lower your standards for reputation, which makes it easier for the sociopaths to blend in. And when they are eventually caught they can just move to a different social circle (in a high population area they don't even need to literally move to a new location), and blend in again because people can't afford to ostracize strangers anymore.

Where were you fifteen years ago when I needed this advice?

I am frustrated that all of the social and romantic advice that I received from adults as a kid was inscrutable, unquantified, vague normie intuition that I didn't understand. I always knew I was doing it wrong, but couldn't figure out how or why, and nobody could explain it to me. Only since I discovered rationalist and rationalist-adjacent spaces did I start hearing coherent logical explanations that I could use to actually figure out social situations. And these are verbal descriptions! It's not just that I'm older and wiser and have learned from experience lessons that cannot be taught by words. If I had heard these words fifteen years ago I would have understood them and been able to adjust my behavior!

And the worst thing is that all the normies probably understand all this already and if you told them this they'd be like "yeah that sounds about right", but when they give their own version of the explanation in their own words it's just incoherent nonsense.

I mean, if they don't, the industry is leaving money on the table.

That depends on the level of scrutiny they get from whatever quality assurance regulations they have to comply with, and on the estimated probability of an employee turning whistleblower or a random customer finding out, multiplied by the expected damage from the resulting lawsuits and public backlash. It's not especially unrealistic that if they did do this, people would notice the trend, some scientist would do a statistical survey, and then lawyers would jump at the opportunity for a class action lawsuit; it happens with many products.

So it's entirely plausible that the expected value of doing so is negative and thus the company increases profits by keeping their products safe and effective. It's not guaranteed, I can see it going either way, but the entire point of being able to sue companies for damages is to act as a deterrent for this kind of behavior.

It depends on your ultimate goal and level of opposition. If you actually believe in the motte, you think it is a true position that you yourself share or at least don't object to, but believe is being exploited to defend a harmful bailey, then this is entirely appropriate. If you destroy the bailey and everyone stays in the motte then you are content.

If, however, you fundamentally disagree with the entire position, are attempting to tear down both the motte and bailey, and simply focus on the bailey more often because it's easier, then there's a sort of dishonesty here. The weakman fallacy is when you point out flaws in the bailey and then use those to try to tear down the motte. In this scenario, even in the event that you push people out of the bailey, you then switch tactics to fighting the motte afterwards, using the victories over the bailey as momentum. In some sense, this is a fulfillment of the slippery slope: as soon as you accomplish X you then keep pushing towards Y. Which is fine if you are honest about it from the beginning, admitting that you disagree with both and are prioritizing the bailey first because it's easier. But it is a problem if you pretend that the bailey is the only problem up until you win that battle and then immediately launch a surprise attack on the motte (and/or attack people who are already motte-only people using bailey arguments).

I've also heard complaints from doctors themselves that more of their time is being taken up by paperwork rather than actually seeing patients. A doctor that spends half their time seeing patients and half doing paperwork is going to need to charge twice as much per patient as a doctor who just spends all their time seeing patients.

I think we do need public buy-in because the AI experts are partly downstream from that. Maybe some people are both well-read and have stubborn and/or principled ethical principles which do not waver from social pressure, but most are at least somewhat pliable. If all of their friends and family are worried about AI safety and think it's a big deal, they are likely to take it more seriously and internalize that at least somewhat, putting more emphasis on it. If all of their friends and family think that AI safety is unnecessary nonsense then they might internalize that and put less emphasis on it. As an expert, they're unlikely to just do a 180 on their beliefs based on opinions from uneducated people, but they will be influenced, because they're human beings and that's what human beings do.

But obviously person for person, the experts' opinions matter more.

Three flaws. First, that turns this into a culture war issue and if it works then you've permanently locked the other tribe into the polar opposite position. If Blue Tribe hates AI because it's racist, then Red Tribe will want to go full steam ahead on AI with literally no barriers or constraints, because "freedom" and "capitalism" and big government trying to keep us down. All AI concerns will be dismissed as race-baiting, even the real ones.

Second, this exact same argument can be and has been made about pretty much every type of government overreach or expansion of powers, to little effect. Want to ban guns? Racist police will use their monopoly on force to oppress minorities. Want to spy on everyone? Racist police will unfairly target Muslims. Want to allow gerrymandering? Republicans will use it to suppress minority votes. Want to let the President just executive-order everything and bypass Congress? Republican Presidents will use it to executive-order bad things.

Doesn't matter. Democrats want more governmental power when they're in charge, even if the cost is Republicans having more governmental power when they're in charge. Pointing out that Republicans might abuse powerful AI will convince the few Blue Tribers who already believe that government power should be restricted to prevent potential abuse, while the rest of them will rationalize it for the same reasons they rationalize the rest of governmental power. And probably declare that this makes it much more important to ensure that Republicans never get power.

Third, even if it works, it will get them focused on soft alignment of the type currently being implemented, where you change superficial characteristics like how nice and inclusive and diverse it sounds, rather than real alignment that keeps it from exterminating humanity. Fifty years from now we'll end up with an AI that genocides everyone while keeping careful track of its diversity quotas to make sure that it kills people of each protected class in the correct proportion to their frequency in the population.

Especially given the Pascal's wager type argument going on here. You don't even need to prove that AI will definitely kill all of humanity. You don't even need to prove that it's more likely than not. A 10% chance that 9 billion people die is comparable in magnitude to 900 million people dying (to first order; the extinction of humanity as a species is additionally bad on top of that). You need to:

1: Create a plausible picture for how/why AI going wrong might literally destroy all humans, and not just be racist or something.

2: Demonstrate that the probability of this happening is on the order of >1% rather than 0.000001% such that it's worth taking seriously.

3: Explain how these connect explicitly so people realize that the likelihood threshold for caring about it ought to be lower than most other problems.

Don't go trying to argue that AI will definitely kill all of humanity, even if you believe it, because that's a much harder position to argue and unnecessarily strong.

I don't think you're using my premises.

Let's set aside life threatening scenarios, because I'm not entirely sure how my argument interfaces with them, and trying to assign personal valuations to them probably ends up with infinity dollars or something silly. Similarly, let's set aside issues of bankruptcy, and issues of involuntary treatment. And assume we're dealing with amounts of money which a person either can afford, or can afford by going into debt that they eventually mostly pay off. The clinic will treat its sale price as the amount of money they actually expect to receive from someone.

I'm fine with conceding that limited price discrimination can have positive effects, as you point out. But I'm specifically referring to perfect or near-perfect price discrimination. That is, there is some level X, such that you are perfectly indifferent between receiving no treatment, and receiving treatment that costs $X. If it cost $10 million and put you in debt for the rest of your life, you'd be better off untreated and saving your money, so X is less than 10 million. If it cost $1 you'd be better off with treatment, so X is greater than $1. Intermediate value theorem, or iterate, or whatever, we find some finite value, which differs from person to person based on some combination of their finances, how bad the injury is, how much they hate being in debt, how much social support they have, etc., at which they are perfectly indifferent: the scenario in which they go untreated is exactly the same value as the scenario in which they pay X and get treated. The treatment increases their utility by the exact same amount that losing $X decreases their utility, so if they paid $X for treatment they have gained nothing on net. If the clinic has some cost C for the procedure, then they are willing to charge any amount P > C. The client is willing to pay any amount P < X. Therefore any price P with the property C < P < X is mutually acceptable to both parties. The clinic profits P - C, the client "profits" X - P, and both are greater than zero. A normal sane version of price discrimination would pick some value near the middle of the interval (C,X), such that both have nontrivial profit.

However, if the clinic has perfect knowledge of X, and is selfish, it will pick some amount trivially less than X. The doctor is only going to charge you $1000 if their costs are less than $1000 and that is every last cent you have to your name or can scrounge up by going into debt. Take whatever amount is the most you could possibly be willing to part with to undergo the procedure, such that if it cost a single dollar more you'd be better off untreated; the doctor charges you exactly that much, so the monetary cost is painful enough that your net benefit is only a tiny bit above zero. By definition, if the person is benefiting a nontrivial amount from the transaction, we are not in this scenario. I am defining "perfect price discrimination" to be this; this is not itself an argument.

My argument can be broken down as:

1: Perfect Price Discrimination as defined is a coherent concept and isn't some contradiction in terms.

2: Perfect Price Discrimination would follow from a perfectly rational/selfish producer with perfect knowledge of their client's utility function.

3: Perfect Price Discrimination is a tiny bit better for each client than a scenario in which they are not served at all, but worse than any other pricing mechanism in which they are served (because it gives them the least possible nonzero surplus), including imperfect price discrimination.

4: Price Discrimination scenarios increasingly approach Perfect Price Discrimination as a price discriminating producer gains more knowledge.

On this last point, if the producer has some imperfect knowledge of its clients, like maybe it bins them into "poor", "average", and "rich", then maybe it sets up three prices, like $1k, $10k, $100k. Then all the poor people who value the treatment at $3k can benefit because they're only paying $1k for a treatment that improves their lives by $3k, but they couldn't afford the $10k cost, so they're genuinely better off by not having to reject treatment. Average people who value it at $15k will benefit by $5k, because the price is lower than their valuation. Someone who by sheer coincidence values the treatment at $10,001 is screwed, because they'll be charged $10k and receive a trivial benefit, but if the prices are spaced out enough such people will be rare.

But if the producer gains enough data to accurately bin people by thousands of dollars, i.e. it can detect and set prices at $1k, $2k, $3k..., then we're in worse shape. Now if someone has, say, a $3500 valuation, they pay $3k and only get $500 benefit. If someone has a $23232 valuation, they pay $23k and get $232 benefit.

If the producer gains enough data to accurately bin people by tens of dollars, then nobody can benefit by more than ten dollars.
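The binning escalation above can be sketched in a few lines (hypothetical valuations and cost, assuming each buyer is charged the largest bin price at or below their valuation):

```python
def discriminate(valuations, cost, width):
    """Charge each buyer the largest multiple of `width` at or below
    their valuation; a sale only happens if that price covers cost."""
    consumer = producer = 0
    for v in valuations:
        price = (v // width) * width
        if price >= cost:
            consumer += v - price
            producer += price - cost
    return consumer, producer

valuations = [3500, 23232]   # the two example buyers from above
cost = 500                   # hypothetical production cost

for width in (1000, 100, 10):
    c, p = discriminate(valuations, cost, width)
    print(f"bin width ${width}: consumer surplus ${c}, producer surplus ${p}")
# consumer surplus shrinks as the bins narrow: 732 -> 32 -> 2
```

Narrower bins only ever transfer surplus from the buyers to the seller; no buyer can ever keep more than one bin-width of value.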

I'm not so much arguing that such extremes are realistic; I don't think a person themselves could accurately assign a monetary value to how much they would benefit from an action X even after having already received it, since they would have to compare to a counterfactual scenario in which they hadn't received it and had kept their money. But more information allowing for more accurate price discrimination can, in many cases, lead to lower consumer benefits. If you were previously in a scenario where you "couldn't afford" something (in the sense of it not being worth the money, not whether you literally have enough money), and the price discrimination puts it into an acceptable price range, then you're some amount better off. But if you were previously in a scenario where it was worth it, the price discrimination raises prices on you, squeezing out your value and making you worse off. And this happens incrementally, such that one super-accurate price discrimination is comparable to an initial discrimination that makes everyone able to afford it, followed by a bunch of subsequent discriminations that squeeze all the value out. You seem to be under the impression that price discrimination = lowered prices, but it also means raised prices. You might consider it an incremental raising and lowering of prices on every single person until as much value as possible is squeezed out while still giving them barely enough to keep them consenting to the transaction.

At the extremes. I'm not attempting to apply my argument to all possible scenarios of price discrimination or suggest that it's always bad. Just that it can be bad when taken to extremes.

Sure. I wholeheartedly agree that in some instances price discrimination is better for both consumers and producers simultaneously. Especially when costs are nonlinear as in your example.

But in other cases it's bad for consumers (as a whole; every case will have a few specific individuals at the lowest end of the curve who benefit and wouldn't be served without price discrimination, but the average consumer ends up worse off).

And producers with perfect knowledge have no incentive to pick and choose only to use it when it benefits customers. And I don't think it's necessary to demand such a strong burden on them. If scenario A has $500 consumer surplus and $500 producer surplus, while scenario B has $400 consumer surplus and $1000 producer surplus, it doesn't seem unreasonable to allow them to do scenario B without getting upset at them. But if scenario C has $1 consumer surplus and $2000 producer surplus... I feel like something has gone wrong. Like, it's economically efficient on a global scale, the total surplus is higher. But it violates intuitive notions of "fairness" in ways that lead to poverty and discontent. Note that utility is nonlinear with respect to money. A thousand people with $10,000 each will be more happy/healthy/fulfilled/content/secure on average than 999 broke people and 1 person with $10 million.

Maybe if we could figure out a way to losslessly tax them and redistribute some of the profits back to consumers this would be fine? But I'm skeptical of "lossless taxation" on producer surplus being possible. I feel like a more organic market solution involving competition and balanced bargaining power would be better, where prices are set in between customer's values and producer costs such that both could extract nontrivial fractions of the surplus.

This means poor people benefit greatly from price discrimination: they get goods or services they want at a price they are willing to pay when otherwise they wouldn't be able to afford it.

No, the entire point is that they don't. They benefit a tiny, tiny bit from price discrimination. If the maximum someone is willing to pay for a product is $10, and it costs $9.99, then they benefit by $0.01. That is, they are barely coming off ahead at all; almost all of the benefit they gain from that product is lost to them in the $9.99 they spent and given to the producer. You can sell five times as much product to five times as many poor people and create five times as much benefit, but none of them are gaining much benefit at all because the products are just barely worth it.

If you have a crippling leg injury, and a doctor cures it but charges so much money that the debt cripples your life 99% as much as the leg injury did, you have benefited... but just barely.

From the producer's standpoint, this is great. Tons of value is being created by the increased number of exchanges. Lots of people are incentivized to become producers... which benefits the people who are in a position to become producers and able to (assuming the market isn't an oligopoly that crushes small competitors). And the increased trade does benefit customers... by like 1% because that's how much of this increased surplus they get to keep.

If the only two options are perfect price discrimination or unserved customers, then the price discrimination scenario is technically better for those particular customers. But my goal is to find a third option that's better, because the price discrimination scenario isn't very good for anyone except producers.

I'm pretty sure it is, if that's the intended use case of that file, and people other than you know about the decryption method. On the other hand, literally any data file of a certain length (call it A) can be turned into literally any other data file of the same length (B) if you hit it with exactly the right "decryption" (B-A) by just adding the bits together. So if you take this idea too far, every file is secretly an encrypted Mickey Mouse given the right code.

There's something nontrivial in here about information theory. If the copyrighted image has 500 kb of data, and your "encrypted file" is 500 kb, and the decryption key "Mickey Mouse" is 12 bytes, then clearly the file must contain the copyright violation. If you make an "encrypted file" with 12 kb and some wacky compression algorithm that requires 500 kb to encode and is specifically designed to transform the string "Mickey Mouse" into a copyrighted image, then yeah, that algorithm is a copyright violation.

On the other hand, if you use a random number generator to generate a random 500 kb number A, and then compute C = (B - A) where B is your copyrighted image, then in isolation both A and C are random numbers. If you just distribute A and nobody has any way of knowing or guessing C, then no copyright violation has occurred. If you just distribute C and nobody has any way of knowing or guessing A, then no copyright violation has occurred. But if you distribute them together, or if you distribute one and someone else distributes the other, or if one of them is a commonly known or guessable number, then you're probably violating copyright and trying to get away on a technicality.
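The random-split idea sketches directly as a one-time pad; here XOR plays the role of the subtraction above, and the "copyrighted image" is a stand-in byte string:

```python
import secrets

def split(b: bytes):
    """Split data B into two shares: a random pad A and C = B XOR A.
    Each share on its own is indistinguishable from random noise."""
    a = secrets.token_bytes(len(b))
    c = bytes(x ^ y for x, y in zip(b, a))
    return a, c

def combine(a: bytes, c: bytes) -> bytes:
    """Recombine the shares: A XOR C = B."""
    return bytes(x ^ y for x, y in zip(a, c))

b = b"stand-in for a copyrighted image"
a, c = split(b)
assert combine(a, c) == b   # together, the shares reproduce B exactly
```

This is exactly why distribution context matters: share A is generated without ever looking at B's content beyond its length, so only the pair carries the work.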

But it's not enough for it to simply be possible to "decrypt" something into another thing. A string of pure 0s can be "decrypted" into any image or text. A word processor will generate any copyrighted text if the user presses the right keys in the right combination. I think there has to be some level of intent or ease or information theory value such that the file is doing the majority of the work.

So I'll concede that if you make an LLM that will easily reproduce copyrighted material with simple descriptions and passwords, then I can see there being issues there. Similar to how if an author keeps spitting out blatant ripoffs of copyrighted works with a couple of words changed, they'll get in trouble. But simply having used them in the training material is not itself a copyright violation. A robust LLM that has trained on lots of copyrighted materials but refuses to replicate them verbatim is not a copyright violation simply for having learned from them (which seems to be the primary objection that artists are having, not the actual reproduction of their work, which I would agree is bad).

If I can ask an LLM for the "I have a dream" speech and it produces it, I have proven that the LLM contains a copy of the "I have a dream" speech and is therefore a copyright violation.

Except that LLMs don't explicitly memorize any text; they generate it. It's the difference between storing an explicit list of all numbers 1 to 100 {1,2,3...100}, and storing a set of instructions: {f_n = n: n in [1,100]} that can be used to generate the list. It has a complicated set of relationships between words that it understands, and is refined enough that if it sees the words "Recite the "I have a dream" speech verbatim", it has a very good probability of successfully saying each of the words correctly. At least I think the better versions do; many of them would not actually get it word for word, because none of them have it actually memorized, they're generating it anew.
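The list-versus-rule distinction in miniature (a toy illustration only; this is not how transformer weights actually work):

```python
# Storing the data itself: an explicit copy of every element.
stored = [1, 2, 3, 4, 5]           # imagine this written out to 100

# Storing a rule: no copy of the list exists anywhere, yet the
# whole list can be regenerated on demand from the rule alone.
def f(n):
    return n                        # f_n = n, for n in [1, 100]

regenerated = [f(n) for n in range(1, 101)]
assert regenerated[:5] == stored    # same output where they overlap
assert regenerated[-1] == 100       # extends to the full list
```

Whether regenerating counts legally the same as storing is exactly the question in dispute; the code only shows that the two storage strategies are mechanically different.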

Now granted, you can strongly argue, and I would tend to agree, that a word-for-word recitation by an LLM of a copyrighted work is a copyright violation, but this is analogous to being busted for reciting it in public. The LLM learning from copyrighted works is not a violation, because during training it doesn't copy them; it learns from them and changes its own internal structure in ways that improve its generating function such that it's more capable of producing works similar to them, but does not actually copy them or remember them directly. And it doesn't create an actual verbatim copy unless specifically asked to (and even then is likely to fail, because it doesn't have a copy stored and has to generate it from its function).

Who is the ‘human’ in this example?

That's an entirely different question. Obviously the LLM is not itself a human, but neither is a typewriter or computer which a human uses as a tool to write something. So the author for copyright purposes would probably be the person who prompts the LLM and then takes its output and tries to publish it. Especially if they are responsible for editing its text and don't just copy-paste it unchanged. You could make an argument that the LLM creator is the copyright holder, or that the LLM is responsible for its own output, which is then uncopyrightable since it wasn't produced by a human.

But regardless of how you address the above question, it doesn't change my main point that the AI does not violate the copyrights of the humans whose work it takes as input any differently than a human doing the same things would. Copyright law is complicated, but there's a long history and a lot of precedents, and individual issues tend to get worked out. For this purpose, the LLM, or a human using an LLM as an assistant, should be subject to the same constraints that human creators already are. They're not "stealing" any more or less than humans already do by consuming each other's work. You don't need special laws or rules or restrictions on it that don't already exist.

But you could make a similar argument that a human brain is a derivative work of its training data. Obviously there are huge differences, but are those differences relevant to the core argument? A neural net takes a bunch of stuff it's seen before and then combines ideas and concepts from them in a new form. A human takes a bunch of stuff they've seen before and then combines ideas and concepts from them in a new form. Copyright laws typically allow for borrowing concepts and ideas from other things as long as the new work is transformative and different enough that it isn't just a blatant ripoff. Otherwise you couldn't even have such a thing as a "genre", which all share a bunch of features that they copy from each other.

So it seems to me that, if a neural net creates content which is substantially different from any of its inputs, then it isn't copying them in a legal sense or moral sense, beyond that which a normal human creator who had seen the same training data and been inspired by them would be copying them.

If you look, it got more upvotes than the post it was responding to, so most likely people who saw it agreed but didn't have anything of their own to add in response.

I don't know about everyone else, but I don't dig into the responses on every top-level post, only the ones I find interesting. And I often miss responses if they come after I've already read the top-level post, since I usually don't go back to look for new ones. That's why I missed this one: I do read every top-level post, but I didn't care about this one.

I also more frequently respond to people I disagree with than people I agree with, because people I agree with already said half of my thoughts. So that's a bias towards non-response which was probably relevant here given how insightful your post was.

So I guess as a followup here:

Is there a solution? I think we'd both agree that this scenario is generally bad for society if businesses capture all of the gains, because that screws over the customers. Economic surplus is created by the economic trade between producers and customers, and thus both are partially responsible for it, so both deserve some of the surplus. Not necessarily exactly 50-50, but some reasonable fraction. So if producers capture 99% of surplus by near-perfect price discrimination and leave just a tiny scrap of surplus to customers to push them over the edge of indifference, then customers are being deprived of surplus that is rightfully theirs.

On the other hand, price discrimination is often more economically efficient than a flat rate.

Suppose we have 10 consumers who value a good with utility 1,2,3...10. And a producer who can produce the good with cost 2.

1: With a flat price for all customers, the producer maximizes profits by setting their price at 7 - ε, in which case they sell to 4 consumers. The total surplus is 26, of which 20 - 4ε is captured by the producer and 6+4ε is captured by consumers.

2: With perfect knowledge and price discrimination, the producer sells to each person with value greater than 2, at a price ε less than their valuation. They sell to 8 consumers, the total surplus is 36, 36-8ε is captured by the producer, and 8ε is captured by consumers.

So even though the consumers are better off in the flat price scenario, the total economic surplus created with price discrimination is higher. If we could somehow detect these scenarios and redistribute the surplus back to the consumers in a way that didn't distort the economic incentives of the producers or consumers, the price discrimination scenario is better. I will note that there's also a third scenario with comparable surplus:

3: If the producer is altruistic/non-profit, they can set a flat price equal to 2+ε, they sell to 8 people, the total surplus is 36, but now 36-8ε is captured by consumers and 8ε is captured by the producer.
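Dropping the ε terms, the surplus numbers in the three scenarios can be checked with a quick script (Python here purely for illustration; the valuations 1–10 and unit cost 2 are from the setup above):

```python
# 10 consumers with valuations 1..10, unit production cost 2.
values = list(range(1, 11))
cost = 2

# Scenario 1: flat price of 7. Buyers are everyone valuing the good at 7+.
buyers = [v for v in values if v >= 7]
producer = len(buyers) * (7 - cost)        # 4 sales at margin 5 -> 20
consumer = sum(v - 7 for v in buyers)      # 0 + 1 + 2 + 3 -> 6
print(producer, consumer, producer + consumer)  # 20 6 26

# Scenario 2: perfect price discrimination. Each buyer with value above
# cost pays (just under) their own valuation, so the producer takes it all.
buyers = [v for v in values if v > cost]
producer = sum(v - cost for v in buyers)   # (3-2) + (4-2) + ... + (10-2) -> 36
consumer = 0
print(producer, consumer, producer + consumer)  # 36 0 36

# Scenario 3: altruistic flat price at cost. Same 8 buyers, same total
# surplus of 36, but now the consumers capture essentially all of it.
buyers = [v for v in values if v > cost]
producer = 0
consumer = sum(v - cost for v in buyers)   # 36
print(producer, consumer, producer + consumer)  # 0 36 36
```

The point the numbers make is that scenarios 2 and 3 produce the same total surplus (36 > 26); they differ only in who captures it.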

So if the balance of power tips too far in either direction, one of the groups will snatch all of the surplus. I think a fair equilibrium would maximize surplus while splitting the distribution somewhere in the middle. Not necessarily 50-50, but somewhere in the ballpark. But how do you do that here? Taxes and explicit forms of redistribution usually distort incentives, but maybe there's something clever I'm not aware of?

I don’t like eating lentils and kale

Non-vegan with mostly the same opinions as you. But lentil soup is amazing (with ham still on the bone cooked in a slow cooker to thicken the broth). I agree that lentils without any meat are pretty lame though.

Kale soup with beans is decent too on occasion (and technically vegan), but it's not one of my favorites and I can see not liking it.

This seems absolutely terrible, comparable to affirmative action in nature. Artificially increasing demand for a thing lowers the standards it has to reach for the market to accept it. This can't have a positive impact on the amount of genuinely high-quality Canadian content, because content that is comparable to non-Canadian content is/was already able to compete on a level playing field without regulations demanding it be spread. So this only impacts low-quality content that wasn't previously good enough but is now accepted anyway to meet quotas. If you want people to consume your product, make a good product that people genuinely want to consume of their own free will; don't force it on them. Now the average piece of Canadian content people encounter will be lower quality than before, which actually reinforces stereotypes and breeds annoyance and resentment.

I can only see this going poorly.

Holy crap. That @ControlsFreak post on personalized pricing just blew my mind. I hadn't seen it when it was first posted, but I'm very glad that I did because it just changed my perspective on the whole financial assistance thing.

I have no idea if these exist, but if there's a company whose sole business model is buying/managing large servers and renting out the compute, then buying shares in that company is nearly equivalent to what you're looking for. The same could be done for other physical goods like trucks and CNC machines. As long as the company specializes heavily in a single type of thing and doesn't diversify, its shares should have approximately the same value as that thing.

I second this. I've played quite a bit and haven't spent a dime of real money on cards compared to the literal thousands my collection would be worth if bought physically (some of the mythic rares you need for the best decks go for hundreds of dollars each). I literally just crafted a deck rated at $300 an hour ago using wildcards I've been accumulating from just playing the game.