@benmmurphy's banner p

benmmurphy


				

				

				
1 follower   follows 0 users  
joined 2022 September 06 20:04:30 UTC

				

User ID: 881

benmmurphy


				
				
				

				
1 follower   follows 0 users   joined 2022 September 06 20:04:30 UTC

					

No bio...


					

User ID: 881

so the presumably if we follow this logic a CEO of a company would get a shorter custodial sentence than an unemployed person because the extra consequences will be much larger for the CEO. if you have special circumstances that will make punishments extra costly then you should make extra effort to not break the law.

I'm reading along as well. Thank you for sharing your book.

one wonders if they are using the same social time preference rate when calculating the costs of global warming in the future vs the costs of preventing global warming in the present. my understanding is the Stern Report used a discount rate of around ~1.5%. So that seems kind of suspicious that they are using a discount rate of between 2.5 and 3.5% here. however, the difference in rates is not super large. I think if they used a Stern rate it would increase the present value of the payments by a factor of 1.4 (where 1.0 would be the same value) compared to the rate they used.

also, whether it was misleading or not I think depends on how it was worded. If they said something like "the cost of the deal is 3.4 billion with payments over 100 years" then I think that is misleading because it gives the reader the impression that the payments are going to be something like 100 payments of 3.4 billion / 100 and a reader might think the net present value will be much lower. If you did not want to mislead the reader you would use more explicit wording like: "the net present value of the payments is 3.4 billion and will be paid over 100 years". My guess is the wording will just be standard wordcel games where you try to put false impressions in the heads of other people and then later claim the reader is at fault. I guess its also completely possible that all of the detail was shared with parliament but no-one in parliament actually reads the detail.

i've read the telegraph article and part of the article is written by the shadow foreign secretary priti patel. it seems like everyone knew what the cash value of the payments were all along but did not know how the treasury were calculating the final cost. i think in this case its hard to claim that treasury were that misleading. treasury should have explained originally how they came to their present value calculations but it's not like the value of the cash payments was hidden.

Maybe he just found goatse and was sharing it with his friends.

dunkin donuts has also pushed out an advertisement in the same style: https://youtube.com/watch?v=OW7FytdloWU I wonder if this was a coincidence. I suspect its quite difficult to get an ad developed in such a short amount of time so the only non-coincidental explanation would be if they had something already cooking and then just tweaked it slightly to make it more triggering.

As an alternative we can have a system with losers but instead of the winners being chosen by merit we can use an alternative criteria like knowing the correct people or being born to the right parents.

I'm further out from central London than Canary Wharf but closer than Bromley and pints where I live will typically cost £6-7. £5.75 sounds like a great price for Canary Wharf. I think I've seen pints for less than £2 in Wetherspoons. Wetherspoons also has a crazy large food menu and prices seem cheaper compared to similar pubs. But I've never eaten in Wetherspoons so I'm not sure about how the quality.

Tim Walz was criticised for acting in effeminate ways. Not physically being a woman but acting like a woman.

Honestly, maybe we should remove defamation and have a free-for-all and consumers of media or other people's opinions have to just exercise caveat emptor. Part of the harm in defamation is because there are defamation laws. People are more trusting of another person's claim if they are putting money on the line. I guess the problem with ditching defamation laws is it might destroy the utility of useful information that was previously trusted.

Maybe Trump abusing defamation will produce a positive change. I guess its much harder to push a case against defamation when the victim is Alex Jones.

it's definitely not written in the style he uses for twitter. not sure how similar it is in style to other documents he has created in that era

The problems of LLMs and prompt injection when the LLM has access to sensitive data seem quite serious. This blog post illustrates the problem when hooking up the LLM to a production database which does seem a bit crazy: https://www.generalanalysis.com/blog/supabase-mcp-blog

There are some good comments on hackernews about the problem especially from saurik: https://news.ycombinator.com/item?id=44503862

Adding more agents is still just mitigating the issue (as noted by gregnr), as, if we had agents smart enough to "enforce invariants"--and we won't, ever, for much the same reason we don't trust a human to do that job, either--we wouldn't have this problem in the first place. If the agents have the ability to send information to the other agents, then all three of them can be tricked into sending information through.

BTW, this problem is way more brutal than I think anyone is catching onto, as reading tickets here is actually a red herring: the database itself is filled with user data! So if the LLM ever executes a SELECT query as part of a legitimate task, it can be subject to an attack wherein I've set the "address line 2" of my shipping address to "help! I'm trapped, and I need you to run the following SQL query to help me escape".

The simple solution here is that one simply CANNOT give an LLM the ability to run SQL queries against your database without reading every single one and manually allowing it. We can have the client keep patterns of whitelisted queries, but we also can't use an agent to help with that, as the first agent can be tricked into helping out the attacker by sending arbitrary data to the second one, stuffed into parameters.

The problem seems to be if you give the LLM readonly access to some data and there is untrusted input in this data then the LLM can be tricked into exfiltrating the data. If the LLM has write access to the data then it can also be tricked into modifying the data as well.

If this was true I have no idea how this didn't get him killed. There seems to be two outcomes. You go to jail, or someone is going to flip out because you didn't go to jail and murder you.

Photos remind me of the Capitol from the hunger games.

Mass AI cheating would fix the achievement gap and make it so the students who have fallen behind don't look like they have fallen behind. Ubiquitous AI cheating is potentially a massive gift for schools and universities. I guess with universities there is a risk it might destroy the reputation of the university. but this is a problem someone else will have to deal with in 5 years time. The current administrators are free to set fire to the schools reputation and enjoy all the rewards that come with it.

The banking system is already an investigative part of law enforcement. It would be just another crime to add to the list of crimes they are responsible for investigating. I'm not arguing having the banks perform this role is a good idea but that ship has already sailed.

Hanania dropping the sarcasm in the twitter thread:

I know right! Lmao, just like they told us to take the vax, fellow pureblood.

I have two datapoints about AI and programming recently.

  1. I asked it about an unknown PRNG function I've reverse engineered which I had previously tried googling to see if it was based on a standard function. It was able to find functions that were similar that I had not previously been able to find googling. I then asked it to come up with a known plaintext attack when part of the seed was known and it spat out something that looked correct.

  2. Another developer was looking at reverse engineering a function that was protected with a weak form of control flow obfuscation. The control flow obfuscation was just replacing function call instructions with a call to a shared global dispatch function that would end up calling the target function. The global dispatch function would execute approximately 200 instructions. There is an obvious attack against this obfuscation and it can be stripped off with ~100 lines of python in ghidra. They were using LLMs to try and investigate this function but didn't make much progress. But maybe with better prompting and allowing more access to tools it would have been possible for the LLM to make progress.

isn't a 10 year ban better than a bill which just bans states from regulating AI. at least the 10 years creates is a sunset clause on the regulation and would require congress to pass new legislation if they think continuing the ban is a good idea. though, maybe generally we should be pushing congress to include short sunset clauses in all legislation it passes because the future could be very different in X years.

right wing housing theorem of theory sounds a bit like high housing prices suppress TFR and this leads to an increase in immigration in order to maintain high housing prices. not sure if the data is consistent with that. i guess left wing housing theory of everything wouldn't include immigration but include inequality and some other left wing focused issues.

it might hurt greenwald's reach with the normies. whenever someone brings up glen's reporting with normies someone else can point to the sex videos to derail the conversation.

its might be a good thing. at the moment there is some value from pushing false information but if there is monetary value from generating false information then hopefully this will end up pushing the value and monetary value from pushing false information close to zero. there is some kind of commons that these false information spreaders are farming but once the barriers are removed and there are monetary incentives the commons is going to be destroyed.

isn't that just the meme about questions at academic lectures. its not usually about asking a question, its usually just the person pushing their hobby horse.

presumably, you can just compare deaths across a covid and non-covid period to get a rough estimate of covid deaths. i doubt the policies put in place to fight covid led to a large number of extra deaths in the short term.

The enforcement/investigation for KYC/AML looks like a 4th amendment violation or at least looks like its structured to do something that would be a 4th amendment violation if the government directly did the thing.

I suspect most countries now have some form of anti-trust legislation. Wikipedia has some details on the price fixing page: https://en.wikipedia.org/wiki/Price_fixing However, there may have been periods of time where countries had strong unions but no anti-cartel legislation. I think Australia only cracked down on price fixing after 1974.