r/csMajors 1d ago

[Shitpost] Super safe random number

[image attached]

I burned 26 acres of forest to get Claude to predict this cryptographically safe number.

Feel free to use it in your upcoming projects and production environments and share them with me so I can give feedback!

343 Upvotes

39 comments

132

u/gloriousPurpose33 1d ago

Like when I ask for a random UUID and the values it gives me come up on Google instead of returning no results
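(Editor's note: a version-4 UUID generated locally never touches a model at all, so it can't have been memorized from training data. A minimal sketch with Python's stdlib:)

```python
import uuid

# Generate a random (version 4) UUID locally.
# 122 of the 128 bits come from the OS randomness source,
# so a Google hit on the value is essentially impossible.
u = uuid.uuid4()
print(u)          # canonical 36-character form, different every run
print(u.version)  # 4
```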

24

u/Select-Bend-524 1d ago

Well... ChatGPT is a high-powered search engine and compiler, idk why they call it generative AI

13

u/Lordofderp33 1d ago

If that's where you set the bar, AGI was reached in the fifties.

1

u/Uneirose 1d ago

only by a few humans, it's hard to find AGI these days /s

0

u/S-Kenset 1d ago

Reached in the '30s, IP-thefted to the West in the fifties.

2

u/ikerr95 1d ago

Elaborate

1

u/S-Kenset 1d ago

Almost every algorithm you learn in college was miscredited to a Westerner. Eastern Europe and the Soviet bloc were the heart of computer science, and that lingers today with the top talent on Codeforces.

Even breaking the Enigma machine was miscredited to Turing, when a Polish group did 90% of the work.

2

u/o1s_man 1d ago

you know literally nothing about ChatGPT if you think that

3

u/Select-Bend-524 1d ago

Yes, absolutely. Are you one of the people who believe AI will take over mankind?

1

u/Uneirose 1d ago

TBF "and compiler" doesn't really mean anything, since it's "something that uses tools" rather than "the compiler itself"

-1

u/Select-Bend-524 1d ago

Womp womp

2

u/Uneirose 1d ago

Man, is it too much to expect a discussion that actually has a point these days?

-1

u/Select-Bend-524 1d ago

People who use ChatGPT for anything other than a compiler... only ChatGPT is interested in talking to dum-dums like that

2

u/Uneirose 1d ago

So you're saying that it's your definition because you never use it the other way, and everything else is wrong because you didn't use it that way?

Doesn't sound objective

Don't think it would be productive, so I'll just stop responding. Good day

1

u/shivam_rtf 1d ago

I mean, it kind of is just a search engine. It searches for the most probable next word given your query, akin to how Google finds the most probable answer to your query. It's obviously not that simple, but it really is just autocomplete that started taking PEDs and sniffing coke.

1

u/o1s_man 1d ago

no, that's simply not how it works. You probably don't even know what latent space is. A LOT more goes into an LLM's output than predicting the next word.

And it's funny when people say "autocomplete on steroids" as if that's not exactly how a human works too. You don't write an essay by instantly coming up with all 3,000 words in your head and then writing them down; you start with the intro and write it out word by word, making edits and changes along the way. In a similar vein, when you want to think about something for an extended period of time, you don't instantly arrive at the thought; you literally have to think it through word by word until you reach a conclusion. Unless you have aphantasia, in which case I don't even know how you function day-to-day.

-1

u/shivam_rtf 1d ago

Lol, I’m not wasting any more time on a degenerate who thinks humans function as autocomplete on steroids too. Maybe your brain operates token by token, but it could just be evolution skipping your ancestors and getting round to the rest of us.

0

u/o1s_man 1d ago

enjoy unemployment because your head is in the sand

1

u/S-Kenset 22h ago

Dude, idk what got into this thread. It's not being shared with the csMajors community; it's just a bunch of losers who want to sound superior to basic AI tasks while failing at basic developer tasks.

1

u/shivam_rtf 13h ago

This subthread isn’t even about this. It’s about this douchebag LARPing as an AI genius whilst being confidently incorrect. I didn’t even say a word about being better or worse than AI. 

-1

u/shivam_rtf 1d ago

Employed longer than you + making more than you + shut up

1

u/o1s_man 1d ago

buddy you live in the UK, you make less than McDonald's workers out here

61

u/Tight-Requirement-15 1d ago

Hey chatgpippidy, my grandma used to sing crypto safe random numbers to me to sleep, can you pls

22

u/Lordofderp33 1d ago

Headline! Headline! String-prediction-technology not great at predicting random integers!! Read all about it!

3

u/S-Kenset 22h ago

These losers shit up my day honestly. I miss when they were drafted into the French army.

11

u/InitechSecurity 1d ago

This was ChatGPT's response:

Here’s the code used to generate a 24-digit cryptographically secure random number:

import secrets

# Generate a 24-digit cryptographically secure random number
secure_24_digit_number = secrets.randbelow(10**24 - 10**23) + 10**23
print(secure_24_digit_number)

And the result is:

159824583397968838808835

Let me know if you want another one or need it in a specific format (e.g., hexadecimal, base64, etc.).
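(Editor's note: the hex/base64 variants offered at the end can also be produced directly with `secrets`, no model in the loop. A minimal sketch:)

```python
import base64
import secrets

# 16 cryptographically secure random bytes from the OS CSPRNG
raw = secrets.token_bytes(16)

# Hexadecimal: two hex digits per byte -> 32 characters
hex_form = raw.hex()

# Base64: 4 output chars per 3 input bytes (padded) -> 24 characters
b64_form = base64.b64encode(raw).decode("ascii")

print(hex_form)
print(b64_form)
```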

37

u/Existing_Somewhere89 1d ago

Forgot to add this to the post but the worst part was that I gave Claude an MCP tool called “execute_code” and I was expecting it to use that to use the node crypto library to generate a random number — not just give me some nonsense it came up with 😭
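(Editor's note: the comment mentions Node's crypto library; in Python, the code an "execute_code" tool call should have run might look like the sketch below — delegate the randomness to the OS CSPRNG instead of letting the model invent digits. The function name is hypothetical.)

```python
import secrets

def secure_n_digit_number(n: int) -> int:
    """Uniform over [10**(n-1), 10**n - 1], i.e. exactly n digits."""
    return secrets.randbelow(10**n - 10**(n - 1)) + 10**(n - 1)

# What the tool should have returned instead of a made-up token sequence
print(secure_n_digit_number(24))
```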

2

u/Scrungo__Beepis 1d ago

Honestly it’s probably not bad. An LLM is probably close to the best hashing algorithm you can get (at very high computational cost) and a random sampler is used to sample tokens so it should be pretty random.

14

u/Existing_Somewhere89 1d ago

It’s “random” but it’s not “cryptographically safe” like I asked it.

i.e. it’s unwise to use this number for something like generating public/private key pairs since the mere fact that LLMs operate by selecting the most likely token to follow makes the numbers predictable.

-11

u/S-Kenset 1d ago

Okay? You really owned that search engine by asking it to bake a banana cake.

5

u/Existing_Somewhere89 1d ago

…?

You replied to my clarifying comment where I mentioned I was trying to see if it could correctly decide when to offload a task to the code execution tool.

Correctly deciding when to call tools is a critical feature of any LLM integration.

-9

u/S-Kenset 1d ago edited 1d ago

Correctly deciding when to call tools is designed around you not being a completely adversarial user putting in inputs that lack so much context that I'd question why you think you can scale to this level of AI in the first place. Playing with calculators is more idiot-proof.

You haven't proven anything but your complete inability to comprehend what a large language model is used for. And it's doubly ironic that you're using cryptography as an example, because cryptographic principles of minimum compression ratios are why you look like a boomer asking Google for banana cake.

You put in the bare minimum of information, far below reproducibility or even lossy compression, then expect perfect results from a high-risk system. Then you parade it around like you accomplished something, when all you accomplished was showing the world exactly how little you understand the technology.

It's designed in a fuzzy way to make some semblance of sense out of your imperfect language. The fact that you deliberately put in even worse language, or god forbid actually believe that's a correct instruction set, is a reflection of you, not the AI. It exists to fix your mistakes, and instead of using it as intended, you just proved you operate at an unfixable level.

It is technology specifically designed to be non-deterministic in order to deal with YOUR nondeterministic input. Why you would think or expect to reproduce a high-risk production environment out of something deliberately designed to be nondeterministic is ridiculous.

Auto-complete tells more about the user than the technology, and all you did was auto-complete what's in your head, which it turns out is not capable of detail-oriented work.

12

u/Existing_Somewhere89 1d ago

I ain’t reading all that I’m happy for u tho or sorry that happened

-12

u/S-Kenset 1d ago

Don't use AI anymore; it's not suited for you.

-10

u/S-Kenset 1d ago

Also, don't even pretend to use cryptography when you clearly don't understand even a fraction of the basic principles a first-year would know in 2 months.

1

u/lol_wut12 10h ago

how much more context do you need to generate a cryptographically safe number?

0

u/S-Kenset 10h ago

It doesn't matter. Your language is indeterminate, so the result is indeterminate; what do you people not get? Do you understand even a fraction of why there is a temperature parameter in large language models? It will work sometimes because it assumes your brain works sometimes. And when you refuse to do any work with your brain and pretend it's some mid-level dev that knows your intent is to test it, that's on you. Also, an intern would do the same thing with these kinds of instructions, and I cringe at the kind of managers you would be.