

Math students in university need to verify basically everything; that’s a lot of what the degree is about. I remember being humbled when asked to prove something as familiar to everybody as (-1) * (-1) = 1.
Your comment smells like an opened can of fish left in the sun, eaten, then pooped out.
I feel like if there’s anyone being Reddit-like, it’s you.
We don’t know if π+e is irrational.
We don’t know if π*e is irrational.
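For what it’s worth, at least one of them must be irrational (a standard argument, sketched below):

```latex
% If both p = \pi + e and q = \pi e were rational, then \pi and e would be the
% roots of the quadratic
x^2 - (\pi + e)\,x + \pi e = (x - \pi)(x - e) = 0,
% whose coefficients would be rational, making \pi and e algebraic.
% That contradicts their transcendence, so p and q cannot both be rational.
```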
That’ll make me really glad :)
We can also exchange contact information via private message if you want to.
Sadly, no. However, you could maybe keep a personal blog, similar to what Terence Tao does.
I really encourage you to try, it could help you find new stuff, check for mistakes, clarify ideas, and maybe even hear ideas from others.
Last year I went to Rock al Parque, a free Colombian rock/metal festival. I went inside the “pogos” (mosh pits), a sort of violent dancing common at metal concerts where everybody pushes everybody. I stayed there basically all night, despite being a very thin and physically weak person.
I think it was the most fun I’ve ever had in a social event.
How is that a “regime position”?
You are only saying this because you agree with general regime positions…
Please name 2.
Censorship and bias are nowhere near as bad in Chinese models. Try even a local DeepSeek model and you’ll see it.
It would work the same way; you would just need to hook it up to your local model. For example, change the code to compute the embeddings with your local model and store those in Milvus. After that, do the inference by calling your local model.
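A minimal sketch of that, assuming `sentence-transformers` for the local embedding model and `pymilvus`’s MilvusClient (model name, collection name, and texts are just placeholders; swap in whatever you actually run locally):

```python
# Sketch: local embeddings + Milvus. Assumes sentence-transformers and pymilvus.
from pymilvus import MilvusClient
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # any local embedding model
client = MilvusClient("./local_rag.db")           # local file, persists across runs

DIM = model.get_sentence_embedding_dimension()    # 384 for this example model
if not client.has_collection("docs"):
    client.create_collection(collection_name="docs", dimension=DIM)

docs = ["Milvus is a vector database.", "Embeddings map text to vectors."]
vectors = model.encode(docs)
client.insert(
    collection_name="docs",
    data=[{"id": i, "vector": vectors[i].tolist(), "text": docs[i]} for i in range(len(docs))],
)

# At query time: embed the question with the same local model, search Milvus,
# then pass the retrieved chunks to your local LLM for the inference step.
hits = client.search(
    collection_name="docs",
    data=[model.encode("What is Milvus?").tolist()],
    limit=2,
    output_fields=["text"],
)
print(hits)
```

Pointing MilvusClient at a local file gives you the persistent store I mention below, so the vectors survive restarts.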
I haven’t used inference through a local API, so I can’t help with that, but for embeddings I used this model and it was quite fast; plus, it was a top-2 model on the Hugging Face leaderboard. Leaderboard. Model.
I didn’t do any training, just simple embed + inference.
The Milvus documentation has a nice example: link. After this, you just need to use a persistent Milvus DB instead of the ephemeral one in the documentation.
Let me know if you have further questions.
OP can also use an embedding model and work with vector databases for the RAG.
I use Milvus (vector DB engine; open source, can be self-hosted) and OpenAI’s text-embedding-3-small for the embedding (extreeeemely cheap). There are also some very good open-weights embedding models on Hugging Face.
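If it helps, this is roughly what the embedding call looks like with the current `openai` Python SDK (the texts are placeholders; the vectors can then go into Milvus like in the sketch above):

```python
# Sketch: embeddings from OpenAI's text-embedding-3-small (1536 dimensions).
# Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
texts = ["first chunk of a document", "second chunk of a document"]

resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
vectors = [item.embedding for item in resp.data]

print(len(vectors), len(vectors[0]))  # 2 vectors, 1536 floats each
```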
If I remember correctly, you can also put a water drop on the lens and it will magnify the image.
Back in university, I studied basically all day long, which got tiring after long study sessions, even with friends. My great superpower was that it used to take me just ~10 seconds of resting with my eyes closed to feel a huuuuge boost of energy that lasted 1-2 hours. After that boost wore off, I just did it again.
Incredibly useful.
Genuine question: why would Denmark be happy/help Greenland become independent?
If you don’t mind connecting directly via your IP address, you don’t even have to pay for a domain. I’ve been doing this for around 2 years.
That seems interesting. Do you have any material/link/blog on this?
No direct answer here, but my tests with models from HuggingFace measured about 1.25GB of VRAM per 1B parameters.
Your GPU should be fine if you want to play around.
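For a quick sanity check, here is the rule-of-thumb math (the 1.25 GB per 1B figure is just what I measured; weight-only FP16 is closer to ~2 GB per 1B params and 4-bit quantization to roughly 0.5-0.7 GB, before KV cache):

```python
# Rough VRAM estimate: ~1.25 GB per 1B parameters (my own measurements;
# real usage depends on precision, quantization, and context length).
def estimate_vram_gb(params_billions: float, gb_per_billion: float = 1.25) -> float:
    return params_billions * gb_per_billion

for size in (1, 3, 7, 13):
    print(f"{size}B params -> ~{estimate_vram_gb(size):.2f} GB VRAM")
```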
This is not no-account YT, but no-cookies YT. If you’re interested in trying this, some extensions will automatically delete cookies for certain websites for you. I use Cookie AutoDelete.