Hmm, maybe I am missing the point. What exactly do you mean by handling automatic updates in place? Like, the program that requires and parses the config file is watching for changes to the config file?
Until someone can't tell the difference between a tab and a space when configuring, or misses one level of indentation. Seriously, whoever thinks indentation should have semantic meaning for computers should burn in hell. Indentation is for us humans, not computers. You can write JSON with or without indentation if you want. Also, use JSON5 to get comments and other good stuff for a config file.
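For illustration, a hypothetical config snippet showing what JSON5 adds over plain JSON (the keys and values here are invented):

```json5
{
  // comments are allowed in JSON5
  host: "localhost",  // keys may be unquoted
  port: 8080,
  retries: 3,         // trailing commas are fine too
}
```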
Yep. Much like we don't treat phone numbers as numbers. The rule of thumb is that if you don't do any arithmetic with it, it is not a "number", it is numeric text.
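A toy Python illustration of the rule of thumb (the phone number is made up):

```python
phone = "0812-555-0199"  # hypothetical number: leading zero and separators matter

# Treating it as an arithmetic number destroys information:
as_int = int(phone.replace("-", ""))
print(as_int)  # 8125550199 -- the leading zero is gone

# Stored as a string, the formatting and leading zero survive,
# and we never needed arithmetic on it anyway.
assert phone.startswith("0")
```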
I can already imagine the generated logs being the hint. We usually automate those anyway, as it is closer to (D)DoS territory too.
Well, this is just my 2 cents. I think you misunderstand the point I am making. First of all, accept that translation is a lossy process. A translation will always lose meaning one way or another, and without writing a full essay about an art piece, you will never get the full picture of the art when it is translated. Think of it this way: does a haiku in Japanese make sense in English? Maybe, but most likely not. So anyone who wants to experience the full art must either read an essay about said art or learn the original language. But for a story, a translation can at least give you the gist of what is happening. A story inherently has events that must be conveyed, so a loss of subtlety can be tolerated, since the highlight is another element: the string of events.
Secondly, consider how the model works. GPT is a very bad fit for a translation model. A Generative Pretrained Transformer, well, generates something. I'd argue translation is not a generative task but rather a distance-calculation task. I think you should read up on how current machine learning models work. I suggest the 3Blue1Brown channel on YouTube, as he has a good video on the topic, and very recently Welch Labs also made a video comparing it to AlexNet, (arguably) the first breakthrough in computer vision.
Yeah, never mind, I didn't know what I wrote either. I need my sleep lol.
Depends on the application. When the user can define the schema via the database, you cannot assume the shape of the data.
GPL v2 doesn’t, which led to tivoization. But Linus himself didn’t agree with that stance.
I prefer CUID
Just to clarify: yes, I do know that not all use cases are appropriate for CUID. But in general, when generating IDs, I’d use CUID2.
And the other memes just mention moths vibrating their genitals to throw off bats.
Whether it is easy or not varies wildly. But the usual tasks are
That is the bare minimum, but we need to do more configuration to be able to boot. Hence the next task is configuring the following
That is it. Everything else is usually workload-specific. Like, if you want Arch to be a server, you usually don’t install a GUI. For a workstation or gaming, you need more steps, but they will vary depending on your hardware. The ArchWiki covers a good deal of hardware, from laptops to desktops, and their quirks.
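The usual tasks above can be sketched roughly like this (a non-prescriptive outline; the partition layout, hostname, and bootloader choice are examples only, so consult the ArchWiki for your hardware):

```shell
# Bare minimum: partition, format, mount, install the base system
cfdisk /dev/sda                              # partition the disk (example device)
mkfs.ext4 /dev/sda1                          # format the root partition
mount /dev/sda1 /mnt
pacstrap /mnt base linux linux-firmware      # install the base system
genfstab -U /mnt >> /mnt/etc/fstab           # generate the fstab
arch-chroot /mnt                             # enter the new system

# The extra configuration needed to boot: timezone, locale,
# hostname, root password, and a bootloader
ln -sf /usr/share/zoneinfo/Region/City /etc/localtime
locale-gen                                   # after uncommenting locales in /etc/locale.gen
echo myhostname > /etc/hostname
passwd
grub-install /dev/sda && grub-mkconfig -o /boot/grub/grub.cfg
```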
Well, maybe he refers to the branch with the greatest common ancestor of us and whales. So our branch of evolution can have mating calls that carry 100 km, rather than their branch with a measly 80 km.
To be fair, he could also just be fed up after a long time being ignored for what he thinks is quite an important design decision.
Oh, you mean privately owned CCTV that faces a public place. Yeah, I agree it is questionable, since public spaces are the jurisdiction of law enforcement. But I can also see it as someone with a hobby of hoarding data, an archivist, with the other extreme being, as you said, a voyeur. There is no way of knowing which, hence I also understand your irks toward it.
I think Valve has been silently collecting the data for their internal team all this time while they didn’t act. It’s always a cat-and-mouse game after all, but if the cat is patient, it may catch a lot of mice. And remember, they don’t have as invasive an anti-cheat as other games of the same genre.
Genuinely curious, why do you hate public cameras/cctv?
Words strung together form a sentence which carries meaning, yes, that is language. And the order of those words affects the meaning too, as in any language. An LLM will then reflect statistically significant word co-occurrences as features in a higher-dimensional space. Now, LLMs themselves don’t understand it, nor can they reason from those features. But they can compute distances in that space. And if, for example, similar kanji and their translations appear together often enough, the LLM will place said kanji and translation closer together in the feature space.
The larger the token window, the more context and the more precise the features will be. You should understand that an LLM will not look at a single kanji in isolation; rather, it can read the whole page or book. So a single kanji may be statistically paired with the word “king” or whatever, but with context from the previous tokens it can become another word. And again, if we know the literary art form in advance, we could use a different model for the type of language usually used there. You can have a shonen manga translator, for example, or a model for isekai web novels. Both will give the best results for their respective types of art.
I am not saying it will give 100% correct results, but neither does human translation, as it will always be a lossy process. But you do need to understand that statistical models aren’t inherently bad at embedding different meanings of the same word. “Ruler” in isolation will be statistically likely to mean either a measuring tool or a person in charge of a country, depending on the model used. But “male ruler” will sit at a significantly different location in the feature space than the former sense, or closer to it in the latter case.
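The "distance in feature space" idea can be sketched with toy 3-dimensional embeddings. Real models learn vectors with hundreds of dimensions from data; the vectors below are invented purely for illustration of how context separates the two senses of "ruler":

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up embeddings: one axis leans "measurement", one leans "royalty"
embeddings = {
    "ruler (measuring)": (0.9, 0.1, 0.0),
    "ruler (monarch)":   (0.1, 0.9, 0.2),
    "king":              (0.0, 1.0, 0.3),
    "centimeter":        (1.0, 0.0, 0.1),
}

# Context like "male ruler" pushes the word toward the monarch sense,
# which sits near "king"; the measuring sense sits near "centimeter".
print(cosine_similarity(embeddings["ruler (monarch)"], embeddings["king"]))
print(cosine_similarity(embeddings["ruler (measuring)"], embeddings["centimeter"]))
```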
What if… we add more dimensions?
Ahh, then the modification must be done at the AST level, not on the in-memory representation, since whichever way you do it, you must retain the original.
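A quick stdlib illustration of why the plain in-memory representation is not enough: Python's `ast` module discards comments and formatting, so serializing the tree back cannot reproduce the original file. In-place rewriters therefore work on a lossless (concrete) syntax tree that keeps the original tokens attached:

```python
import ast

src = "x = 1  # this comment must survive an in-place edit\n"

tree = ast.parse(src)     # abstract tree: comments and spacing are dropped
print(ast.unparse(tree))  # prints "x = 1" -- the comment is gone

# Hence tools that rewrite files in place use CST-style trees
# that retain the original trivia alongside the structure.
```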