remux vs reencode itself is a big point for video noobs such as myself.
in the past, cropping out a part of a video would mean re-encoding it with some random preset, which often took longer than it should. i only accidentally realized the difference when trying out avidemux [1] and clipping videos together blazing fast (provided they share the same container and format)!
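for reference, the same lossless cut can be done with ffmpeg's stream copy. a minimal sketch that only builds the command line (the file names and timestamps here are made up):

```python
def remux_cmd(src, dst, start=None, end=None):
    """Build an ffmpeg command that cuts a clip without re-encoding."""
    # -c copy copies the streams bit-for-bit: near-instant and lossless,
    # but cut points snap to keyframes and the output container must
    # support the input codecs
    cmd = ["ffmpeg", "-i", src]
    if start:
        cmd += ["-ss", start]
    if end:
        cmd += ["-to", end]
    cmd += ["-c", "copy", dst]
    return cmd

print(remux_cmd("in.mp4", "clip.mp4", start="00:01:00", end="00:02:00"))
```

swapping `-c copy` for an encoder preset is what turns the same operation into a slow re-encode.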
> just use a VM with PCIe passthrough to pass in a gpu and to load up a game for windows or use CAD, etc. Seriously, ez.
as pointed out by others and throughout the thread, anti-cheat is very restrictive inside a vm. even cloud streaming fails to support some popular titles from EA and R* due to this.
meanwhile, WSL exists, now provides good gpu passthrough, and has a higher success rate with this kind of setup.
while most of the discourse is around text and (multimodal) LLMs, the past year has been quite interesting for other media as well. i suppose the "slop" section did hint at it briefly.
while LLM-generated text was already a thing over the past couple of years, this year images and videos had their "AI or not" moment. the impact appears bigger than in our myopic world of software. another trend towards the end of the year was "vibe training" of new (albeit much smaller) AI models.
personally, getting a project up and running has been easier than ever, but unlike OP, i don't share the same excitement to build anymore. perhaps vibe coding from a phone will get more streamlined with a killer app in 2026.
poured way too many hours into this game long ago, before it became too painful to play. this almost made me go back and check on the madness, but unfortunately the servers have been taken offline.
while i don't agree with how the devs and the publisher handle community feedback, it is still miles better than what EA does. not that that is a high bar to clear.
the related post from simonw is insightful, and while the reaction is intense, this part was technically interesting:
> Turns out Claude Opus 4.5 knows the trick where you can add .patch to any commit on GitHub to get the author’s unredacted email address (I’ve redacted it above).
given how capable certain aspects of these models are becoming over time, the user's intent matters more than ever. the resulting email reads like poorly-made spam (minus the phishing parts), yet the model managed to contact someone from just their name!
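the trick itself is just that `https://github.com/<owner>/<repo>/commit/<sha>.patch` returns `git format-patch` output, whose `From:` header carries the commit author's unredacted email. a small sketch of pulling the address out of that format (the patch text below is made up):

```python
import re

# sample of the git format-patch header that a github commit .patch
# URL returns; the name, email, and sha here are fabricated
SAMPLE_PATCH = """\
From 1a2b3c4d5e6f Mon Sep 17 00:00:00 2001
From: Jane Doe <jane@example.com>
Date: Mon, 1 Jan 2024 00:00:00 +0000
Subject: [PATCH] fix typo
"""

def author_email(patch_text):
    """Extract the author's email from a format-patch From: header."""
    # the first line is the mbox separator ("From <sha> ..."); the
    # colon in "From:" distinguishes the actual author header
    m = re.search(r"^From: .*<([^>]+)>", patch_text, re.MULTILINE)
    return m.group(1) if m else None

print(author_email(SAMPLE_PATCH))  # jane@example.com
```

committing with a noreply address (github's `users/<name>/noreply` setting) is what keeps this header redacted.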
the title and the core argument do not seem to align much. the subject is git, but most of the discourse is about github. the role discussed is serving packages, while the title calls it a "database".
regardless of the semantics, git is not ideal for serving files. this has become more apparent in the ai world, where extensions such as git lfs have allowed larger file sizes.
but as seen elsewhere, network effects trump any design issues. we could always introduce an "lfs" for better shallow fetching (cached? compressed?) and that would resolve a majority of the op's grievances.
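for context on how lfs bolts onto git: it routes matching files through a filter declared in `.gitattributes`, so the repo stores small pointers and the blobs live on a separate server. the pattern below is just an example of what `git lfs track "*.safetensors"` writes:

```
*.safetensors filter=lfs diff=lfs merge=lfs -text
```

a hypothetical "shallow fetch" extension could hook in at a similar layer, without changing git's core object model.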
> If I had a dollar for every time I've helped somebody untangle the mess of python environment libraries created by an undocumented mix of python delivered through the distributions package management versus native pip versus manually installed...
macos and linux usually come with a python installation out of the box. windows should be following suit, but regardless, using uv vs venv is not that different for most users. in fact, to use uv in a project, `uv venv` seems like a prerequisite.
> macos and linux usually come with a python installation out of the box
Yep. But it's either old or broken or both. Using a tool not dependent on the python ecosystem to manage the python ecosystem is the trick here that makes it so reliable and invulnerable to issues that characterize python / dependency hell.
imho the dependency hell is a product of the dependencies themselves (a la node), especially the lack of version pinning in the majority of projects.
conda was already independent of the system python distribution, but it still had its own set of problems, including overlap with pip (see mamba).
i personally use uv for projects at work, but for smaller projects, `requirements.txt` feels more readable than the `toml` and `uv.lock`. in the spirit of encouraging best practices, it is probably still simpler to do it with the older tools. larger projects definitely benefit, though, such as when building container images.
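to illustrate the readability trade-off, here is a pinned `requirements.txt` next to a rough `pyproject.toml` equivalent (package names and versions are just examples):

```
# requirements.txt: flat and readable, but no lockfile
# and no python version constraint
requests==2.31.0
numpy==1.26.4
```

```toml
# pyproject.toml: more ceremony, but uv resolves this
# into a full uv.lock, including transitive pins
[project]
name = "example"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "requests==2.31.0",
    "numpy==1.26.4",
]
```

the second form pays off once transitive dependencies or container builds enter the picture, since the lockfile makes installs reproducible.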
[1] http://fixounet.free.fr/avidemux/