Website hosted on a 24 year old Linux server (serialport.org)
154 points by j4nek on April 6, 2023 | hide | past | favorite | 119 comments


I remember how magical and mysterious a view counter used to feel when I was a kid. How did the website know when someone visited it? Where was the count stored? And how on earth could a website update its own HTML to show the new count?

Geocities days were magical. I wonder if kids today have similar thoughts about Twitter profiles and such. It’s probably much easier to pick up the knowledge — back then, it was balkanized across IRC and whatever we used before Google.

I also started writing a FF7 walkthrough all on a single page, because I didn’t know how to link from one page to another. I made the different chapters different font colors, reasoning that it was good enough to change the color when you’re trying to figure out which section to get to. Good times…

EDIT: Whoa. The view count is real time. You can see how quickly HN is hammering the page by refreshing it. The guestbook works too. I wonder what the tech stack looks like.

Doing this in modern times would be so complicated. You’ll need a database, then software to talk to the database, and maybe webpack.


> Doing this in modern times would be so complicated.

No, it wouldn't. All you need to store this kind of data is a simple file. Most people would probably use PHP for something like that due to its ubiquity, but it could even be a few lines of bash, or Python, or Go, or whatever...
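For example, a no-frills counter really is just a few lines. A Python sketch (the filename is whatever you pick), deliberately ignoring concurrency for the moment:

```python
import os

def bump_counter(path):
    """Read the current hit count from a plain text file, add one,
    write it back, and return the new value. No locking at all:
    fine for a toy page, lossy under truly concurrent hits."""
    count = 0
    if os.path.exists(path):
        with open(path) as f:
            count = int(f.read() or 0)
    count += 1
    with open(path, "w") as f:
        f.write(str(count))
    return count
```

A CGI script would call this once per request and print the result into the page.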


> All you need to store this kind of data is a simple file.

You most likely need a WAL-mode SQLite database. Most of the time, it's way simpler than handling state in concurrent situations yourself. (Also, bindings are often available, if not outright bundled by default, in most common languages.)

The "easy" way is all fun and games until your file is accessed concurrently, and then your options are:

a) flat out die when concurrent access happens (file locks; default sqlite3 behaviour);

b) just write blindly to the file and pretend concurrency doesn't exist, but randomly lose data (write to files directly like a crazy person; sqlite3 PRAGMA schema.synchronous = OFF)

c) allow reading at any time, but serialize writing somehow (file locks + write + atomic move file operations; append-only writing and a journal; sqlite3 PRAGMA journal_mode=WAL)
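Option (c) might look like the following Python sketch, assuming POSIX advisory-lock semantics (fcntl is Unix-only; the filename is illustrative). Readers can open the file any time; writers queue up on an exclusive flock so no two processes read-modify-write the count at once:

```python
import fcntl
import os

def bump_counter(path):
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # blocks until we hold the lock
        data = os.read(fd, 64)
        count = (int(data.decode()) if data.strip() else 0) + 1
        os.lseek(fd, 0, os.SEEK_SET)
        os.ftruncate(fd, 0)
        os.write(fd, str(count).encode())
        return count
    finally:
        os.close(fd)                     # closing the fd releases the lock
```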


> You most likely need a WAL-mode SQLite database.

Overkill. If you're writing to a text file only to increment a visitor count, none of what you mentioned above is even required.

Create a "counter-hit" file for each visitor, then count the number of files in the resources directory. Populate the master file with the file count and delete all the temporary files. Set it up in a crontab to run, say, every three seconds.
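That scheme could look something like this in Python (the original was presumably Perl; directory and filenames here are made up). Creating a file is atomic, so concurrent visitors never collide, and only the periodic roll-up touches the master count:

```python
import glob
import os
import tempfile

def record_hit(hits_dir):
    # One empty temp file per visit -- no locks needed.
    os.makedirs(hits_dir, exist_ok=True)
    fd, _ = tempfile.mkstemp(prefix="hit-", dir=hits_dir)
    os.close(fd)

def roll_up(hits_dir, master):
    # The cron job: fold pending hit files into the master count,
    # then delete them.
    pending = glob.glob(os.path.join(hits_dir, "hit-*"))
    total = int(open(master).read()) if os.path.exists(master) else 0
    total += len(pending)
    with open(master, "w") as f:
        f.write(str(total))
    for p in pending:
        os.remove(p)
    return total
```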

Online now was even easier: an iframe refresh within the homepage. On each refresh, populate a file with something like "online=true". Read the file every X seconds; if the file's modification time hasn't been refreshed within 30 seconds, nerf the text file and mark that user as offline. Pseudo-dynamic, but times were different then. I coded these in Perl back in the day for my RTCW clan, and oh how I miss it.

Being a webmaster was an art, an art that's now lost..


64 Player Depot. I always had a fondness for Tram. Played CAL Main. Good times. v43


> b) just write blindly to the file and pretend concurrency doesn't exist

This is likely what was done. Nobody really cares if a hit counter on a web page loses a few updates.


Given it's a single-threaded CPU, there's a fair chance the web server isn't concurrent anyway.

Linux got proper POSIX thread support with NPTL only in 2002.


The RaQ line used Apache which definitely supported concurrent requests. We're talking about 1999 here. Maybe this isn't as obvious to people today looking back, but concurrent request handling was absolutely required from the beginning: consider how slow your clients are! One single client on a slow dial-up modem can't be allowed to drag the whole site down.


It will be a forking web server. But the CPU scheduler can schedule process A while B is opening and reading the file, and again while B is increasing the counter, leaving multiple processes with "wrong" data racing for the writes.


As I remember it, you would flock the file. You didn’t care if another process blocked for the short period it was held. It is not like you were getting huge amounts of traffic.


It can be much simpler if you're willing to write a couple lines of assembler: use an 8-byte file that contains one 64-bit counter, mmap() it, mlock() it, and use atomic CPU instructions to increment and read it.


Genuine question: Does it have to be mlocked for this to work?


Great question: it does not! It just removes a potential source of stalls.

If the page isn't resident in the page cache, the thread(s) executing the atomic increment will take a page fault and be blocked until the file data is read from storage. The latency of the fault might be significant, but the counter will remain accurate.

There's an important caveat I should have added: the counter file might be very stale after a power failure. If you care about the counter integrity, you have to msync(MS_SYNC) periodically, and that's expensive. It might actually stall all threads interacting with the page, depending on the filesystem; that used to be true but I'm not certain it still is (see https://lwn.net/Articles/486311/). Where writeback is allowed to race with writes, whether you would be guaranteed the 8-byte value written back wasn't "torn" without explicitly blocking increments while syncing is also an interesting question if DMA is involved...


I am going to ask gpt 4 to implement this for me so I can see what it looks like. Amazing.


I have to go to dinner so this isn't finished, but this is most of the code to prove it works: https://gist.github.com/jcalvinowens/0d7a5c327d863fca7c84daa...


How did it go?


For a guest book, you don't need anything like that. A simple solution could be to create a file for each entry. It's append-only anyway.

For a view counter - yes, you can use a full blown database to concurrently increment a simple integer value; or, you could do it in a few lines of code by yourself too.


$ apt install redis-server

$ redis-cli INCR hit-count


Still a huge overkill, and you need to properly set up persistence if you use it this way.

If you know that you'll need a cannon anyway, then sure, go for it, but when all we deal with are flies there's no reason to go that far. You could easily set up this kind of website on servers so simple that getting redis running on them would inflate their complexity quite considerably.


I think you can avoid the lock and all else, if all you need is a counter: Open a file in append mode (O_APPEND) and then write a byte for each visit. To get the count take the file size.

Of course you have to make sure not to run out of disk space or hit the max file size of the OS/filesystem (2GB on a 32-bit system?)
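The trick works because appends with O_APPEND are atomic at the write(2) level, so concurrent hits can't overwrite each other. A Python sketch (the path is illustrative):

```python
import os

def record_hit(path):
    # O_APPEND: the kernel positions each write at the current end of
    # file atomically, so no two hits land on the same byte.
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, b"\x01")   # one byte per visit
    finally:
        os.close(fd)

def hit_count(path):
    # The count is simply the file size.
    return os.path.getsize(path) if os.path.exists(path) else 0
```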


Sqlite?? MS Access for the win!


You can do (c) in PHP with flock:

https://www.php.net/manual/en/function.flock.php


> You most likely need a WAL-mode SQLite database

Or redis.


I think you mean journal mode off. Synchronous off is nothing to do with concurrency.


d) write to a temp file and atomically move it onto the read path. you will miss counts, but nothing will break.
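A sketch of (d) in Python, assuming POSIX rename atomicity (names illustrative). Readers always see a complete value because the rename swaps the whole file in one step; racing writers may lose increments, but nothing corrupts:

```python
import os
import tempfile

def bump_counter(path):
    # Read the current value (missing file means zero)...
    try:
        with open(path) as f:
            count = int(f.read())
    except FileNotFoundError:
        count = 0
    count += 1
    # ...write the new value to a temp file in the same directory,
    # then atomically move it onto the read path.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    os.write(fd, str(count).encode())
    os.close(fd)
    os.replace(tmp, path)   # atomic rename on POSIX
    return count
```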


I feel like you could do this without a database using the Linux kernel inotify API. Every time a file is accessed trigger an increment. It’s too beautiful of a day outside to try this right now but an interesting brain teaser!


Pretty sure Linux didn't have inotify then.

Probably a CGI script updates a counter in a file.


> Pretty sure Linux didn't have inotify then.

Did FUSE exist yet? You could hack it up in userspace...

(Have the HTML on a FUSE filesystem, and every time it gets a read it updates the HTML)



Can confirm, that's how it was done. A simple text file.


What about dnotify? I think that predated inotify. If so, I think it has the ability to notify a process when a file was accessed, so could fill a similar role.


> It’s too beautiful of a day outside to try this right now

That's what laptops are for!


If you're just sitting outside, you may as well be inside.


My mood says otherwise.


Only if your laptop has some e-ink/e-paper tech screen really.


Sitting in the shade works for me, but yes, e-ink or STN works well (decades-old tech).


Would be flying with all the bots trying to hack Wordpress bugs


Geocities got reincarnated as https://glitch.com/ and it has many more features and you can clone others sites or starter packs.


I'd say https://neocities.org/ is the modern reincarnation of Geocities.


i think neocities would be the more direct analogue https://neocities.org/browse


> I wonder if kids today have similar thoughts

I'm sure that some do, and most don't. Most kids 30 years ago didn't have similar thoughts either. Few people are meant to be programmers, engineers, etc. Were you not shunned as a nerd for having an interest in how computers worked?


> Were you not shunned as a nerd for having an interest in how computers worked?

I remember those days. There was one table in the lunch room where the 10 or so kids who were into computers sat. I found out later talking to some other classmates that nobody else wanted to talk to us because they thought they weren't smart enough.


Christ, that there were other kids with an interest in computers is amazing. I sat on my own lmao.


I had a friend to sit with at lunch and talk to excitedly about computers, PC gaming, robots, military hardware, etc. This was in the 90s.


you guys had friends?


> Were you not shunned as a nerd for having an interest in how computers worked?

I certainly wasn’t in the 10s. Usually people would come to me and my nerd buddies for PC building advice, smartphone advice, and of course the occasional “can you pirate this for me” request ;)


I think the early-00s was when it started to shift from something stigmatized to something that a person could be into and still be cool—or, at least, not regarded as automatically weird. By 2010 the shift was basically done, and The Great Techbro Wave was washing over the industry.

It's been super weird to watch the common idea of a computer person shift from someone with bad skin and shit posture in a pit-stained button-up and thick-rimmed glasses and who rarely sees the sun, to a trim, decent-looking person who's into rock climbing and wears lots of new, clean clothes from Patagonia. It was a really fast transition, in hindsight.


Back when I was in college in the mid-00s and TheFacebook was cool, because it was still limited to people with .edu email addresses from top-tier and Boston-area schools, I remember a group named "I'm a Computer Science Major and I Shower Regularly." I joined it even though I probably didn't shower regularly enough.

(For those who didn't have access to TheFacebook in those days, joining groups was part of how we expressed our political views, interests, sense of humor, etc. Groups had a name and a photo and a list of members and often served little other purpose.)


Tech had long since gone mainstream by that point, though; as the barrier to entry was lowered significantly, it was no longer the hobby of nerds.


I would still say the people in Linux club were _nerds_ but the platonic ideal of “nerd” had shifted from what it was in the 80s/90s.


(I'm probably getting some details wrong here because this was all nearly 30 years ago, but anyway...)

The realtime hit counter and fascination therewith brought up a memory.

Back in '95 the regional dial-up ISP I used offered personal website hosting on one of their boxes (the standard http://www.isp.com/~username gig). I was very excited to put up a personal website. That was fun for awhile but soon I wanted to do more with it. I liked the look of hit counters, guest books, etc.

I became annoyed because this "CGI" thing I'd read about didn't work. I could FTP up Perl scripts named with ".cgi" filenames, however accessing them only resulted in seeing the Perl code.

I set up a webserver (NCSA httpd) on my Slackware box and started experimenting. I learned about .htaccess files and, more importantly, learned I could override the server's main httpd.conf with directives in my directory's .htaccess file.

A little experimenting with my ISP's web server turned up that, sure enough, they had not configured "AllowOverride None". Lo and behold I could enable server-side includes[0] in my directory-- including executing CGIs via SSIs!

Once I figured out how to chmod +x my scripts thru the FTP server I was in business. I had a hit counter! I had a script to check the referer (sic) and add a link in the footer back to the "main page" if you didn't "come from" there. I wrote out my own log files (because I didn't have access to the server's main log file) so I could see the IPs, user agent strings, and referers of visitors. (I also learned about file locking and parallel execution of the script by multiple requests occurring simultaneously. Fun!)

(The ISP didn't offer shell access but I figured out, pretty quickly, that I could write a rudimentary web shell with my SSI-based CGIs. I didn't do too much with that because I didn't want to get caught and I'd mostly gotten the desire to do "unauthorized pro-bono remote system administration" out of my system by that time...)

This little "site" my friend and I ran got listed on Yahoo (we submitted it... >smile<). I remember seeing a bunch of clients with reverse-lookups of the form "xxx.yahoo.com" that day.

Wired had recently done an article on Yahoo[1]. I saw a client named "srinija.yahoo.com" and remembered the article talked about a "Srinija Srinivasan" being an employee there (working on ontology). It was really exciting to see somebody named in a Wired article accessing my silly little site. (The most memorable client name from Yahoo was "ratbastard.yahoo.com", BTW. I'd love to know the backstory. Guess I already wrote about this once...[2])

[0] https://www.oreilly.com/openbook/cgi/ch05_01.html

[1] https://www.wired.com/1996/05/indexweb/

[2] https://news.ycombinator.com/item?id=16741431


Pretty sure the first guestbook/counter I wrote were in perl and wrote to a text file :D


There are (still) dozens of us!


Young kids today do not have any kind of thoughts about the workings of apps they use any more than they have thoughts about how their favorite shows or movies are made, or how their house was built, or how a car works.


Round here some of the teenagers show promise as tinkerers.

Reverse engineering the e-scooters so you don't have to pay is popular.

Old cars everywhere being rebuilt and modified.

One guy has added a small engine to his pushbike using some kind of gear wheel coupling.

They are making their own short films and fashion shoots now the sun is out. Let's face it, a smartphone is basically all you need to create (some kind of) media now.


DIY is huge in 2023. Just look at YouTube.


This is a very HN take on ‘kids these day’ complaints which seem to go back as far as human records do.

https://www.reddit.com/r/history/comments/7btv14/the_more_th...


Yelling at sky much?


If by young kids, you mean college students, hahaha.


Similar to how the upcoming WAL2 mode works in SQLite, you could do it atomically / "concurrency-safe" like this:

- Main count file.

- 2 "append" files.

======

- Append 'a' to append file 1.

- After some time, swap append file 1 with 2.

- Count the 'a's in the now-inactive append file 1 and add the total to the main count.

Then just repeat, swapping the append files.
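A toy single-process sketch of that swapping scheme in Python (all names made up; a real version would need the swap itself to be made safe against in-flight writers):

```python
import os

class SwappingCounter:
    """Hits append one byte to the active file; checkpoint() swaps the
    two append files, counts the drained one, and folds it into the
    main count file."""

    def __init__(self, main, appends):
        self.main, self.appends, self.active = main, list(appends), 0

    def hit(self):
        with open(self.appends[self.active], "a") as f:
            f.write("a")

    def checkpoint(self):
        # Swap: new hits go to the other append file from now on.
        drained, self.active = self.active, 1 - self.active
        path = self.appends[drained]
        n = os.path.getsize(path) if os.path.exists(path) else 0
        total = int(open(self.main).read()) if os.path.exists(self.main) else 0
        total += n
        with open(self.main, "w") as f:
            f.write(str(total))
        if os.path.exists(path):
            open(path, "w").close()   # truncate the drained file
        return total
```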


Why would it be more complicated today than back then? Just use Python, write the view counter into a text file and use javascript with polling (or websockets) for the realtime refresh.


you could do this on an ftp server, no need for webpack and databases


Except it's not, not really - it's hosted by Cloudflare which even requires you to enable JavaScript to pass.


I've got a pentium 90 I occasionally turn on and I'll reply again when I do here. Hosting stuff from it is wildly taxing. It's on my internal network and I proxy it through apache over a raspberry pi. But it does genuinely serve content to the proxy - which does not cache it.

The machine runs netbsd with apache and has 128MB of memory and, as a cheat I'll admit, uses a SD/IDE bridge device to go to an ATA/100 interface (my older compatible PATA drives were failing on it ... I think there's some shelf-life degradation on those things although I never actually looked it up).

But even on the 100Mbps NIC, the thing is unacceptably slow in serving pages. Maybe modern apache not being designed for 1994 hardware has something to do with it. I have some bullshit toy webserver I wrote, geez, 18 years ago; I wonder if it will be faster (https://github.com/kristopolous/apac) ... exciting things to look forward to after I bike home from this coffee shop.

That weird readme was some kind of pre-markdown markdown I had made and have long lost the interpreter for.

update: just tried compiling it. still works and serves pages. I like how I had SunOS support, lol. It's probably comically insecure so have fun I guess?

Here's the pentium 90 running apache: http://bootstra386.com/~hn/


it's worth noting, this is a public machine. You can log into it and even make your own account

It's wild to see what's actually slow that's imperceptible now, and what claims to be Intel 586 32-bit compatible but actually no longer is.


This was a cute blast from the past, thanks for sharing.


The server may be 24 years old and serving pages, but it's only serving them to the Cloudflare cache, which is kinda meh.


It's not, Cloudflare is just being used for rate limiting / ddos protection here.


Never needed that on my server that's now also a decade old laptop. At least, not for the kind of attention HN#1 gets you (perhaps r/all #1 would be different). I can also imagine it being useful if you run a site that attracts controversy, or if your livelihood depends on it but you're not big enough to have your own datacenter and people want to extort that. Neither is really the case for me, I guess people don't put enough crap on my unfiltered file upload service for me to need big brother protection


I think it's a reasonably good precaution to take with a 24 year old server. I'd imagine on hardware that old you could easily get DOSed by a single mean client (or a web scraper bot with bad behavior), so a ddos isn't even needed.

Also this is available on Cloudflare's free plan so it's much safer to take the precaution in case you might need it down the line, rather than get taken down and have to fiddle with setting up Cloudflare on the spot.


You can also kill my server with a single client. Nobody has for the ~15 years that I've hosted on old laptops now. I've run game servers, a wiki mirror, file upload sites, a Tor exit node, torrent seeding, recently a VM image for a malware analysis course, all sorts of random tools and scripts, you name it; various different audiences but most with some technical know-how, yet nobody has felt the need.

People have messed with things and found bugs (so far always reported more-or-less ethically), and lots of scanners go across the Internet daily, but I've never seen a deliberate take-down effort. (Kind of wondering whether I'm calling that upon myself now, but so be it. Let's see what happens.)

This fear of having to react to a DoS attack by knocking on big brother's door and thus preemptively knocking, it's so anti self hosting mentality, but is also pervasive throughout the self hosting community, I really don't understand it.


The chance that someone wants to connect to your ancient server from an ancient client that can't pass Cloudflare's DDOS protection is probably way higher than someone wanting to DDOS it. (For example, most people may not realize that Cloudflare DDOS basically makes the website inaccessible via TOR for many people.)

Preemptive protection against attacks that never come makes the internet worse for everyone.


This old chestnut again, really? You can whitelist Tor without any issues on CloudFlare. It's one of the first things most people do. Read the documentation.


And service owners go out of their way to check the CF settings and opt in to the darknet... how often exactly?

It's implied with CF that you block people they can't track and prove to be innocent. Hence this old chestnut still existing: it's no joke.


It's gotten somewhat better now that CF is using proof-of-work DDOS protection vs the older captcha bullshit, but if you browse the web on Tor you will find LARGE numbers of sites where CF blocks you.

(Props to HN, it works over tor)


You can even serve up websites via the Tor network instead of using exit nodes: https://developers.cloudflare.com/support/firewall/learn-mor...

If users choose to use a non-standard method to access a service that's not actively supported by the service provider, that's on the user.


How can you tell the difference between cloud flare caching and cloud flare ddos alone?


The "CF-Cache-Status: DYNAMIC" response header seems to indicate that the file was not cached.

https://developers.cloudflare.com/cache/about/default-cache-...

> Cloudflare does not consider the asset eligible to cache and your Cloudflare settings do not explicitly instruct Cloudflare to cache the asset. Instead, the asset was requested from the origin web server. Use Page Rules to implement custom caching options.


That's correct. The only cached resources on this page are the gifs, jpgs, pngs and favicon.

The html page and the hit counter (counter.pl) aren't cached.


I can't even pass with everything enabled (again). I've tried refreshing fifteen times now to try to hit it and I give up.


You cannot have nice things these days anymore, it seems :-/

Still happy to see this project online


It's using Cloudflare as a Cloudflare Tunnel. Still hosted on the Cobalt server.


I immediately close the browser tab every time I see the cloudflare spinner.

Very few websites are worth supporting the massive MITM that is cloudflare.


I just see "Enable JavaScript and cookies to continue" and leave.

You're not running arbitrary code on my machine without earning my trust first.


Unfortunately due to a huge volume of spam and bot attacks, we have to turn on maximum Cloudflare settings. Otherwise the server simply can't stay online with the current interest level.


Hey, site admin here. The server got overloaded! we'll try to reboot it and bring it back

  bash# w
  sh: fork: Resource temporarily unavailable


24 years is not actually THAT old...

The CPU (AMD K6-2 300 MHz) is a lot faster than a Raspberry Pi 1 for many things, and it has 512MB of RAM, so for running a simple webpage it should still work well enough.

edit: especially with Cloudflare in front of it...


Yes, around that time (circa 1999) there was the matchbox web server at Stanford, with much lower capabilities (hardware-wise):

https://web.archive.org/web/19991128033948/http://wearables....

>The Matchbox Webserver, which is serving this and the other web pages to you, is a single-board AMD 486-SX computer with a 66 MHz CPU, 16 MB RAM, and 16 MB flash ROM, big enough to hold a useful amount of RedHat 5.2 Linux including the HTTP daemon that runs the web server.


Thanks for posting this, I thought it was long gone and forgotten. That server at one time was kept live on the Internet, and I showed that back in the day during a talk to demonstrate Linux capabilities. The page however says it was created in 1999, while I seem to recall it was about one year before, but the server is that one without doubt: I could never forget that photo showing it side by side with the matchbox.


Yes, first version was 1998, and was upgraded in 1999:

https://aruntechgeek.wordpress.com/2008/08/30/worlds-smalles...

AFAICR it stayed online several years.

It is to be noted how the project belonged to the "wearable" laboratory/section.

The related paper (a PostScript file inside a .gz archive) is still available:

http://boole.stanford.edu/iswc.ps.gz

here it is in .pdf format:

https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.18...

In 2000, Sergey Brin used it in a talk as an example of what could be done with mobile devices:

https://www.wired.com/2014/05/brin-mobile-2000/


Old GV opened both fine (even the gzipped PS).


There were micro Linux distributions released on floppies, such as BasicLinux and muLinux.


+1

the og pi is still fairly capable, and you can achieve a lot with it if you offload a lot of things or write more efficient systems.


* Guestbook: check

* Animated gif: check

* View counter: check

* Repeating background image: check

* Mobile? What's that? : check

Those were the days.


* Asking to bookmark because search engines weren't great

("We recommend that you bookmark this site so that it is easy to access in the future.")

check


You've missed "some text chasing the mouse pointer". I couldn't imagine a webpage without it.


* Under construction animated GIF: CHECK


Ah man, I always wanted a Cobalt RaQ server - they looked so cool! I seem to remember people reselling access to them and they had web guis that looked like them as well, I think a friend had one


So many comments and none about RaQ, really.

The Cobalt RaQ was one in a series of Cobalt 'server appliances'. These appliances were and remain unique largely because of their design, including (1) industrial design incorporating round moulded transparent blue plastic elements and an LCD/button basic network config and status system; (2) hardware design which initially (for RaQ v1 + RaQ v2 + Qube v1 + Qube v2) was based upon MIPS processors, a rarity in commercial Linux products. This server is unfortunately a RaQ v3, so AMD K6 (x86) based rather than MIPS. (3) Software design, which incorporated a complex (IIRC Perl 5 based) software module upload function which allowed for commercial sale of new software packages, easily installed on the devices with web configuration.

One of my first jobs circa 2000 was writing a VPN module for the Cobalt devices in perl v5 which was sold by an Australian company. I gave away my hardware years ago.


When the '24 year old Linux server' is asking me to solve captchas for cloudflare...


Yeah its a shame and wasn't our intended way to get this out to the world.. unfortunately in modern times, botnets and script kiddies had other plans and this server was on the receiving end of a lot of crap after the last Youtube video where we announced it being online.


  raq.serialport.org
  Checking if the site connection is secure
  raq.serialport.org needs to review the security of your connection before proceeding.
  Why am I seeing this page?
Cloudflare's bullshit strikes again!

Here's a real server, hosting without a caching proxy, that's 30 years old:

http://elsie.zia.io/


Oh man, this brought back memories. I used to work on Cobalt web servers, I had a stack of them at home after they eventually ended up decommissioned and these were the basis (along with some Sun pizza boxes) for most of my home lab projects for many years.


same here. I had a raq3 for a little while and a Cobalt Qube (2? I think? it was a MIPS unit) and they were a lot of fun. I was sad when Sun bought up Cobalt, though. the units were discontinued soon after.


This is a great video covering the story on the company, who build this server and an early example of using Linux for web hosting!

https://www.youtube.com/watch?v=PJ6AvtV3Ya4


Thanks for mentioning, that is our video and this raq talked about here is one we restored :)


I remember the Cobalt Cube. It was used for one of the first public demonstrations of SELinux released by the NSA. They gave everyone on the internet root that was under a restrictive MLS policy. That little machine handled a large number of people ssh'ing into it none of whom believed they were really logged in as root. I regret never buying one of those little servers.


Surely hope that isn't running a 24-year old Linux distribution...


Is there actually a bug that's remotely exploitable in the TCP stack, if you drop everything besides bound tcp ports in iptables?

Any time I see a customer using an ssh or apache version from ten years ago (different from the kernel, but the kernel doesn't give you version disclosure), the list of bugs is all like "if they use RewriteRule, and the second URI parameter is echoed unfiltered into a Location header by the application code, and it's on Windows, then you can do something actually interesting". The standard services' standard feature set is annoyingly boring when outdated... most of the time.

Predictable sequence numbers would be my first worry, but with TLS and SSH being the main protocols of relevance, it doesn't really matter if you can send off-path traffic into a TCP connection.


I assumed myself there must be a lot of root privilege vulnerabilities in Apache, but it appears there isn't a single one:

https://www.cvedetails.com/vulnerability-list.php?vendor_id=...

All the above seem to be in other Apache products. I think the fact that Apache doesn't run as root helps to mitigate these risks. Having said that, I have had a server compromised (about 20 years ago), through apache, suexec and a vulnerable cgi script, so I think it's best to be paranoid about security. I don't even see suexec on my current ubuntu 20.04 server (I used to just delete it).


Don't get me wrong, I don't mean to recommend running ancient software and not caring about security! It was rather out of curiosity: for the past 20-something years, which is no guarantee for the future (especially with software getting more complex than ever), would there be a blanket issue if you follow best practices in general (like dropping ports in iptables if you don't need them), or is it only specific circumstances like if you use a found-to-be-vulnerable function like for filename sanitisation? I don't know of any blanket linux/apache compromises off the top of my head, but there very well might be some.


From my perspective, there seems to be a lot more vulnerabilities found today than 20 years ago, so I don't think it's wise to have an unpatched 24 year old kernel or web server.



I had one of these for my first dedicated server. I kept up with the vendor's updates but it still was compromised. So I hope it wasn't running a 24-year old Linux distribution 24 years ago.


If it were, it could be any of these hot commodities: https://soft.lafibre.info/#year1999


Yep, looks like we finally killed it.


> Website hosted on a 24 year old Linux server

Isn't that kind of mundane? The web was well-established by 1999, and the hardware of that era was literally designed and built to serve websites like this.


> CPU: AMD K6-2 300 MHz

Ah, I was hoping this was a MIPS model


I used to run servers for a decade or more. Biggest fears: 1) the fan on the motherboard going out; 2) rebooting, if it hadn't rebooted in years, as I didn't trust the startup disk.



