IBM 100/10 Ethernet Adapter

In case anyone is struggling to figure out what to do with this card, here’s the deal. On eBay, there are cards labeled as the IBM 100/10 Ethernet Adapter, which is fairly odd, as 100 Mbit ISA cards aren’t too common. They look like this:

IBM 100/10 ISA Ethernet Adapter

These cards tend to come with part numbers like 25H3511 or maybe 25H3501. I’ve also seen them referred to as IBMFEI cards in some literature. I found a resource online that maps these cards to drivers, which can be found here. Ultimately, for DOS, Win 3.1, Win95, or OS/2 you’ll want to get hold of ETI1001.EXE (and optionally ETI1002.EXE if you’re using SCO). These are DOS executables that write a floppy image to a 1.44MB diskette, so have one ready.

Windows 2000 and CF Cards

I am currently in the midst of a retrocomputing kick, and one of my projects has been to get a Windows 2000 (Win2k) computer up and operational. While I have aimed for largely period-correct (1999-2001) parts, one area where I was not keen on going back to old technology was storage. Spinning disks are relatively slow, fairly prone to failure, and at least somewhat noisy, which left me with a few different choices, the first being a CF card adapter.

For those unfamiliar with the technology, CF cards are essentially IDE (PATA) devices in a small form factor, which makes them pretty ideal for retrocomputing. There are a number of adapters you can buy, such as StarTech’s CF to IDE adapter, that plug into the IDE port on your computer and present the card as just another IDE drive. In other words, you could use the adapter above plus, say, a 64GB CF card and suddenly have a solid state drive for about $50. I had actually used this combination on my Compaq Presario 466 (486DX2-66) with a 4GB CF card with no issues whatsoever, dual-booting PC-DOS 7 and OS/2 Warp 3. In fact, since the CF card is so easily swapped, I also had a second 2GB CF card with Win95 installed on it, so the computer essentially became a triple-booting Win95/DOS/OS/2 machine, which was pretty sweet. Knowing this, I moved on to my PIII-1GHz-based computer, applying the same thinking. Unfortunately it didn’t go quite as well.

First, my goal on the P3 was to dual-boot OS/2 Warp 4 and Windows 2000, so I used the adapter above plus a 64GB SanDisk card as my primary IDE HDD. I first booted with the OS/2 Warp 4 diskettes (modified to support large HDDs) and partitioned the drive. One note is that OS/2 doesn’t like to boot from partitions above 8GB (this is fixable with some additional fixes, but more than I wanted to deal with), so I created 5 partitions in total:

  • A 7MB OS/2 Boot Manager partition
  • A 5GB (C:) blank primary partition, whose goal was to eventually become the Windows 2000 partition
  • A 3GB (C:) HPFS primary partition for OS/2 Warp 4
  • An extended partition containing 2 logical drives
    • One of about 30GB (E:)
    • Another of about 24GB (F:)

The thinking behind the extended partition and logical drives was to build large data partitions for each OS, so I could get around any weirdness with the 8GB bootable mark. There are likely other, better ways of going about it, but this seemed reasonable.

I then went ahead and installed OS/2 without a hiccup; it was able to see all of the partitions and everything was going well. I then went back and attempted to install Windows 2000, which presented a whole new host of issues, the first being a rather odd error once the Windows 2000 CD (with SP2) began the install.

When the Windows 2000 installation process got to the point of identifying the drives available to the system, it would see all of the partitions on the drive without much issue. I would then select the partition I wanted to install on, which led to an error of:

"0x5, 0, 0, 0x2"

which was not particularly helpful. I ended up twiddling a BIOS setting to say that my OS was PnP compatible and retrying, which got rid of that error and let me complete the install. It seemed like I was finally on a good path.

Once Win2k was installed, I attempted to make use of the larger logical drive. First, Win2k didn’t identify it as a partition I could simply format, so I fired up the Computer Management / Storage snap-in and could see the partitions. Attempting to right-click/format or assign a drive letter would result in an error about the drive not being available. I thought this was pretty benign, and Win2k suggested a reboot, which I did, but that didn’t help at all. What was also curious is that the drive showed up as removable. Long story short, Win2k apparently has an arbitrary limitation that removable drives can only have one partition available on them, so when you do things like attempt to delete the logical drive through the storage manager, it actually deletes the entire partition table and you are left with what is essentially a blank drive. This resulted in several re-installations of Win2k and OS/2 while I was troubleshooting the issue.

What I then discovered was that CF cards can present themselves as either removable or fixed, but finding fixed cards is nearly impossible in this day and age – they’re sometimes referred to as industrial CF – and they tend to be very expensive. Years ago SanDisk offered a utility that would modify some bits on the card to turn a removable card into a fixed one, but the software is exceedingly difficult to locate and, even if you could find it, it doesn’t work on modern CF cards anyway. So your options for using a CF card as a fixed drive on a Win2k computer are pretty limited: use one large partition or find an old-school industrial CF card. Rumor has it there are some CF adapters with an actual controller on them that translates the card and presents it to the host as fixed, but they also seem pretty rare.
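If you want to check how a particular card identifies itself before building a system around it, the removable-media flag is easy to read from a modern Linux box. Here’s a minimal sketch; the device name is just an example and the sysfs path assumes a reasonably recent kernel:

#!/usr/bin/env python3
# Minimal sketch: ask the Linux kernel whether a block device reports itself
# as removable. The device name below (sdb) is only an example -- substitute
# whatever node your CF reader or adapter shows up as.
from pathlib import Path

def is_removable(device: str) -> bool:
    """True if the kernel's removable-media flag is set for the device."""
    flag = Path(f"/sys/block/{device}/removable").read_text().strip()
    return flag == "1"

if __name__ == "__main__":
    dev = "sdb"  # hypothetical device node for the CF card
    state = "removable" if is_removable(dev) else "fixed"
    print(f"/dev/{dev} identifies as {state}")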

What I ended up settling on was an IDE to SATA adapter and a cheap 60GB ADATA SSD. I used the partitioning scheme above and everything worked just fine; both OS/2 and Win2k see the drive without issue. A cheap SSD and adapter cost roughly the same as the CF card setup; you simply lose the ability to easily swap cards out for other OSes and the portability of a small array of cards.

Unexpected Security Issues In The Cloud

Lots of new and interesting things happen when you move to the cloud that you didn’t expect to have to deal with before (e.g. how to automatically bootstrap auto-scaled instances). One area with a lot of complexity and uncertainty around it is certainly cloud security. As someone found out on OpenStack, there are things you rarely ever needed to think about before, like the RNG not being random enough and generating the same SSH key multiple times. Not that this couldn’t have happened outside of the cloud, but as you scale systems and spin up instances dozens or hundreds of times a day, problems with a small chance of occurring can suddenly start to appear at an alarming rate.
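One mitigation worth considering for Linux-based images is to hold off on generating host keys until the kernel’s entropy pool looks healthy. The sketch below is only illustrative: the threshold and timeout are arbitrary numbers I picked for the example, and the /proc interface assumes Linux.

#!/usr/bin/env python3
# Illustrative sketch (Linux only): wait for the kernel entropy pool to fill
# before generating SSH host keys on a freshly booted instance, instead of
# trusting whatever entropy happens to be there at first boot.
# The threshold and timeout values are arbitrary examples, not recommendations.
import time

ENTROPY_PATH = "/proc/sys/kernel/random/entropy_avail"
MIN_BITS = 1024
TIMEOUT_SECONDS = 300

def wait_for_entropy() -> bool:
    deadline = time.time() + TIMEOUT_SECONDS
    while time.time() < deadline:
        with open(ENTROPY_PATH) as f:
            if int(f.read().strip()) >= MIN_BITS:
                return True
        time.sleep(5)
    return False

if __name__ == "__main__":
    if wait_for_entropy():
        print("entropy pool looks healthy; generating host keys is safer now")
    else:
        print("entropy pool never filled; investigate before generating keys")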


Dreamhost’s Dreamobjects

Dreamhost offers a decent-looking object storage implementation called DreamObjects, powered by Ceph, which reminds me a little bit of EMC Atmos. What I didn’t immediately find while looking through their documentation was whether or not the data is ever synced to another datacenter, but I rather suspect it’s not. If that’s the case, even with their durability SLA of 99.99999%, it sits somewhere between S3’s standard durability SLA of 99.999999999% and their Reduced Redundancy Storage durability SLA of 99.99%. At the time of this post, Reduced Redundancy Storage costs $0.076 per GB-month in US East and Standard Storage costs $0.095. With DreamObjects at $0.07, it’s actually a pretty good deal, particularly for home users that want to play with an object store that offers an S3 API. Great as another place to store critical data.
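For anyone who wants to kick the tires, the S3 compatibility means the usual tooling works with just an endpoint override. Here’s a minimal sketch using boto3; the endpoint URL, bucket name, and credentials are placeholders I made up for the example, so check DreamHost’s documentation for the current endpoint.

#!/usr/bin/env python3
# Minimal sketch: talking to DreamObjects through its S3-compatible API
# using boto3. The endpoint URL, bucket name, and credentials below are
# placeholders/assumptions -- consult the DreamObjects docs for the real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.dreamhost.com",   # assumed endpoint
    aws_access_key_id="YOUR_DREAMOBJECTS_KEY",
    aws_secret_access_key="YOUR_DREAMOBJECTS_SECRET",
)

# Create a bucket and upload a small object, exactly as you would against S3.
s3.create_bucket(Bucket="my-backup-bucket")
s3.put_object(Bucket="my-backup-bucket", Key="hello.txt", Body=b"hello, ceph")

# List what's in the bucket.
for obj in s3.list_objects_v2(Bucket="my-backup-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])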

AWS Route53 and ELB Health Checking

In case it wasn’t obvious by now, AWS is going after Akamai with their latest release: Amazon Route 53 Adds Elastic Load Balancer Integration for DNS Failover

This fills a long-standing gap in the ability for companies to take advantage of true GSLB/GTM capabilities for high availability. Previously, the closest you could get was to instrument Route53 to use latency-based DNS, but that wasn’t well suited for building highly redundant active/active applications, generally forcing teams to choose between giving up that level of redundancy, going to Akamai and signing up for their GTM services, or implementing their own with something like F5’s GTM solution (though the latter is generally only an option for enterprises that already have multiple datacenters).
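In practice, wiring this up amounts to creating primary and secondary alias records that point at your ELBs and letting Route 53 evaluate the targets’ health. The sketch below uses boto3; the hosted zone IDs, domain, and ELB DNS names are placeholders I invented for illustration.

#!/usr/bin/env python3
# Minimal sketch: DNS failover between two ELBs using Route 53 alias records.
# All zone IDs, domain names, and ELB DNS names below are placeholders.
import boto3

route53 = boto3.client("route53")

def failover_record(role: str, elb_dns: str, elb_zone_id: str) -> dict:
    """Build a PRIMARY or SECONDARY failover alias record pointing at an ELB."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "A",
            "SetIdentifier": role.lower(),
            "Failover": role,                 # "PRIMARY" or "SECONDARY"
            "AliasTarget": {
                "HostedZoneId": elb_zone_id,  # the ELB's own hosted zone ID
                "DNSName": elb_dns,
                "EvaluateTargetHealth": True, # fail over when the ELB is unhealthy
            },
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE123",               # your Route 53 hosted zone
    ChangeBatch={
        "Changes": [
            failover_record("PRIMARY", "primary-elb-123.us-east-1.elb.amazonaws.com.", "ZELBZONEEAST"),
            failover_record("SECONDARY", "standby-elb-456.us-west-2.elb.amazonaws.com.", "ZELBZONEWEST"),
        ]
    },
)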

AWS’s CloudFront is already a compelling offering in the CDN space, so as AWS chips away at Akamai, I wonder what will be next? My hope is something along the lines of Kona, but more likely it will be something like application acceleration.