Saturday, June 9, 2018

Building An AWS Multi-region Serverless Application With A Single Lambda, Multi-master Database, And Deep Ping Healthchecks


Amazon published an excellent article on building a multi-region serverless application, which moves us closer to pagerless computing.

But Amazon's solution has multiple lambdas, depends on API Gateway, and has no backend.

This article shows how to start with that solution, and:
  • Modify it to have a single lambda.
  • Minimize use of API Gateway.
  • Connect to a DynamoDB backend that uses global tables (so the entire end-to-end stack fails over).
  • Check health with a tunable combination of shallow (front-end only) and deep (back-end) pings.
  • Keep almost the entire solution in the free tier.
There might be some controversy about the first two items, so let's start by addressing those:

Q: Why a single lambda?
  • We want the health check to be a reliable indicator that the application is working. If the health check hits its own lambda while the application runs on a constellation of N other lambdas, it could report that the health check lambda is working fine while the application is actually broken. (The terminology is a bit tricky here. There's actually a pool of lambda instances running, not just one lambda. Our point is that each instance is running the same code.)
  • We want to use the health check to keep the lambda warm, and hit the same lambda with our other calls (in this example, just "hello", but any number of other calls can hit the same lambda), so calls rarely encounter a cold start.
  • We want to reuse the database connection as much as possible (although establishing a connection to DynamoDB is fast anyway).
  • We don't want to have to manage N different lambda files, particularly with them hitting the same shared database, and needing much of the same code.
There are arguments for and against single lambdas, and much active discussion online. One of the cons is that a larger lambda takes longer to cold start, but our lambda really isn't very big (and we're not packaging very much with it).

For our particular use case, a single lambda works well.

Q: Why minimize use of API Gateway?
  • Because we only have one lambda, we can only have one handler (an AWS limitation). That drives us towards lambda proxy integration, which inherently minimizes the use of API Gateway.
  • Using lambda proxy integration makes it easy to change our client-side application and lambda implementation without having to go back in and add more configuration to API Gateway. It speeds up development.
There are arguments for and against lambda proxy integration, and much active discussion online. Security is one of the main cons. But in our use case, the client application needs almost exactly the same permissions for all operations, so we're not appreciably increasing our attack surface by having a single endpoint.
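To make the single-lambda, proxy-integration approach concrete, here is a minimal, hypothetical sketch of a handler that receives every route through lambda proxy integration and dispatches on the proxy path parameter. This is not our actual thelambda.js; the route names and messages are illustrative, but the response shape (statusCode, headers, JSON body) is what proxy integration requires.

```javascript
'use strict';

// Proxy integration requires this exact response shape.
function createResponse(statusCode, message) {
    return {
        statusCode: statusCode,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message: message })
    };
}

// Single central dispatcher: every route lands here, and we branch
// on the {proxy+} path parameter. In the deployed lambda this would
// be wired up as: exports.handler = handler;
function handler(event, context, callback) {
    const requestKind = (event.pathParameters || {}).proxy;
    let response;
    if (requestKind === 'hello') {
        response = createResponse(200, 'Shallow hello');
    } else if (requestKind === 'health') {
        response = createResponse(200, 'Healthy');
    } else {
        response = createResponse(400, 'INVALID REQUEST');
    }
    callback(null, response);
}
```

Because there is only one handler, adding a new operation means adding a branch, not configuring a new API Gateway resource.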

With that out of the way, let's get started.

Select Two Regions

Pick two regions that you will use throughout.

Because this is a multi-region failover solution, it probably doesn't make sense to pick regions on different continents.

We chose us-east-1 and us-west-2.

Create The Deep Ping Table In Both Regions

Select one of the regions.

Go to the DynamoDB service.

Select "Create Table".
  • Enter "prod.Hello" for the table name.
  • Enter "HelloKey" (type string) for the primary key.
  • Ignore the sort key.
Based on experience, we had to configure autoscaling to start scaling earlier, to account for scale-up time:
  • Under "Table settings" uncheck "Use default settings".
  • In "Autoscaling", "Read capacity":
    • Set "Target utilization" to 50%.
    • Set "Minimum provisioned capacity" to 1.
    • Uncheck "Apply same settings to global secondary indexes".
  • In Autoscaling, "Write capacity":
    • Check "Same settings as read".
Click "Create Table".

Wait for the table to be created.

Click "Global Tables".
  • Click "Enable streams".
  • Click "Add region".
  • Select your second region.
  • Click Continue.
  • Wait for the table in the other region to be created.
Go to Items.
  • Click "Create item".
  • Switch to Text view.
  • For the HelloKey, enter "Hello".
  • Add an attribute Hello, with text "Deep hello from " (note the trailing space).
  • It should look like this:
    {
      "HelloKey": "Hello",
      "Hello": "Deep hello from "
    }
Click Save.

Switch to the other region, and verify that your item propagated automatically.

  • Do not enable encryption. If you do that, the deep pings will exceed the number of Key Management Service requests in the free tier.
  • You may be tempted to enable Point-in-time recovery. Don't bother. It doesn't work for global tables. (Neither does optimistic locking.)
  • You may see an alarm similar to "Consumed read capacity < 0.3 for 15 minutes TargetTracking-table/prod.Hello-AlarmLow-some-uuid-string". Ignore this. The alarms are supposed to help you avoid overprovisioning, but it's not possible to provision less than 1 capacity unit, so this warning is pedantic. This has been reported to AWS (they filed a bug).
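To sketch what a deep ping does with this item: it reads the prod.Hello row and appends the current region to the stored prefix (which is why the trailing space matters). In this illustrative sketch the DynamoDB read is injected as a callback so the logic is self-contained; the real lambda would use the AWS SDK's DynamoDB DocumentClient with the same table name and key.

```javascript
'use strict';

// `readItem` stands in for a DynamoDB DocumentClient.get call.
// The params object matches the table created above.
function deepPingMessage(readItem, region, callback) {
    readItem({ TableName: 'prod.Hello', Key: { HelloKey: 'Hello' } },
        (err, data) => {
            if (err || !data || !data.Item) {
                return callback(new Error('Deep ping failed'));
            }
            // data.Item.Hello is "Deep hello from " (note the trailing space).
            callback(null, data.Item.Hello + region);
        });
}
```

If the read fails, the lambda would return a non-2xx status so the Route 53 health check marks the region unhealthy.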

Set Up The Front End

Go back to Amazon's article.

Git clone that article's git repo.

Follow the steps under Prerequisites.

When your buckets are created:
  • Go to Properties for each of them and enable encryption, AES-256.
  • Go to Management for each of them and set up an expire lifecycle rule, set to 1 day for everything. (There's no reason to keep temporary uploads.)
In helloworld-api, move helloworld-sam.yaml to a backup.

Download the replacement yaml to helloworld-api.

Note: There is a commented-out section in the downloaded yaml that shows how to set up permissions for a table you write to, not just read from. That might come in handy later (but not in this article).

Download thelambda.js to helloworld-api.

Read through thelambda.js to get a feel for how it works (there are a lot of comments).

In thelambda.js, follow the instructions for generating a random string, and use it to replace <PUT YOUR UNIQUE ID HERE>.

Continue Amazon's blog where it says "You can only use SAM from the AWS CLI, so do the following from the command prompt" (execute the two sets of bash commands documented there).

Go to Cloudwatch, Log Groups in the console and, for both regions, set "Expire Events After" to 1 day (the health checks generate a lot of logs).

Note: This only deletes log events--log streams are kept (even though they are empty). You should probably periodically clear out the empty log streams. AWS has filed an enhancement request to also delete the log streams.

Configure the endpoints to be regional as shown in Amazon's blog.

You should see a different API Gateway from what is shown in Amazon's blog.

Note that there is no /helloworld after /prod in the invoke URL.

Note: AWS SAM is what creates the stage Stage. CloudFormation doesn't do that. It's a known issue; supposedly you can fix it by changing the template to use resource type "AWS::ApiGateway::RestApi" instead of "AWS::Serverless::Api". We left it as is, because it works, but it might be interesting to convert the template.

Where Amazon says to test with curl, replace the curl command with (adjusting the region if you didn't use us-east-1):
curl "https://<yourinternaldomain>/prod/health?healthCheckerId=<PUT YOUR UNIQUE ID HERE>"
You should see:
{"message":"Shallow hello from us-east-1"}
{"message":"Deep hello from us-east-1"}
depending on the deep-ping threshold percentage (you can keep executing the command until you see it flip from shallow to deep or vice-versa).

curl "https://<yourotherinternaldomain>/prod/health?healthCheckerId=<PUT YOUR UNIQUE ID HERE>"
You should see:
{"message":"Shallow hello from us-west-2"}
{"message":"Deep hello from us-west-2"}
(again adjusting the region if you didn't use us-west-2).
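The shallow/deep flip comes from the tunable deep-ping threshold. As an illustrative sketch (we're assuming an environment variable named DEEP_PING_PERCENT here; the actual variable name in thelambda.js may differ), the decision is just a percentage roll, with the random source injected so it can be tested deterministically:

```javascript
'use strict';

// Decide whether this health check should be a deep ping (hits
// DynamoDB) or a shallow ping (front end only). deepPingPercent is
// 0-100; rng is injectable for deterministic testing and defaults
// to Math.random.
function pingKind(deepPingPercent, rng) {
    const roll = (rng || Math.random)() * 100;
    return roll < deepPingPercent ? 'deep' : 'shallow';
}
```

In the lambda this would be called as something like `pingKind(Number(process.env.DEEP_PING_PERCENT))`, letting you tune how much back-end load the health checks generate without redeploying.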

Continue Amazon's blog, with the "Create the custom domain name" section, but be careful in the dialog for "New Custom Domain Name" to set the base path mapping destination to "thelambda" instead of "multiregion-hello".

Continue Amazon's blog, with the "Deploy Route 53 setup", but once you complete that section, go to Route 53 in the console and edit both health checks so their path is:
prod/health?healthCheckerId=<PUT YOUR UNIQUE ID HERE>
Note: This manual step wouldn't be necessary if the health check path could be set up to work from the template.

In both of the health checks, add an alarm that sends you email.

Optional: Our application is specific to the United States, so we didn't see the point of beating on it from areas outside the U.S. You can configure health checks to run from fewer locations by going to "Advanced configuration", "Health checker regions", "Customize", and deleting ones you don't want (down to a minimum of three).

Note: Your health checks will wind up in us-east-1, because that's where all health checks live (according to AWS support). Route 53 is global, and has no notion of what region a health check is for (and it can check the health of URLs not in AWS).

Continue Amazon's blog, with the "Using the Rest API from server-side applications" section, but change the curl URL to:
https://hellowordapi.<replacewithyourcompanyname>.com/v1/prod/hello?healthCheckerId=<PUT YOUR UNIQUE ID HERE>
Continue Amazon's blog, with the "Testing failover of Rest API in browser" section, but change client.js to:
    url: 'https://hellowordapi.<replacewithyourcompanyname>.com/v1/prod/hello',
    data: {
        "healthCheckerId": "<PUT YOUR UNIQUE ID HERE>"
    },
    dataType: "json",
When Amazon's blog says to set the environment variable STATUS to fail in the Lambda console, you'll see that we called it FORCE_FAIL, and you set it to true.
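When FORCE_FAIL is set to true, the health handler needs to report unhealthy so Route 53 fails over. A sketch of what that guard might look like (the FORCE_FAIL name and "true" value come from our setup; the handler structure here is illustrative):

```javascript
'use strict';

// Health response, honoring the FORCE_FAIL kill switch. Environment
// variables are strings, so we compare against the literal 'true'.
// In the lambda this would be called as healthStatus(process.env).
function healthStatus(env) {
    if (env.FORCE_FAIL === 'true') {
        return {
            statusCode: 500,
            body: JSON.stringify({ message: 'Forced failure' })
        };
    }
    return {
        statusCode: 200,
        body: JSON.stringify({ message: 'Healthy' })
    };
}
```

Route 53 sees the 500, marks the region unhealthy, and shifts traffic to the other region; flipping the variable back restores health.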

Verify that you receive an email for the failover.

Continue through the rest of Amazon's blog.


Updating The Displayed Region

The Amazon blog says: "During an emulated failure like this, the browser might take some additional time to switch over due to connection keep-alive functionality". We haven't been able to get the browser to update. It seems to just cache the call, even if we go to developer tools and disable caching. But if we relaunch the browser, it displays the failed-over-to region. It would be great if someone could figure out how to reliably make the browser display fail over automatically, because it would be a much more compelling demo.

Spurious Health Checks Warning

If you look at the health checks that are mapped to the regional endpoints, you may notice a warning: "The selected health check specifies the endpoint by domain name. Confirm that the name of this resource record set isn’t the same as the domain name in the associated health check. If the names match, health checking won’t work correctly." That warning is spurious. It should only display that warning if the domains are the same. This has been reported to AWS (they agree it's a bug).


Lambda Execution Times

Review the Cloudwatch logs to get a feel for how fast your lambda executes. You'll see three basic timings: sub-millisecond executions (presumably shallow pings), executions of roughly 80-150 ms (which we think are deep pings), and the occasional longer delay (which we think is a cold start).

Running Lambdas From Console

To run thelambda from the Lambda console, you need to supply the path parameter and the healthCheckerId. To do that, go to the Test dropdown, select "Configure test events", and enter:
  "pathParameters": {
    "proxy": "health"
  "queryStringParameters": {
    "healthCheckerId": "<PUT YOUR UNIQUE ID HERE>"
  "pathParameters": {
    "proxy": "health"
  "healthCheckerId": "


Experimenting With Environment Variables

You can experiment with different settings by going to the Lambda console and changing the environment variables. For example, you can increase or decrease the deep-ping percentage.

Incomplete Requests When Running Lambdas From Console

When running lambdas directly from the Lambda console, keep in mind that the request event doesn't have the full content of a real API Gateway proxy event, because the request didn't come through API Gateway.
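Because console invocations lack the full API Gateway envelope, the lambda has to treat pathParameters and queryStringParameters as possibly missing. A hypothetical sketch of that defensive extraction (function and field defaults are our own illustration):

```javascript
'use strict';

// Pull routing information out of a possibly partial proxy event,
// such as one hand-typed in the Lambda console's test dialog.
// Missing sections become nulls instead of TypeErrors.
function extractRequest(event) {
    const path = (event && event.pathParameters) || {};
    const query = (event && event.queryStringParameters) || {};
    return {
        requestKind: path.proxy || null,
        healthCheckerId: query.healthCheckerId || null
    };
}
```

With this in place, an incomplete console event produces a clean "INVALID REQUEST" response instead of an unhandled exception.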

Restricting Callers By IP Address

There's another way to restrict healthcheck calls: by IP address. You could set an address-based policy that only allows health checkers, and you, to call the health and hello endpoints. That would get rid of the need for the health check ID, which would simplify things a lot.

However, while you might be able to specify origin addresses for yourself that are stable, you can't do that for health checkers, because the addresses change.

It's possible to find out the current values from AWS's published ip-ranges.json file, and you can subscribe to an SNS topic "AmazonIpSpaceChanged" to be notified whenever there is a change to the AWS IP address ranges, so it would be possible to write some tooling that updated your whitelisting and redeployed.

But why does it have to be so difficult? AWS could add a principal for "healthchecker", and whitelist those, internally maintaining whatever information they need to keep track of the healthcheck IP addresses. They filed an enhancement request.

Paid Support

We got stuck on parts of this project, being entirely new to AWS and using this configuration to learn. While there was a lot of information online, sometimes our questions required talking with an expert (and a few times we demoed bugs we found). If you find yourself in this situation, sign up for paid support. If you don't need to screen share, you can probably get by with Developer. We wound up signing up for Business. It's only $100/month, they respond very quickly, the engineers who respond are experts, and you can shut it off once you're up to speed.

Opportunities For AWS Improvement

Single Handler Per Lambda

The biggest pain point for us is the restriction that there can only be one handler on a lambda. If that weren't the case, we could have used API Gateway routing directly, without lambda proxy integration, so we wouldn't have needed a central dispatcher:
    if (utilsService.isNullOrEmpty(requestKind)) {
        response = createResponse(400, "INVALID REQUEST (missing requestKind)!!!");
    } else if (requestKind.indexOf("INVALID REQUEST (event requestKind ") === 0) {
        response = createResponse(400, requestKind);
    } else if (requestKind === "hello") {
        if (!healthCheckerIdMatch(event)) {
            response = createResponse(500, "Failed");
        } else {
            response = helloHandler(event, context);
        }
    } else if (requestKind === "health") {
        if (!healthCheckerIdMatch(event)) {
            response = createResponse(500, "Failed");
        } else {
            doCallBack = false;
            healthHandler(event, context, callback);
        }
    }
    // TODO: Add more else/ifs for your application's functions, using something other than the healthCheckerId for authentication.
Each of the request kinds would just have been separate handlers on the shared lambda.

Unsecured Health Check Endpoints

The second biggest pain point was having to use a cumbersome shared secret to protect the health and hello endpoints from malicious callers. Being able to whitelist health checkers as principals, plus our IP addresses, would have cleaned that up a lot.

Almost But Not Quite In Free Tier

We don't want to sound like ingrates, because AWS gives individuals access to billions of dollars of infrastructure for a few bucks a month, but... it would be great if the entire failover solution fit in the free tier. That way, anyone could start out knowing that their application was going to be secure, available, and performant before writing a single line of their application-specific code, and they would only be charged for their application. It's not a huge ask--pretty much the only costs in the configuration come from Route 53 (for string-matching healthchecks), and API Gateway.

Incomplete Scriptability

The Amazon article ends with "The setup was fully scripted using CloudFormation, the AWS Serverless Application Model (SAM), and the AWS CLI", but that's not really true. Much of it is, but parts aren't. For example, having to set endpoints to regional. We don't know if that was just the author not knowing how to do it through SAM, or if it's a gap. Ideally the whole configuration could be created from only templates, with no manual intervention.

Little Gaps In Managed Services

Even managed services can have little misses on the part of AWS. For example, deleting CloudWatch log events but not the log streams that contain them. Ideally every managed service could be configured to be completely automated, including garbage collection.

Productize Entire Configurations As Packaged Offerings

The opportunities listed above are tactical--they're minor improvements that would streamline things and make them a bit easier. But the biggest opportunity we see is for AWS to embrace the cross-region failover configuration as a first-class citizen. Imagine if it was a fully worked out bundled solution that could simply be ordered up from the console (or just from a web page). Lightsail on steroids. Up pops a wizard that asks for a domain name, regions, backend database (which pretty much is limited currently to DynamoDB, pending Aurora multi-master GA), and a few other pieces of information, then some time passes and boom, here is a complete failover solution, with a place for the developer to add their code and schema.

We want AWS to be a super-reliable, super-fast, super-available, planet-wide app server that requires no care or feeding on our part.

The Promise Of Pagerless Computing

Serverless computing is all the rage, as well it should be, but serverless is a how.

Pagerless is the why.

Maybe you're a developer who really enjoys being on-call, yanked out of sleep at 3:46 am to heroically deal with some kind of production crisis. Maybe you really like configuring CIDR blocks and subdomains and bastion hosts, and subscribing to security alerts, and keeping your machine images up to the latest patch levels, and paying for licenses, and etc. Maybe you actually miss buying physical hardware and installing it at the "co-lo".

If you are that kind of developer, AWS would love to hire you. You'll fit right in!

But for the rest of us, that's all undifferentiated heavy lifting. Yes, it's critically necessary, but there's nothing application-specific in any of it. We can't even use it to make our application stand out relative to other applications in terms of uptime, security, performance, etc., because these days users just expect stuff to work. The only way to stand out operationally is by screwing up.

Unless devops is a core competency, it's irresponsible not to outsource it to an army of technicians in white lab coats who specialize in this stuff.

Amazon didn't always understand this, but they learned: "It became obvious that developers strongly preferred simplicity to fine-grained control as they voted "with their feet" and adopted cloud-based AWS solutions, like Amazon S3 and Amazon SimpleDB, over Dynamo. Dynamo might have been the best technology in the world at the time but it was still software you had to run yourself. And nobody wanted to learn how to do that if they didn't have to. Ultimately, developers wanted a service."


Instead of going "The developers aren't doing it right", Amazon went "Huh, that's weird, why are they doing that?", and learned from the answer.

Today, an individual developer can set up a cross-region multi-master HA/DR system in AWS for a couple bucks a month. It would have cost millions and millions of dollars to do that 20 years ago. Many reasonably large companies couldn't have pulled it off. It's the democratization of operational excellence.

With everything managed, developers only need to be paged when their software--the part where they are the experts--goes insane.

A lazy programmer is a good programmer.

See also:

Monday, February 24, 2014

Comparison Of Two Kickstarter-funded Sous Vide Devices

Sous Vide

Sous vide cooking requires heating a tub of water to a precise temperature and holding it there over a long period of time (often several hours), putting food in plastic bags, evacuating the air from the bags (typically), and dropping the bags in the water for the desired period of time. Often this is followed by removing the food from the bags and normal-cooking it for a brief period (for example, to add a nice charred/seared finish).

There are two basic approaches for keeping temperature constant:
  1. A water bath with heating elements underneath (and sometimes also on the sides) that relies on convection to produce an even temperature throughout. One vendor of these calls them "water ovens".
  2. An immersion circulator that provides heat, plus actively circulates the water to ensure even temperature.
Both approaches use accurate thermometers and control devices (typically PIDs) to avoid thermal undershoot and overshoot. Both approaches have safety features such as low-level sensors, etc.

Advocates for immersion circulators have demonstrated unwanted temperature gradients in water ovens, by dropping a lot of frozen food in the water at once. Water-oven advocates claim this was an unfair test, at least for home use, because the quantity of frozen food was unrealistic. They may or may not have a point.

The one undeniable drawback to the water-oven approach is the size of the machine. In a restaurant, this is perhaps not a concern, but in a house it actually matters. The leading model of water ovens is as big as a bread maker, and for casual home use could be a boat anchor if it's not used all the time (which is true come to think of it of bread makers too). In contrast, an immersion circulator can be clipped into any large pot the home cook already owns, and can be stored in a small volume when not in use.

So, we wanted to buy an immersion circulator instead of a water oven.

Early sous vide experimenters repurposed expensive lab equipment, and at least one lab-equipment vendor (PolyScience) was smart enough to realize they had a new sales channel, and start making purpose-built immersion circulators.

But they were still expensive.

Note: PolyScience sells lower-priced immersion circulators. But we didn't know about them at the time, or perhaps they were recently introduced. We only knew about immersion circulators costing $700 or more.


While we were looking for an affordable model, Nomiku did a Kickstarter for a lower-cost (but still high-quality) immersion circulator.

We invested in one.

Then Sansaire did a Kickstarter for another immersion circulator.

We invested in one of those as well, in case the Nomiku failed to deliver, or wasn't any good.

(Note that both projects were funded quickly and greatly exceeded their funding goals. There was obviously pent-up demand for such a product.)


Having now received both units and tried them out, here is our evaluation:

Nomiku:
  • Smaller (width and height)
  • Quieter
  • More expensive
  • Cover is recommended to keep heat in, and pre-notched cover is not available from vendor
  • Slower to reach desired temperature
  • Smaller range of water levels
  • Harder to clean inside
Sansaire:
  • Bigger (width and height)
  • Louder (but still not very loud)
  • Less expensive
  • No cover needed
  • Reaches desired temperature quickly
  • Larger range of water levels
  • Easier to clean inside
  • Some burrs on mounting clip (they're working on improving this)
Both:
  • Well-designed
  • Well-manufactured
  • Well-documented
  • Well-packaged
  • Excellent customer support
  • Prompt shipping
  • Hold constant temperature correctly with little or no variance once equilibrium is reached
We haven't used either unit long enough to comment on long-term reliability. At some point I'll post an update.


If space is at a premium and/or if you plan to only cook small amounts, get the Nomiku. Otherwise, get the Sansaire.


Despite showing metal pots on their websites, both vendors recommend using Cambro polycarbonate tubs (12 quart for the Nomiku, 4.75 gallon for the Sansaire). The tub recommended for the Sansaire has ridges on the underside that provide some insulation from cold countertops.

Although not required for the Sansaire, a cover will reduce evaporation and hold in heat (which reduces energy cost, but does not affect the ability of the Sansaire to keep constant temperature). No pre-notched cover is available for the Sansaire (same as for the Nomiku), and cutting one would be problematic for a Sansaire because the hole would need to be some distance from the edge, producing a long void that would leak heat. If you want a cover effect with the Sansaire, you can use floating plastic balls, which are available in bulk (case of 1000, 20mm polypropylene floating spheres). These are also sold by PolyScience, in smaller quantities, for a lot more money.

(PolyScience sells tubs and pre-cut covers, in various sizes, for their immersion circulators. It would be helpful if Nomiku did the same.)


Side-by-side comparison of Sansaire and Nomiku, in recommended tubs:

Nomiku cover, courtesy of my brother. The hole was cut with a 2+1/4" hole saw in a drill. The notch connecting the edge to the hole was cut with a fine-toothed hacksaw. The edges were lightly sanded; when googling for advice on how to cut polycarbonate, we found posts recommending lightly running a butane torch flame over the cuts, which will (they claim) melt them gently into a smooth finish.

If you screw up, you can get another top from Amazon for about $9.

There is a similar review on another site.

Inexpensive DIY Wine-preservation System

Wine goes bad pretty quickly when exposed to air, which wouldn't be a problem if every opened bottle was finished right away, but sometimes a bottle is only partially consumed.

To address this, there are many solutions available: google for wine preservation system.

The various solutions fall into one of these categories:
  1. Replace the bottle with a bag that can be squeezed so the wine comes up to the top, leaving no air to react.
  2. Displace the wine up to the top of the bottle by pouring clean inert beads into the bottle.
  3. Pour the wine into a smaller bottle, and have a set of bottles of various sizes.
  4. Remove the air from the bottle (by creating a partial vacuum in the bottle).
  5. Replace the air in the bottle with an inert gas.
  1. Bags work but turn an aesthetically pleasing bottle of wine into an ugly plastic sack. And the sack is difficult to clean and dry. Plus I don't really like the idea of wine sitting in contact with plastic for days.
  2. Beads work but are hard to clean and then dry (a colander is good for cleaning, but drying is a pain, because water sticks in the voids between the beads).
  3. Smaller bottles don't work as well as beads, because the step sizes between bottles are larger than the volume of one bead. And the bottles are hard to clean and dry.
  4. Creating a partial vacuum can diminish/change the flavor of the wine, probably because it draws off useful volatiles (for example, dissolved CO2).
  5. Using an inert gas works very well. The step size is the size of a molecule, and when the bottle is opened the gas disperses and there is nothing to clean or dry.
You can buy an inert-gas system. They cost a lot of money, and typically even if the base unit isn't outright expensive, the replacement gas canisters are expensive (including, oddly, Hazmat fees for shipping the bottles, even though the gas is harmless). They wouldn't need to be expensive, but, like giving away the handle and charging for the blades, that's how the inert-gas-wine-preservation-system vendors make their money.

You can instead create an inexpensive inert-gas solution entirely from off-the-shelf parts. You don't need to buy anything fancy.
  1. Call your local industrial-gas supply shop. For example, Airgas. They are pretty much everywhere, because welders, food systems, labs, etc. need various gases.
  2. Ask for their smallest tank of argon. I asked for food-grade argon, and they didn't have it in the smallest bottle size. But when I asked what the "contaminants" are in regular argon, they said it was just air, and the percentage is very low. They also said that's what wineries use (they don't need food-grade). Don't worry about it.
  3. While at the gas supplier, also get a regulator. Show them the photo below so they know what kind of regulator you need. Ask them to install and test it. Ask them to show you how to use it (hint: you need to turn the big valve on top of the tank first, then the little valve on the top left of the assembly in the photo).
  4. Pay for the regulator, the tank rental, and the argon.
  5. Take the tank/regulator assembly and an empty wine bottle to a local hardware store with a good selection of small pipes and adapters, and ask them to make something that looks like the photo.

The idea is for the narrow end to fit into the neck of the wine bottle, with room around the nozzle for air to escape as you fill the bottle. You might consider adding a small hose to the end of the fitting, to poke down into the bottle.

When filling a bottle with argon, open the small valve the least possible amount, to inject argon into the bottle as gently as possible.

When you feel you've replaced the air with argon as much as possible, quickly remove the nozzle, shut off the small valve (firmly, but don't overtighten), put the cork (or some other stopper) in the bottle, and then shut off the big valve (again firmly, but don't overtighten).

Even the smallest bottle of argon lasts for ages--not much gas is used per wine bottle. We're still on our initial purchase of argon.

When you finally run out of argon, take the bottle/regulator assembly back to the supplier, and get a replacement bottle. (They just swap bottles, don't refill the one you have). Ask them to install and test the regulator on the new bottle.

Note: It's a good idea to attach a chain, wire, rope, strap, etc. to a wall and around some part of the argon tank (so it doesn't tip over and snap off the top, which is a rare occurrence but quite spectacular when it happens).

Caveat: Wineries have big tanks and special piping/techniques for sparging (filling the headspace with inert gas). When you manually squirt some gas into a wine bottle, even taking care not to create turbulence, you can't achieve the gentle laminar flow wineries achieve, so you probably aren't displacing all of the air so much as diluting the air. But that's still better than nothing, and in practice this has worked well for me.

Saturday, July 21, 2012

Well-designed, Easily Assembled, Reasonably Priced, Adjustable-height Desk

There are lots of companies selling adjustable-height desks, but they tend to cost nearly a thousand dollars, or more.

I finally found a frame that works and doesn't cost so much. If you already have a desk, you have a desktop, and can probably reuse it, so all you need is the frame. (If you don't have a top, they sell complete desks too.)

The frame breaks down into a fairly small, dense package. It only took about an hour to assemble (not counting idiotic rework because a certain hamfisted idiot flipped the top the wrong way the first time).

The mechanism is metal on nylon bushings, and operates smoothly.

The engineering and manufacturing are first-rate.

A very good value.

My only complaint is that the lowest height of the desk is still too high for a normal-height woman, and there's no way to shorten the legs because the mechanism is integral.

Saturday, July 14, 2012

Building A Completely Silent PC

Under load, a PC fan becomes a distraction while listening to music with quiet passages, coding, etc.

So I built a couple of completely silent PCs. (The optical drive makes noise, but I only used it to load the OS.)

If you want to do the same, you'll need to follow steps similar to these:
  1. Get a fanless-PC chassis. These are also called "media PCs", and they're silent so they can be used in home theaters without distraction. There are several manufacturers. The best price/performance ratio seems to be Streacom. You can get them from Perfect Home Theater, and from Quiet PC. Both vendors are a pleasure to work with. The prices were better at Perfect Home Theater, but he was out of stock in silver, so I wound up getting them from Quiet PC, and then got the accessories from him. The FC8 chassis I used has the smallest footprint, but requires an external power supply. For our offices, there wasn't space on the rack for a flatter, wider chassis like the FC5, FC9, or FC10. Also, I wanted front-panel USB sockets. Be careful to get fanless, because Streacom makes other models that look like the fanless versions, but aren't. You can get them with remote controls, which is useful for media PCs, and useless for a regular PC.
  2. Get the necessary parts. You will need a motherboard, CPU, RAM, SSD, and, if you want an internal optical drive, one of the special slimline optical drives from Perfect Home Theater. I wanted a fast system, so I used an Intel DH67CF motherboard, Intel Core i7 3770S 3.1 GHz 4-core LGA 1155 CPU, Crucial 8 GB RAM, and Intel 520 180 GB SSD. Not being a gamer, I find the CPU's integrated audio and video perfectly adequate, so I didn't need any other cards. Make sure you select a motherboard that is compatible with the chassis (Streacom lists compatible boards on their site--make sure you get one with SATA 6 Gb/s). The CPU I used is the fastest 65 W part available for a motherboard compatible with the FC8.
  3. Get some thermal paste. Selecting a paste feels like it takes longer than building the PC. I wound up using Prolimatech PRO-PK1-5G, which is available from Newegg. The paste makes a mess no matter how careful you try to be, so be sure to put down some plastic or layers of paper towels, and wear some throwaway plastic or latex gloves if you have them.
  4. Follow the detailed and very helpful instructions on the Perfect Home Theater site. The two most-important pieces of information are the FC8 manual, and the connection map. Make sure you connect the SSD to a SATA 6 socket.
The only tools needed are two screwdrivers (small and really small), small wire cutters (if you want to cut off the floppy power pigtail), and a small crescent wrench (if you want to tighten the power socket more than finger tight, although finger tight seems pretty tight already). Magnetic screwdrivers are very helpful.

There wasn't a lot of room between the micro-PSU and the right heatpipe, so I (gently!) bent the heatpipe up a bit, and cut off the Molex socket that faces into the case (because an identical socket on the other side of the micro-PSU faces away from the heatpipe).

I also cut off (carefully!) the power pigtail for a floppy drive, to remove a bit of clutter from the interior.

There are a number of small screws--get a bowl to put them in so they don't disappear.

It took about three hours to build the first one, due to fumbling around and learning how everything connects. The second took under an hour. (Those times do not include how long it took to load and configure the software.)

Sunday, June 17, 2012

Flossing, Seatbelts, And Dynamic Type Checking

"It's all fun and games until someone loses an eye."

Researchers, many of whom were really, really smart, deduced that you should floss, and wear seatbelts, and eat a balanced diet.

In software, similarly intelligent researchers determined that static typing was like flossing and the wearing of seatbelts: a very minor inconvenience that saves you a lot of trouble later on.

Now the trend is increasingly towards dynamic languages that discover type mismatches at runtime.

You know who does that discovering?

Your users.

The only reason your users aren't already at your gates with torches and pitchforks is that browsers turn off JavaScript errors by default.

Is this really the best we can do? The argument against static type checking boils down to "I'm a very careful driver", which is what every driver thinks right up until they get in an accident.

Languages with static type checking allow programmers to opt out (even Ada has unchecked conversion). That's analogous to allowing passengers to not put on their seatbelts if they're crazy enough not to want to wear them. JavaScript doesn't allow programmers to opt into static type checking. That's analogous to a driver taking all of the seatbelts out of the car, even those for the passengers, including children.
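To make the seatbelt analogy concrete, here's a small TypeScript sketch (TypeScript appeared shortly after this post was written; the function names and numbers are my own illustration, not from the post). The same mistake is a compile-time error in the typed version and silent nonsense at runtime in the untyped one:

```typescript
// With type annotations, passing a string where a number belongs is a
// compile-time error -- the mistake never ships to users.
function addTax(price: number, tax: number): number {
  return price + tax;
}

// addTax("100", 8);
// ^ error TS2345: Argument of type 'string' is not assignable to
//   parameter of type 'number'.

// The "I'm a very careful driver" version: `any` opts out of checking,
// so the same call compiles fine and quietly concatenates strings at runtime.
const untypedAddTax = (price: any, tax: any) => price + tax;

console.log(addTax(100, 8));          // 108
console.log(untypedAddTax("100", 8)); // "1008" -- discovered by your users
```

The `any` parameters are exactly the "taking the seatbelts out of the car" move: nothing stops the bad call, and the coerced result looks plausible enough to reach production.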