Tuesday, May 20, 2014

Maker Faire 2014 Highlights: Electronics

RoboPeak showed off their $400 360-degree LIDAR unit with 6m range.  Not as polished as the $1150 Hokuyo, but about the same size and a whole lot cheaper.

They also had a 2.8” USB LCD touchscreen display for $35, which I couldn’t resist:

I almost bought an IFC6410 from Inforce’s booth.  It’s basically an Android phone main board in a pico-ITX form factor.  With a quad-core Snapdragon 600, 2GB RAM, WiFi, micro-HDMI, SATA, gigabit Ethernet and more, it was by far the fastest and most feature-rich of the proto boards I saw.  They paired it with Ytai Ben-Tsvi’s IOIO board to drive some robots, but I think of these boards more as hackable Androids.

Unfortunately, they didn’t have any boards to sell on-site.  The price was $60 or $75 depending on who I asked, then ballooned to $100 with shipping, while the website lists a “limited time promotion price” of $150 (but $75 with a promo code they mention in a blog post).  I was still game at $100, but after I filled out my shipping and email addresses, they handed me a 6-page license agreement to sign.

At that point I gave up.  It’s a shame, because this board has cutting-edge hardware, and I’d love to see makers get their hands on it, not just big companies working on next-gen phones in secret.

To me it’s a manifestation of the exploding maker and big-industry mobile spaces starting to intersect in earnest.  I’m excited for the big mobile players like Qualcomm to bring their cutting-edge hardware to the table, and I hope they can learn from the best of the makers by being transparent, consistent and plain-dealing.  Pick a price point, ditch the crazy EULA and multi-week lead time, and they’ll have a hot item on their hands.


One board I did manage to buy is this Mojo board from Embedded Micro, an Arduino-sized board with an ATmega32U4 and a Spartan-6 FPGA.  That board plus a shield with 32MB of SDRAM came in under $100.

I’m excited to play with it, although this $200 MicroZed looks more impressive, with more respectable CPU, RAM and storage specs.

The Good and the Bad of the Internet of Things

The Good

Where I really got excited was Seeed’s booth.  They have a line of tiny Arduino-compatible boards and accessories called Xadow.  Instead of Arduino’s standardized header pins, Xadow uses flex cables to connect their boards, allowing a lot more mounting configurations.
I bought one of basically every board they had on hand for a grand total of right around $100: a CPU board, battery, tiny OLED display, magnetometer, 9DOF IMU, vibrator, RTC and storage board.  Unfortunately, they were out of GPS and Bluetooth boards.

I’m really excited by these super tiny, feature-rich boards, especially as the wearables market heats up.

The other product is better known: spark.io.  It’s a small Arduino-compatible module with a WiFi chipset and a very clever technique for bootstrapping onto new WiFi networks: an app on your smartphone sends out UDP datagrams with encrypted data that’s opaque to the Spark module (since the keys to join the network are precisely what it lacks), but whose size encodes the information the Spark module needs.  The person at the booth described it as a sort of Morse code for WiFi.
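
Here’s a minimal sketch of the idea (my own illustration, not Spark’s actual protocol; the base length of 100 bytes and port 5555 are made up): each byte of the credentials goes out as a UDP datagram whose length encodes the byte, so a listener that can only see packet sizes can still recover the data.

// Length-encoded provisioning sketch.  The payload contents are
// irrelevant (and opaque to the listener); only the datagram size
// carries information.
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
  const char *secret = "my-wifi-password";
  int sock = socket(AF_INET, SOCK_DGRAM, 0);
  int one = 1;
  setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &one, sizeof(one));

  struct sockaddr_in dest = {0};
  dest.sin_family = AF_INET;
  dest.sin_port = htons(5555);
  dest.sin_addr.s_addr = htonl(INADDR_BROADCAST);

  char padding[512] = {0};
  for (size_t i = 0; i < strlen(secret); i++) {
    // Datagram length = 100 + byte value; the listener subtracts 100.
    sendto(sock, padding, 100 + (unsigned char)secret[i], 0,
           (struct sockaddr *)&dest, sizeof(dest));
    usleep(10000);  // pace the packets so they stay distinct
  }
  close(sock);
  return 0;
}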


They didn’t have any modules to give out due to overwhelming demand, but they’re starting to ship now as fast as they can make them.

The Bad

As much as I like the product, the Electric Imp looks like it’s on the bad side.  Their device is sort of a programmable Eye-Fi: a controller and WiFi chipset in an SD card form factor:

They have a clever approach to the problem of connecting these devices to a WiFi network: load an app on your phone, which flashes the screen to send the details to the Imp via a photodiode embedded in the back side of the card (see the sketch below).
Their director of sales, Peter Keyashian, explained their business model as a sort of turnkey Internet of Things: let them worry about the cloud service, microcontroller and wifi parts of a home appliance, and just keep building washing machines or dishwashers or whatever you’re good at.
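
I don’t know the details of their actual BlinkUp protocol, so here’s a purely hypothetical flavor of how screen-flash provisioning can work: short and long bright pulses encode 0 and 1 bits, and the card times the pulses on its photodiode.  This sketch just prints a made-up flash schedule; all the pulse widths are invented.

// Hypothetical screen-flash encoding (NOT Electric Imp's real protocol):
// a short bright pulse means 0, a long one means 1, with fixed dark gaps.
#include <stdio.h>
#include <string.h>

#define SHORT_MS 20  // bright pulse meaning "0" (made-up width)
#define LONG_MS  60  // bright pulse meaning "1" (made-up width)
#define GAP_MS   20  // dark gap between pulses

static void flash_byte(unsigned char b) {
  for (int bit = 7; bit >= 0; bit--) {
    int ms = ((b >> bit) & 1) ? LONG_MS : SHORT_MS;
    printf("bright %dms, dark %dms\n", ms, GAP_MS);
  }
}

int main(void) {
  const char *ssid = "MyNetwork";
  for (size_t i = 0; i < strlen(ssid); i++)
    flash_byte((unsigned char)ssid[i]);
  return 0;
}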

Peter graciously gave me an Imp and dev board for free, and I was excited to try it out, until I showed it to a friend of mine.  He pointed out that the device is locked to their cloud service, and that they’re planning on charging for it down the line.

So that pretty much kills the excitement for me.  I had assumed the board would come ready to interoperate with their cloud service, but that if that wasn’t the right solution for me, I’d be able to reflash it to do, well, whatever else I needed it to.  But that isn’t how they’ve chosen to do it.  Instead, they’re building a walled garden.


Maker Faire 2014 Highlights: Manufacturing

I stopped by the Full Spectrum Laser booth again this year.  People often ask me for recommendations on 3D printers, and I usually steer them toward lasers instead.  They’re super easy to use and cut a wide range of materials that are much cheaper than PLA filament.

Full Spectrum’s entry-level laser is $3500, far cheaper than the fancy Epilog lasers, so I’ve always figured that’s what I’d buy if I needed a laser.  Recently, though, a friend told me about hassles with the control software on a Full Spectrum laser.

I asked them about this at their booth, and they said that their latest “fifth generation” entry-level lasers use a fancy control board from their more expensive models and work a lot better than previous models.  So I’m cautiously optimistic, and still a big fan of lasers in general.



Looking Glass Factory slices up a 3D model and prints each slice on transparent film, then laminates the film together and embeds it in a solid block of plastic.  Here’s a 3d model of lower Manhattan they managed to scrape from Google Earth:


Imagineer James Durand and his wife (who’s a Mechanical Engineer at SpaceX, naturally) showed off James’s built-from-scratch blow-molding machine.  It heats polyethylene wax to 110C and injects it into a cooled metal mold, which solidifies the plastic touching the mold.  Then it blasts in compressed air to force out the remaining molten plastic, leaving a hollow shell.

They cut the molds on the CNC mill in their living room, and had entertaining stories about second degree burns from early prototypes which motivated them to build the clear plastic cover sooner rather than later.

I believe they used an Industruino for the controller.  The electronics were nicely mounted over bus bars near the bottom of the enclosure.

I can’t find any photos of the completed machine, but that’s partly because they finished it just in time for Maker Faire!

Just across the aisle was this $600 injection-molding machine that’s surprisingly simple.  A heated reservoir that attaches to the spindle of your drill press melts plastic pellets.  Clamp your mold underneath the nozzle at the bottom of the reservoir, then force the plastic into the mold by lowering the quill on the drill press.

I got to see an actual PocketNC and meet Matt and Michelle, the husband-and-wife team of Mechanical Engineers designing an ambitious and beautiful 5-axis CNC mill from scratch.  They both quit their jobs a few years back to pursue their dream, and I really wish them well.


Their biggest holdup at the moment is software.  They have a few options when it comes to software for translating G-code, the decades-old language universally used by CNC mills to describe where to go and when, into the pulses that advance the stepper motors.
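
To make the job concrete, here’s a toy version of the core conversion (my own sketch, with a made-up steps-per-mm constant and none of the acceleration planning real controllers do): turn one G1 move into a number of timed step pulses.

// Toy G-code-to-step-pulse translation for a single linear move.
// Real controllers handle acceleration, multi-axis interpolation,
// lookahead, etc.; this just shows the core conversion.
// STEPS_PER_MM is a made-up machine constant.
#include <stdio.h>
#include <math.h>

#define STEPS_PER_MM 80.0

int main(void) {
  // Pretend we just parsed: G1 X25.0 F600  (move to X=25mm at 600mm/min)
  double target_mm = 25.0, feed_mm_per_min = 600.0, current_mm = 0.0;

  double distance = target_mm - current_mm;
  long steps = lround(fabs(distance) * STEPS_PER_MM);
  double seconds = fabs(distance) / (feed_mm_per_min / 60.0);
  double seconds_per_step = seconds / steps;

  printf("G1 X%.1f F%.0f -> %ld steps, one every %.0f microseconds\n",
         target_mm, feed_mm_per_min, steps, seconds_per_step * 1e6);
  // A real controller would toggle a STEP pin here with that timing.
  return 0;
}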

LinuxCNC is the oldest and most mature option, but also the hardest to hack on -- even the build process was intimidating to me, and I’m a software guy.

GRBL is another option.  It’s small by necessity, since it has to compile to fit on ATmega-based Arduino boards.  I recently hacked Raspberry Pi support into GRBL, and I was impressed by the code and comment quality.  Unfortunately for PocketNC, GRBL is built around cartesian coordinates: linear X, Y and Z axes, all at right angles.  PocketNC, a 5-axis mill, has normal X, Y and Z axes, but also two rotary axes.
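
To see why rotary axes are more than bookkeeping, here’s a sketch of the extra math, assuming a simple trunnion-style geometry I made up (not PocketNC’s actual kinematics): once A and B rotations are in play, where the machine needs to be depends on sines and cosines, not just a linear steps-per-mm scaling.

// Why rotary axes break a cartesian-only controller: for a made-up
// trunnion geometry, a point on the workpiece at (px,py,pz) ends up at
// a machine-space position that depends on sin/cos of the A and B axes.
// A cartesian controller's "steps = mm * steps_per_mm" has no place to
// put this.
#include <stdio.h>
#include <math.h>

int main(void) {
  double px = 10.0, py = 0.0, pz = 5.0;  // point on the workpiece, mm
  double a = 30.0 * M_PI / 180.0;        // A axis: rotation about X
  double b = 45.0 * M_PI / 180.0;        // B axis: rotation about Z

  // Rotate about X by A...
  double y1 = py * cos(a) - pz * sin(a);
  double z1 = py * sin(a) + pz * cos(a);
  // ...then about Z by B.
  double x2 = px * cos(b) - y1 * sin(b);
  double y2 = px * sin(b) + y1 * cos(b);

  printf("workpiece (%.1f,%.1f,%.1f) -> machine (%.2f,%.2f,%.2f)\n",
         px, py, pz, x2, y2, z1);
  return 0;
}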

The third option is the Synthetos TinyG.  They had a booth with a pendulum demo like this one, showing off their third-derivative motion controller:

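“Third-derivative” means TinyG plans motion with controlled jerk -- the derivative of acceleration -- so velocity follows smooth S-curves instead of trapezoids, which is what keeps a dangling pendulum from swinging.  Here’s a minimal sketch of the idea (my own toy integration with made-up numbers, not TinyG’s planner):

// Constant-jerk S-curve sketch: integrate jerk -> accel -> velocity ->
// position over time.  All numbers are made up.
#include <stdio.h>

int main(void) {
  const double jerk = 2000.0;  // mm/s^3
  const double dt = 0.01;      // 10ms planning tick
  double accel = 0, vel = 0, pos = 0;

  // Ramp acceleration up for 0.25s, then back down for 0.25s, so
  // acceleration starts and ends at zero -- no jolt at either end.
  for (int tick = 0; tick < 50; tick++) {
    accel += (tick < 25 ? jerk : -jerk) * dt;
    vel += accel * dt;
    pos += vel * dt;
  }
  printf("after 0.5s: accel=%.1f mm/s^2, vel=%.1f mm/s, pos=%.2f mm\n",
         accel, vel, pos);
  return 0;
}
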
It’s not obvious to me which of these is best for PocketNC, but it is clear that we software guys should get our acts together so that we don’t hold up awesome projects like PocketNC.

Maker Faire 2014 Highlights: Miscellaneous

AeroTestra has a fully waterproof UAV with 20 minutes of flight time.  They’ve instrumented it with water quality sensors and measured local bodies of water for salinity.





TechGyrls is a collaboration among the YWCA of Silicon Valley, Intel and TechShop to create an after-school program just for girls aged 5 to 14.

GIGAmacro sells a gigapixel macro imaging rig for a few thousand dollars.  They put a DSLR camera on a 3-axis gantry the size of a Shopbot Buddy, then take hundreds of photos and merge them into a single wide-depth-of-field image using Zerene and AutoPano.  This is very much like Google’s Art Project scans of famous paintings, but with more Z-axis capacity.  So it’s a sort of combination large flatbed scanner and microscope.


This 70 pound 3d-printed vehicle is really quite impressive in person.  This isn’t just a cookie-cutter RC car sled:


Wednesday, May 07, 2014

Shiny spheres

My friend told me that many years ago, she lost points in an MIT art class for putting the highlight in the wrong place on a sphere.  So when she saw my last post about radiance, she asked me where the highlight on a shiny sphere actually belongs.

Turns out the answer is (-0.24,-0.25,-2.06).



Here's one where I moved the light source and just liked how it looked (before I added specular reflection):




// Projects a sphere onto the screen with ambient, diffuse and specular
// lighting, and prints the location of the specular highlight.
//
// Viewpoint is 0,0,0
// Screen is from -1,-1,-1 to 1,1,-1
// Sphere is at 0,0,-3 with radius 1
// Light source is at -1,-1,-1
//
// Surface of the sphere emits ambient light in red, specular reflection in green and
// diffuse reflection in blue.
//
// Compile with gcc -o sphere sphere.c -lm -lfreeimage
// and make sure you have the libfreeimage-dev package installed.

#include <stdio.h>
#include <math.h>
#include <FreeImage.h>
#include <stdlib.h>

#define IMAGE_WIDTH 1000
#define IMAGE_HEIGHT 1000
#define SAMPLE_DENSITY 300

FIBITMAP *image;

void setup_image() {
  FreeImage_Initialise(FALSE);
  image = FreeImage_Allocate(IMAGE_WIDTH, IMAGE_HEIGHT, 24, 0, 0, 0);
  if (image == NULL) {
    printf("Failed to allocate image.\n");
    exit(1);
  }
}

void setpixel(double x, double y, double r, double g, double b) {
  RGBQUAD color;
  const double brightness_boost = 40.0;

  int red = fmin(r * 255 * brightness_boost, 255);
  int green = fmin(g * 255 * brightness_boost, 255);
  int blue = fmin(b * 255 * brightness_boost, 255);

  color.rgbRed = red;
  color.rgbGreen = green;
  color.rgbBlue = blue;
  FreeImage_SetPixelColor(image, x * IMAGE_WIDTH + (IMAGE_WIDTH / 2),
                                 y * IMAGE_HEIGHT + (IMAGE_HEIGHT / 2), &color);
  //printf("%lf,%lf\n", x, y);
}

void teardown() {
  FreeImage_Save(FIF_PNG, image, "out.png", 0);
  FreeImage_DeInitialise();
}

int main(int argc, char **argv) {
  const double pi = 3.14159;

  const double sphere_z = -3;
  const double light_x = -1;
  const double light_y = -1;
  const double light_z = -1;

  const double light_brightness = 1;
  const double specular_reflection_exponent = 10;
  const double reflected_brightness = 500;
  const double ambient_brightness = 0.2;

  double latitude;
  const double latitude_elements = SAMPLE_DENSITY / 2;
  const double latitude_step = pi / latitude_elements;

  setup_image();

  // For MIT's art department, keep track of where the specular highlight looks brightest
  double max_reflected_value = 0;
  double highlight_x = 0;
  double highlight_y = 0;
  double highlight_z = 0;

  // Walk over the sphere and compute light from the source and to the viewpoint at each spot.
  for (latitude = 0; latitude < pi; latitude += latitude_step) {
    double y = cos(latitude);

    double longitude;
    const double radius = sin(latitude);
    const double circumference = 2 * pi * radius;
    const double longitude_elements_at_equator = SAMPLE_DENSITY;
    const double longitude_step = (2 * pi) / longitude_elements_at_equator;

    for (longitude = 0; longitude < pi; longitude += longitude_step) {
      double x = radius * cos(longitude);
      double z = radius * sin(longitude);
      // Now we have x,y,z relative to the sphere center.  To get absolute coordinates,
      // we just have to translate in z, since the sphere is at 0,0,sphere_z.
      double absolute_z = z + sphere_z;

      double distance_to_viewpoint_squared = x*x + y*y + absolute_z*absolute_z;
      double distance_to_viewpoint = sqrt(distance_to_viewpoint_squared);

      // Dot product is x1x2 + y1y2 + z1z2 and yields cosine of the angle between
      // them if they're both normalized.
      // But x1=x2 and y1=y2, and |x,y,z|=1, so we only need to normalize by
      // distance_to_viewpoint.
      // This dot product is between the viewpoint and surface normal
      double view_dot_product = (x*x + y*y + (absolute_z * z)) / distance_to_viewpoint;

      // Now we'll calculate the dot product for the light source.
      // First, the vector from the sphere surface to the light
      double lx = light_x - x;
      double ly = light_y - y;
      double lz = light_z - absolute_z;
      double light_distance_squared = lx*lx + ly*ly + lz*lz;
      double light_distance = sqrt(light_distance_squared);
      // Now we can compute cos(angle to the light)
      double light_dot_product = (lx*x + ly*y + lz*z) / light_distance;
      // Cosine law for intensity and inverse square law
      double light_intensity = fmax(light_dot_product, 0) / light_distance_squared;
      // Reflection of light vector is light - 2(light dot normal)*normal
      // (also normalizing here)
      double reflected_x = (lx - 2*light_dot_product*x) / light_distance;
      double reflected_y = (ly - 2*light_dot_product*y) / light_distance;
      double reflected_z = (lz - 2*light_dot_product*z) / light_distance;
      // Now we compute cos(angle between reflection and viewpoint)
      double reflected_dot_product = (reflected_x*x + reflected_y*y + reflected_z*absolute_z)
        / distance_to_viewpoint;
      reflected_dot_product = fmax(reflected_dot_product, 0);
      // Raise to a power based on how shiny the surface is
      double reflected_intensity = pow(reflected_dot_product, specular_reflection_exponent);
     
      // Lambert's Law and inverse square law to viewpoint
      double view_intensity = fabs(view_dot_product) / distance_to_viewpoint_squared;

      double light_value = light_brightness * light_intensity * view_intensity;
      double ambient_value = ambient_brightness * view_intensity;
      double reflected_value = reflected_brightness * view_intensity * reflected_intensity;

      //printf("light:%.4lf ambient:%.4f\n", light_value, ambient_value);

      // Perspective-project the surface point onto the screen for display.
      double projected_x = x / absolute_z;
      double projected_y = y / absolute_z;

      //printf("[lat=%.02lf lng=%.02lf (%.02lf,%.02lf,%.02lf) ", latitude, longitude, x, y, z);
      setpixel(projected_x, projected_y, ambient_value, reflected_value, light_value);

      if (reflected_value > max_reflected_value) {
        max_reflected_value = reflected_value;
        highlight_x = x;
        highlight_y = y;
        highlight_z = absolute_z;
      }
    }
  }

  printf("Brightest specular highlight was at (%.02f,%.02f,%.02f)\n",
      highlight_x, highlight_y, highlight_z);
  teardown();
  return 0;
}

Monday, May 05, 2014

Why are there only pi nits in a hemisphere? (Or, why do my reflected and incident light meters show a ratio of pi?)

I just spent about a week trying to sort out why people kept claiming that surfaces with a given radiance (or luminance) emit pi times as much light over a hemisphere, when there are in fact 2*pi steradians in a hemisphere.  Here's what I learned.  Special thanks to my friend Mike for helping me figure it out!

Quick overview of light units:

Radiance / Luminance (and their neighbors) are analogous to each other.  The radiance units weight photons based on their energy, while the luminance units weight photons based on human visual response.

Radiant flux (Watt) / luminous flux (Lumen): "I have a 1W LED"

Radiant intensity (Watt per steradian) / luminous intensity (Candela): "I have a 1W LED focused in a 33 degree cone"

Irradiance (Watt per square meter) / illuminance (Lux): "I put my 1W LED in a soft box whose front surface is 1m by 1m"

Radiance (Watt per square meter per steradian) / Luminance (Nit, or cd / m^2): "I measure 1uW of light being emitted from a 1 mm square patch on my soft box into a 33 degree cone perpendicular to its surface" BUT with a major catch (read on)
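
As a sanity check on those "33 degree cone" figures (assuming 33 degrees is the cone's half-angle): a cone of half-angle theta subtends a solid angle of 2*pi*(1 - cos(theta)) steradians, and 2*pi*(1 - cos(33 degrees)) = 2*pi*(1 - 0.839) = 1.01, so a 33 degree cone is almost exactly one steradian.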

Leading up to The Major Catch:

Incident light meters (the ones with a white dome) collect light from a broad area and report it as lux.  I think of it as "if I had an ideal 1 square meter solar cell, this is how many watts it'd put out (or how many photons it'd receive per second) given the current ambient lighting".

Reflected light meters (when you look through the 1 degree eyepiece) tell you how much light is coming off of a surface.  If you're shooting something with a dark color, you'd rather know how much light is coming off it than how much it's receiving, since you probably don't know its BRDF.  Reflected light meters report in nits, or candela per square meter, or to expand fully: lumens per steradian per square meter.

This Sekonic L-758Cine is the one I have.  (Its lower priced siblings, unfortunately, won't report in lux or cd/m^2, but only in terms of camera settings)


The Major Catch:

So intuitively, we should be able to use an incident light meter to measure the ambient light coming off, say, a wall, and get, say, 2*pi lux.  We could (wrongly) assume it's emitting equally in all directions, and since we know that there are 2 pi steradians in a hemisphere, say that the wall has luminance of 1 lumen per square meter per steradian.

That actually seems completely reasonable, so I'm not sure why I felt the need to check that experimentally, but I'm glad I did.  I turned on a big soft box and measured a tiny patch of it using the reflected light meter.  Then I switched to incident mode and put the dome right up next to the center of the box, expecting the lux to be 2*pi as large as the nits.

But that's not what I got.  In fact it was closer to a ratio of 2, but if I squinted just right I could get a ratio of pi.  But definitely not 2 pi.  I repeated the experiment on a nearby wall and got even closer to a ratio of pi.  WTF?

Searching online I came across quite a few instances of people saying that the ratio of illuminance to luminance is indeed pi for a flat Lambertian surface.  Now that really baked my noodle.  How could a surface emit a certain number of photons per second per steradian into a 2 pi steradian hemisphere and come up with anything other than 1/(2pi) per steradian?  And what's the deal with Lambertian surfaces?

Lambertian surfaces are surfaces that have the same brightness no matter what your angle to their normal is.  That means they're isotropic emitters, right?  Nope!  Look at a business card square on, then angle it away from you.  Your eye perceives it as having the same brightness in both cases, but in the second case, the card subtends a smaller angle in your vision even though you're still seeing the entire card.  If each photon were to leave the card in a uniformly random direction, you'd receive just as many photons from the card when you viewed it obliquely as you did when you viewed it straight on.  Since it's foreshortened when you view it from an angle, you'd get the same number of photons in a smaller solid angle, and the card would appear brighter and brighter (more photons per steradian) the more you angled the card.

So for the card to have the same brightness at any angle and thus qualify as a Lambertian surface, the card must emit less light the further away from perpendicular you get, to compensate for the fact that the card's solid angle is shrinking in your view.  That brightness reduction turns out to be cos(theta), where theta is the angle between your eye and the surface normal.


Page 28 of the Light Measurement Handbook has some good diagrams and explanation on this topic: http://irtel.uni-mannheim.de/lehre/seminar-psychophysik/artikel/Alex_Ryer_Light_Measurement_Handbook.pdf

It turns out that lots of things in nature are more or less Lambertian surfaces, which is pretty convenient for us -- it means things look about the same as we walk past them.  And it means we have an accurate sense of how bright something is regardless of what angle we're seeing it from.  It's interesting to imagine a world where most surfaces were very non-Lambertian and wonder how that might screw up us carbon-based lifeforms.  But I digress.

Imagine a 1m^2 patch of glowing Lambertian ground covered by a huge hemispherical dome.  The patch of dome directly above our glowing ground will receive the most light, and the perimeter (horizon) of the dome will get none, since it's looking at the ground on-edge.  When aimed straight up, let's say we get radiance of 1W per square meter per steradian, falling to zero at the horizon.  (Note that we could aim our reflected light meter at the patch from anywhere -- regardless of angle or distance -- and read the same value, thanks to the fact that the surface is Lambertian.  The reduced light as we go toward the horizon will be perfectly balanced out by the foreshortened view of the patch, so the meter will report a constant value of lumens per steradian per square meter).

Let's integrate over the dome to find the total emitted power.  We'll slice the dome with lines of latitude, so that the bottom slice is a ring touching the ground, with a slightly smaller diameter ring on top of it, going up toward the north pole as the rings get smaller and smaller.  Kind of like slicing up a loaf of bread and then putting the end of it heel-up on the table.


The point in the middle of our glowing patch of ground is the center of the hemisphere.  Let theta be the angle between the north pole and the circumference of a given ring.  Then the radius of that ring is sin(theta), and its circumference is 2*pi*sin(theta).

The width of the ring is d*theta, and the radiance of the light hitting the ring from the glowing patch is cos(theta).  Multiply them together and we get power radiated through the ring.  Add it up for all the rings and we capture all the power being emitted by the glowing patch:
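
In symbols, with the ring at angle theta subtending solid angle 2*pi*sin(theta) d(theta), weighted by the cos(theta) falloff:

\int_0^{\pi/2} \cos\theta \cdot 2\pi \sin\theta \, d\theta = \pi \int_0^{\pi/2} \sin(2\theta) \, d\theta = \pi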



So our Lambertian surface with radiance 1W / m^2 / sr emits a total of pi watts over a hemisphere, even though a hemisphere consists of 2 pi steradians, and even though we can measure that 1W / m^2 / sr radiance from any angle!  The reason that seeming paradox happens is that the constant radiance is a result of less light over a smaller angle.  The north pole sees the entire patch and thus gets the most light, whereas the horizon sees the patch from the edge and sees only a thin sliver of light.  Add it all up, and you get half as much light as if the dome were being uniformly lit everywhere.

Saturday, May 03, 2014

Nexus 4 USB tethering to Raspberry Pi is broken

I tried to get my raspi online by plugging it into my Nexus 4 via USB, then enabling USB tethering under Android's wireless settings dialog.  The 'usb0' interface appeared briefly with ifconfig, but dmesg revealed that the phone was connecting and disconnecting itself about once a second: lots of "usb disconnect" messages and stuff about unregister rndis_host.

My best guess is that it's something to do with broken USB OTG support, since the "charging" notification on the lock screen kept flickering on and off.  Super lame.

Switching over to a Nexus 7 tablet, USB tethering worked just as expected: plug it in, make sure USB tethering is enabled on the Android side, then run "sudo dhclient usb0" on the raspi side.