Bradford Embedded, by Andrew Bradford (http://www.bradfordembedded.com/, updated 2020-08-18T11:29:46-04:00)

<h1>Lock Screen on Suspend via systemd</h1>
<p>2020-08-18, http://www.bradfordembedded.com/2020/08/xscreensaver-lock-on-suspend</p>
<p>I use xscreensaver to lock my screen after a timeout or by key command from
within the awesome window manager. I also like to have my screen lock when I
suspend my laptop by closing the lid.</p>
<p>Since systemd now handles detecting lid closure and initiating the suspend, we
can tell systemd to have xscreensaver lock the screen before doing that, too.</p>
<p>Create a file at <code class="language-plaintext highlighter-rouge">/etc/systemd/system/xscreensaver_suspend@.service</code> with
contents like:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[Unit]
Description=xscreensaver suspend lock service
Before=sleep.target
[Service]
User=%I
Type=oneshot
Environment=DISPLAY=:0.0
ExecStart=/usr/bin/xscreensaver-command -lock
[Install]
WantedBy=sleep.target
</code></pre></div></div>
<p>Then enable this for your user (mine is “andrew”). You can run this as your
own user; you don’t need to be root or use sudo:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ systemctl enable xscreensaver_suspend@andrew.service
</code></pre></div></div>
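<p>The <code class="language-plaintext highlighter-rouge">@</code> in the unit’s filename makes it a template unit: when you enable
<code class="language-plaintext highlighter-rouge">xscreensaver_suspend@andrew.service</code>, systemd substitutes the instance name
“andrew” for every <code class="language-plaintext highlighter-rouge">%I</code>, which is how <code class="language-plaintext highlighter-rouge">User=%I</code> becomes <code class="language-plaintext highlighter-rouge">User=andrew</code>. A rough
illustration of that substitution (the <code class="language-plaintext highlighter-rouge">sed</code> call here is only a stand-in for
what systemd does internally):</p>

```shell
# Illustration only: mimic systemd's %I expansion for a template unit
# instance named "andrew", on a fragment of the unit file text.
instance="andrew"
printf '[Service]\nUser=%%I\n' | sed "s/%I/${instance}/"
```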
<p>Now when you suspend your laptop, xscreensaver should lock the screen first. If
you have xscreensaver set to fade the screen, the fading action may not complete
until after you resume your laptop, so your screen’s contents at suspend time may
be visible for a second or two upon resume. If this matters to you, disable
fading in the xscreensaver-demo interface.</p>
<h1>FTDI USB Serial Device Naming</h1>
<p>2020-07-16, http://www.bradfordembedded.com/2020/07/ftdi-naming</p>
<p>I have multiple of the exact same FTDI USB to serial interface cables attached
to my PC and I wanted to name them physically and within Linux. I solved the
physical part with a label maker but I also want the device names to match up.</p>
<p>To do this I created a special udev rule which names each interface based on its
USB serial number. Then when I plug in the “BOARD1” cable it will show up as
/dev/ttyBOARD1 consistently.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cat /etc/udev/rules.d/ftdi.rules
SUBSYSTEMS=="usb", KERNEL=="ttyUSB*", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="ABCD1239", SYMLINK+="ttyBOARD1"
</code></pre></div></div>
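<p>If you don’t know a cable’s serial number, <code class="language-plaintext highlighter-rouge">udevadm info -a -n /dev/ttyUSB0</code>
will show its <code class="language-plaintext highlighter-rouge">ATTRS{serial}</code> value, and after editing the rules file,
<code class="language-plaintext highlighter-rouge">udevadm control --reload-rules</code> plus replugging the cable picks up the change.
With several cables, generating the rule lines can be scripted; a small sketch
(the serials and names below are made-up examples, not my real cables):</p>

```shell
# Hypothetical helper: emit one udev rule line per FTDI cable, keyed on
# the cable's USB serial number (example serials and names only).
make_ftdi_rule() {
    serial="$1"
    name="$2"
    printf 'SUBSYSTEMS=="usb", KERNEL=="ttyUSB*", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="%s", SYMLINK+="tty%s"\n' \
        "${serial}" "${name}"
}

make_ftdi_rule ABCD1239 BOARD1
make_ftdi_rule ABCD1240 BOARD2
```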
<h1>iOS Keep Originals Photo & Video Pain</h1>
<p>2020-01-02, http://www.bradfordembedded.com/2020/01/ios-keep-originals</p>
<p>For Christmas I got a new-to-me iPhone 8. I’m glad I tried Android but I’m very
happy to be back on iOS. However, Apple has changed how the photos and videos
the phone takes are stored, and this causes some frustration.</p>
<p>The new default, since iOS 11 I think, is to store photos in the HEIC format and
videos in h.265 format. Prior to iOS 11, JPEG was used for photos and h.264 for
videos. Most everything can deal with JPEG and h.264 these days but some things
cannot deal with HEIC or h.265.</p>
<p>There’s a setting to revert to using JPEG and h.264 for storage, but it’s
confusingly named “Most Compatible” within the <code class="language-plaintext highlighter-rouge">Settings -> Camera -> Formats</code>
menu, and it selects both JPEG and h.264. There’s also a setting for
having the phone convert the HEIC and h.265 files into JPEG and h.264 files on
the fly when you transfer them to a PC called “Automatic” within
the <code class="language-plaintext highlighter-rouge">Settings -> Photos</code> menu, but it’s buggy as hell and often results in
“A device attached to the system is not functioning.” error messages when
transferring video files to a Windows PC.</p>
<p>My family’s Windows 8 laptop has no issue playing h.265 videos but apparently
has no method to deal with HEIC photos, especially not within Picasa which we
use for our photo library. Hence, JPEG is a hard requirement for us, but h.265
or h.264 are both fine.</p>
<p>Unfortunately, most of my Google results for the “A device attached to the system is
not functioning” error message took me to Apple support forums where the
solution is generally to reboot everything until it works, transferring as much
as you can each time, but no one ever really explains what exactly is going on.
And then people who get things working go on to complain that they can’t see any
of their photos, or that their videos lack actual video data and only the sound plays.
Figuring this all out took me much longer than it should have.</p>
<p>I’m all for newer photo and video encoding schemes if it means I get better
quality and take up less space, but the potential downsides are not clearly
explained anywhere, so people can’t understand the trade-offs that the default
settings make for them.</p>
<p>I’m happily now setting <code class="language-plaintext highlighter-rouge">Settings -> Camera -> Formats</code> to “Most Compatible” so
I can just have the file types which I know work well for my needs, even if the
quality is slightly lower and the storage space taken up is slightly higher.</p>
<h1>No Battery Backed Real Time Clock Linux Scripts</h1>
<p>2019-05-14, http://www.bradfordembedded.com/2019/05/nobbrtc-scripts</p>
<p>Sometimes on embedded Linux systems you want time to always move
forward but the cost of adding a battery backed real time clock is unacceptable.
In this kind of situation, I’ve found the following solution to be useful.</p>
<p>At shutdown we will write out a file to the “disk” (usually flash memory) with a
timestamp of the current time. At boot, after systemd moves the clock ahead to
its compile time but prior to starting any processes which need time to be
roughly right, and before any time sync begins (ntp, systemd-timesyncd, etc.), we
will restore the saved timestamp to be the system’s current time. We make sure
to store the timestamp in a format which will always have later times be larger
numbers so we can easily compare them. We don’t use the normal seconds since
the Unix epoch, as there might be interesting issues in about 20 years (the
year 2038 problem on 32-bit systems). This
makes the restore process slightly more complicated as it’s not a normal format
to store a Unix timestamp in but it’s much easier for humans to understand when
they look at the timestamp file.</p>
<p>My examples use systemd but you can do something similar with other init
systems, too. Just be sure to restore the time as early as you can in the boot
sequence and to save the time prior to unmounting your read/write flash memory
during shutdown.</p>
<p>There are two components to this scheme. The first is the systemd service
file which will run the script at startup and shutdown (replace with your own
mechanism for non-systemd init). The key here is that the ExecStart script may
exit with a non-zero result, such as when the current time is already ahead of
the saved timestamp, so we need to allow it to fail (the leading <code class="language-plaintext highlighter-rouge">-</code> on
ExecStart tells systemd to ignore a non-zero exit):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[Unit]
Description=Poor man's replacement for a battery backed real time clock
After=local-fs.target
Before=basic.target
Conflicts=shutdown.target
DefaultDependencies=false
[Service]
ExecStart=-/usr/sbin/nobbrtc.sh restore
ExecStop=/usr/sbin/nobbrtc.sh save
Type=oneshot
RemainAfterExit=yes
[Install]
WantedBy=basic.target
</code></pre></div></div>
<p>The second part is the script which actually does the saving and restoring of
the time. Store this as an executable script at /usr/sbin/nobbrtc.sh:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/sh
# Save/restore the clock to/from a timestamp file.

echo_usage() {
    echo
    echo "Usage: ${0} <save|restore>"
    echo "Save/restore the system clock to/from a file"
    echo
}

# Save the current time to /etc/lasttimestamp
do_save() {
    date -u +%4Y%2m%2d%2H%2M%2S > /etc/lasttimestamp
}

# Restore the time from /etc/lasttimestamp if it's later than the current time
do_restore() {
    if [ -s /etc/lasttimestamp ]; then
        read TIMESTAMP < /etc/lasttimestamp
    else
        exit 0
    fi
    SYSTEMDATE=`date -u +%4Y%2m%2d%2H%2M%2S`
    # If the timestamp is newer than now, update the time to its value
    if [ ${TIMESTAMP} -gt ${SYSTEMDATE} ]; then
        # Format the timestamp as date expects it (2m2d2H2M4Y.2S)
        TS_YR=${TIMESTAMP%??????????}
        TS_SEC=${TIMESTAMP#????????????}
        TS_FIRST12=${TIMESTAMP%??}
        TS_MIDDLE8=${TS_FIRST12#????}
        date -u ${TS_MIDDLE8}${TS_YR}.${TS_SEC}
    else
        exit 2
    fi
}

##########################
# Execution Starts Here! #
##########################

if [ -z "${1}" ]; then
    echo_usage
    exit 1
fi

case ${1} in
    save)
        do_save
        ;;
    restore)
        do_restore
        ;;
    *)
        echo_usage
        exit 1
        ;;
esac
</code></pre></div></div>
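<p>The parameter expansion slicing in do_restore is the least obvious part: it
carves the 14-digit YYYYMMDDHHMMSS stamp into the MMDDhhmmYYYY.SS form which
<code class="language-plaintext highlighter-rouge">date</code> accepts for setting the clock. Run on a sample value, it works out
like this:</p>

```shell
# Walk through the slicing from do_restore with a sample timestamp.
TIMESTAMP=20190514123456                # 2019-05-14 12:34:56 UTC
TS_YR=${TIMESTAMP%??????????}           # drop last 10 digits  -> 2019
TS_SEC=${TIMESTAMP#????????????}        # drop first 12 digits -> 56
TS_FIRST12=${TIMESTAMP%??}              # drop last 2 digits   -> 201905141234
TS_MIDDLE8=${TS_FIRST12#????}           # drop first 4 digits  -> 05141234
echo "${TS_MIDDLE8}${TS_YR}.${TS_SEC}"  # prints 051412342019.56
```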
<h1>Android Phone Desires</h1>
<p>2018-04-27, http://www.bradfordembedded.com/2018/04/android-desires</p>
<p>I’ve now had my LG G5 Android phone for about 5 months and I’m generally happy
with it, but there are a few things I’d like to change from a hardware perspective
in order to improve it. LG’s software and support sucks, plain and simple, but
that’s not the point of this post (why do I not even get quarterly security
updates!?!?!?!).</p>
<h2 id="camera">Camera</h2>
<p>The LG G5 has a 16 MP main camera, an 8 MP wide angle camera, and a selfie
camera. The cameras, including the default camera app, are just ‘meh.’</p>
<p>The 16 MP camera is OK but I actually prefer using my wife’s almost 5 year old
iPhone 5S camera. The images produced by my G5 look pretty good when taken
outdoors in sunlight but anything in low-light conditions is full of noise and
shoots with a very low shutter speed, so my pictures of my kids are almost
always blurry. The main camera has OIS (optical image stabilization) which
takes out my shakes when holding the camera and seems to have influenced the
design of the shutter speed to ISO choice algorithm used in the camera software
but it doesn’t prevent the kids from moving and so many of my pictures of my
kids are blurry, which sucks. The main camera also produces <strong>HUGE</strong> jpeg
files, due to what seems like excessive amounts of noise in the images. 5 to 9
MB jpeg images are normal with my G5, whereas I have excellent looking 24 MP
images from a Nikon SLR clocking in at 2-3 MB in size.</p>
<p>The wide angle lens is neat but I rarely use it. The distortion near the edges
is easily visible, it lacks OIS, and it is not needed.</p>
<p>The selfie cam works fine, but the camera app has this slider on it which will
“touch up” blemishes automatically. I’m sure this touch up feature was great in
all the reviews but it’s amazingly annoying, I know that I have pores, just take
my damn picture!</p>
<p>In my next phone, I will gladly trade off mega pixels to get less noise and
higher shutter speeds, especially in low light. 8 MP is plenty, heck I’d take 4
MP in low light if I could get 1/60 shutter speeds in typical indoor shots so
that I can avoid blurry kids.</p>
<p>Also, when I press the shutter button, take the damn picture! Why there’s
random unpredictable lag between when I press the shutter and when the actual
picture is taken is beyond me. My wife’s iPhone 5S is very consistent and
almost instantaneous about taking the picture when the shutter is pressed. This
shouldn’t be that hard but apparently LG is special.</p>
<h2 id="physical-size">Physical Size</h2>
<p>The 5.3” screen in 16:9 ratio is quite nice. I wouldn’t change it, but the
resolution is 2560x1440, which is higher than really needed at this size; a
1920x1080 resolution would be plenty. But after trying out a Google Pixel 2
with its only 5” screen, I definitely noticed the missing 0.3” (although I have
no idea how).</p>
<p>In my next phone, I’ll likely be looking for a screen between 5 and 6 inches.
Big enough to be easy to use but I don’t need a tablet in my pocket. 1080p
resolution is plenty, even on a 6” screen, going higher isn’t needed.</p>
<p>But the bezels around the screen, they need to be smaller. My G5 is just
slightly too big for me to do everything one-handed. If the bezels were slim
like the late 2017 to early 2018 phones are all sporting, with a 5.3” screen,
that would be a perfect size for me.</p>
<p>The G5 has an IPS LCD with an “always on” ability to always show the time
and notification icons. I really really like this! But it needs to be more
“always on” because the way LG implemented it uses the proximity sensor to
actually turn off the “always on” screen when the phone is face-down or in a
pocket. But LG’s implementation doesn’t react quickly enough to a change of
location for the phone, it takes a few seconds to realize I’ve pulled it out of
my pocket. If it reacted quicker, like sub-1 second, that would be much
appreciated.</p>
<h2 id="ram-and-flash">RAM and Flash</h2>
<p>The G5 has 4 GB of RAM and 32 GB of flash. I thought that this would be enough
since I could add an SD card to increase the flash. I have changed my mind.</p>
<p>I bought a 128 GB SD card as I figured that’d be plenty, and it will be. 128 GB
of SD card storage is more than I really need, even a week long trip to Disney
world only resulted in me taking about 20 GB of photos and videos. But 32 GB is
not nearly enough, I’d say right now somewhere between 64 and 128 GB of internal
flash is the sweet spot. Plus, the way Android seems to deal with SD cards and
permissions isn’t quite consistent with how it deals with permissions on
internal flash devices, which is annoying, so I’d prefer to simply have enough
internal flash (plus, SD cards are slow as hell compared to modern managed flash
which is integrated into the phones).</p>
<p>4 GB of RAM is OK for 95% of my needs. Only sometimes do I notice laggy
operation which is likely due to RAM usage. On my next phone I’d like more than
4 GB of RAM, definitely not less than that.</p>
<h2 id="durability">Durability</h2>
<p>I’d like to have some water resistance, more of a “just in case” thing than a
real feature.</p>
<p>I’ve not been using a screen protector on my G5, I’ve come to the opinion that
they are useless if the phone has a proper modern glass screen. The fancy new
glasses used on screens are hard as hell, adding a protector is just a waste.</p>
<p>But I do not like needing to buy a case. Just make the phone 1-2 mm thicker and
integrate decent drop protection into the phone itself. Integrated design of
drop protection will always be superior to some fat case but all the phone
makers seem to just want to make the slimmest phone which will have a huge
Otterbox wrapped around it. The Fairphone design or the apparent durability of
the LG V20 (go YouTube for V20 drop test videos) should be standard now so
people don’t need to buy stupid cases. To go along with this, stop making
phones so damn slippery!</p>
<p>I do like that I can replace the battery easily, but it doesn’t need to be as
easy as my G5 makes it. If I could just unscrew the back panel to replace the
battery or something that’d be fine, I’m only wanting to replace the battery
like once per year at most, I just don’t want to deal with sticky tape all over
the damn place to get the battery out.</p>
<h2 id="radios">Radios</h2>
<p>Dual chain Wi-Fi ac in 2.4 and 5 GHz is plenty. I don’t need more. A few
hundred Mb/s LTE is also plenty, going to Gb/s LTE will just let me rack up my
data use faster than I’d feel comfortable with. Pretty much every modern phone
has fast enough radios for my needs, I don’t care to improve them in my next
phone.</p>
<p>I do like having NFC, although I can’t say I actually use it for anything
useful, so I probably wouldn’t miss it if it went away. I don’t use the “tap to
pay” systems.</p>
<p>My car doesn’t have Bluetooth, but apparently most new cars do, and there’s the
aptx profile (or something) which makes Bluetooth audio sound less like shit, so
I’d like that on my next phone. But most phones seem to have this standard now.</p>
<h2 id="usb">USB</h2>
<p>My next phone must have a USB type C port with host and device and SuperSpeed
capabilities. It must also support USB PD (power delivery) charging, this
Qualcomm quick charge stuff is stupid, I want standards, order, and decorum
damnit! :)</p>
<p>It’d also be nice to be able to hook up an adapter to output HDMI/DisplayPort
video from my USB type C port, although I don’t find I’d use it that often
really.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Basically, I want Google’s Pixel 2 phone but with smaller top and bottom bezels,
a slightly more rugged mechanical design, and a slightly easier to replace
battery.</p>
<h1>Buying My First Android Phone</h1>
<p>2017-11-28, http://www.bradfordembedded.com/2017/11/android-buying</p>
<p>About a month ago I bought my first Android phone. I want to document my
reasoning behind why I bought the phone I did, so that in time I can review my
decision making and hopefully improve the next time I buy a phone.</p>
<p>I had previously used an Apple iPhone 5, my first smartphone (yeah, I know, I
was late to this smartphone party) but with the release of iOS 11, Apple no
longer would support the iPhone 5 with updates of any kind and the phone was
starting to feel very very slow. I had previously replaced the battery once,
which helped with battery life but there really isn’t any way to improve the
phone. I also simply wanted something newer and more capable, especially a
better camera.</p>
<p>My budget was to keep the purchase price under $300 for a brand new (not
refurbished or used) unlocked (from a carrier perspective) phone with a warranty
and a good camera. Buying a new phone with a warranty was important to me
because I cannot afford to throw away a few hundred bucks, so if the phone has
any issues then being able to immediately get the manufacturer to fix it for
free is important. I had bought my iPhone 5 refurbished, but not by
Apple, and the LTE never really worked correctly. My price point was $300
because that’s as high as I could convince my wife to let me go, but having a
reasonable price ceiling was good as it limited my choices so making a decision
was easier. Buying an unlocked phone was important to me because I’ve
previously switched carriers a few times in my life and not needing to buy a new
phone in order to make a carrier switch just seems like a reasonable thing to
expect in 2017. Having a good camera is important to me because I have small
kids and take a lot of pictures/movies of them.</p>
<p>Secondary goals were to get a phone which had expandable flash, an easy to
replace battery, an unlockable bootloader, waterproofing, and support from one
of the 3rd party ROM communities. All of the secondary goals relate to being
able to keep the phone for a long time without worrying about outgrowing it or
losing software update abilities.</p>
<p>I bought an <a href="http://www.lg.com/us/cell-phones/lg-RS988-Silver-g5-unlocked">LG G5, model RS988</a>. I paid $250 plus tax at B&H Photo
at the end of October 2017. Overall, I’m pretty happy with my decision.</p>
<p>The LG G5 is a 2016 flagship phone that was launched in the first quarter of 2016.
It has a Snapdragon 820 SoC (2x Cortex-A7x-like cores and 2x
Cortex-A53-like cores) which is decently powerful, 4 GB of RAM which is still
quite good in late 2017, 32 GB of flash, an SD card slot, and a fairly highly
rated camera for when it was released. The LG G5 originally came with Android
Marshmallow, was then upgraded to Android Nougat 7.0 in late 2016/early 2017,
and is expected to probably get Android Oreo soon.</p>
<p>The RS988 is carrier unlocked and has good support for the frequencies used by
Verizon, AT&T, and T-Mobile in the USA. When I first received the phone I used
it on <a href="https://ting.com/">Ting’s</a> T-Mobile MVNO network and it worked fine although service
at my house was less than stellar. I’ve since switched to <a href="https://www.att.com/prepaid/index.html">AT&T’s</a> prepaid
service and get very good service at my house but less than stellar service at
work. My lesson learned here is that mobile phone provider coverage maps lie
and one must actually test each provider’s coverage in the areas which matter to
you. Verizon, based on coworkers’ phones, has good coverage at my office but
my company is moving in the spring to a new building so I’ll likely stick with
AT&T for now. I really liked Ting, they have great customer service and decent
prices, but T-Mobile’s network is lacking where I live.</p>
<p>The RS988 has an easy to replace battery. You power off the phone, hold the top
securely in one hand while depressing a small button, and then slide the bottom
of the phone away and the battery comes with it. Pop the old battery out of the
bottom piece of the phone, insert new battery, and reassemble. The total
experience is very easy and takes only a minute or two. I am very excited by
this as in a year or two when the original battery starts to show reduced
capacity I can easily replace it without the use of any tiny screwdrivers,
plastic spudgers, or heat guns as are needed on 98% of phones today.</p>
<p>The RS988 has an unlockable bootloader. <a href="http://developer.lge.com/resource/mobile/RetrieveBootloader.dev">LG has a website</a>
and an official policy where you can request a bootloader unlock key so that you
can install any software you like onto the phone. I’ve requested and received
my bootloader unlock key but I haven’t yet actually unlocked the bootloader as
doing so will likely cause Netflix to stop working and possibly not allow me to
receive software updates from LG for the stock ROM.</p>
<p>Other LG G5 models have official support from <a href="https://wiki.lineageos.org/devices/">Lineage OS</a> and the
RS988 model is included in the build system for Lineage. Hopefully this just
means that no one has actually stepped up to take ownership of a Lineage RS988
build but I haven’t yet learned enough about AOSP to fully understand this.
Possibly this is an opportunity for me to continue to get software and security
updates for many years once LG drops support.</p>
<p>The one downside of buying a carrier unlocked phone that I’ve learned since
getting my G5 is that the fancy calling features like VoLTE (voice over LTE), HD
Voice (which may simply be VoLTE, I don’t really know), and Wi-Fi calling do not
generally work as the carriers block these services from working on
non-carrier-branded phones. This isn’t a huge deal except that if Wi-Fi calling
would have worked for me then I would have been able to stay with Ting as I have
decent enough Wi-Fi at home even with poor T-Mobile coverage so making calls
still would’ve worked fine. The VoLTE issue is likely only to present itself in
areas where a carrier only has LTE coverage and no 3G service, which is likely
fairly rare, but is something that T-Mobile is doing now apparently. I’m unsure
if Apple iPhones which are purchased directly from Apple unlocked have these
features be usable or not.</p>
<p>My transition to Android has gone fairly smoothly, I’m still able to sync my
calendar and contacts from iCloud using <a href="https://www.davdroid.com/">DAVdroid</a>, so my wife and I
still can share these while she’s still on her iPhone. I was able to replace
the stock LG launcher which tries very hard to be like iOS, but fails miserably,
with <a href="https://github.com/OpenLauncherTeam/openlauncher">OpenLauncher</a> which is pretty simple and works well. I
tried a bunch of different email clients and have found <a href="https://k9mail.github.io/">K-9 Mail</a> to
work really well for my needs. And I like that I can use Firefox for web
browsing (I’ve been a Firefox devotee since shortly after its 1.0 release, and a
Mozilla/Netscape user since I first got on the web). The G5’s camera is pretty
good, I actually really like the wide-angle abilities, although the camera app
is full of features which I don’t care about (the selfie cam can fix your
blemishes! For goodness sake…).</p>
<p>The biggest things I miss from iOS are:</p>
<ul>
<li>IPP Everywhere (AirPrint) printing, it just works on iOS but most printers
seem to need an app/plugin for Android.</li>
<li>mDNS name resolution just works everywhere in iOS (and in Linux with Avahi and
on Windows with Apple’s Bonjour) but doesn’t seem to be something Android
offers (although mDNS-SD does work).</li>
<li>iMessage. Apple does iMessage very well. I’ve yet to find an app on
Android which comes anywhere close to iMessage’s abilities and which doesn’t
provide plain text to Google.</li>
<li>Timely software/security updates. My G5 is still on the July 2017 security
update while my wife’s iPhone has gotten multiple security updates that fix
CVEs since the iOS 11 release in September. This can be resolved by using
Lineage OS but I really hope LG does better after Oreo lands.</li>
</ul>
<p>But for a $250 phone, compared to paying much more for an Apple phone, I went
into this understanding that there’d be trade-offs and changes from iOS and that
likely I’d need to put in some time and effort to get a 3rd party ROM working in
the long run.</p>
<p>Just today, November 28th, 2017, B&H Photo now lists the RS988 model as
discontinued. There are still a few vendors online who will sell a new one but I
expect those will decline fairly quickly now. The G5 seems to have been a
fairly unloved phone by review sites so hopefully the used market will provide
the ability to buy low cost gently used RS988 phones in the near future so that
I can get one cheap to work on Lineage OS with :)</p>
<h1>ArchiTech Hachiko Board UART Sanity Rework Exact Steps</h1>
<p>2017-07-14, http://www.bradfordembedded.com/2017/07/hachiko-uart-sanity-rework</p>
<p>I have an ArchiTech Hachiko development board which has a Renesas RZ/A1 ARM SoC
on it. I’m quite excited about this SoC as it has 10 MB of SRAM inside the
package along with a 400 MHz ARM Cortex-A9 processor, which is enough to easily
run real Linux without requiring any external SDRAM.</p>
<p>Documentation on this board can be found from the vendor:</p>
<ul>
<li>http://architechboards-hachiko-tiny.readthedocs.io/en/latest/</li>
<li>http://downloads.architechboards.com/doc/Hachiko/download.html</li>
</ul>
<p>The design of the ArchiTech Hachiko board uses an FTDI UART to USB chip with a
mini USB port directly on the board. Sadly, this FTDI is connected up such that
when you connect a USB cable to the mini USB port, the entire board gets
powered up. So, if you want to use this FTDI, it’s hard to do interesting
development work for very early boot operations of the SoC.</p>
<p>To fix this, we’re going to do a little rework of the board so we can use an
external 3.3 V logic level FTDI cable (like you’d buy for BeagleBone Black use)
connected to the J2 header pins for Tx and Rx and we’ll find a ground for the
FTDI over on the ARM JTAG header (or anywhere else you can find a ground pin on
the board).</p>
<p>This will let us keep our FTDI UART connection open on a PC while the Hachiko
board is on, off, in reset, or in any other state which might affect the
on-board FTDI chip.</p>
<h2 id="step-1-locate-the-resistors">Step 1: Locate the resistors</h2>
<p>On page 9 of the schematic, locate resistors R15 and R17. We’re going to remove
these two. In the process of the rework, you may find it easier to remove
resistor R16 too, as I’ve done; it won’t be serving any useful purpose, so feel free
to remove it.</p>
<p><img src="http://bradfa.github.io/images/hachiko/hachiko-uart-sanity-rework-schematic.png" /></p>
<p>Now, grab your Hachiko board and find these resistors. They’re near the USB
type A connector.</p>
<p><img src="http://bradfa.github.io/images/hachiko/locate-resistors.jpg" /></p>
<h2 id="step-2-remove-the-resistors">Step 2: Remove the resistors</h2>
<p>Remove the jumper from the J2 header.</p>
<p>Grab your soldering iron or hot air rework station and pull the R15 and R17 (and
R16 if you desire) resistors off. Don’t worry too much if you damage the FTDI
chip itself in this process.</p>
<h2 id="step-3-add-a-green-purple-wire">Step 3: Add a green (purple) wire</h2>
<p>Now, solder down a very short wire between pad 1 of R15 and pad 1 of R17. Be
sure that your wire does not touch any of the other now-exposed pads.</p>
<p><img src="http://bradfa.github.io/images/hachiko/rework-complete.jpg" /></p>
<h2 id="step-4-connect-up-some-wires">Step 4: Connect up some wires</h2>
<p>Connect an orange jumper wire to pin 1 of J2 and a yellow jumper wire to pin 2
of J2. Connect the other ends of each of these wires to the correspondingly
colored wire on the external 3.3 V logic level FTDI cable. Then connect a black
ground wire to pin 20 on header CN1 (the ARM JTAG header) and to the black wire
on the external FTDI cable.</p>
<p><img src="http://bradfa.github.io/images/hachiko/connect-ftdi-wires.jpg" /></p>
<p><img src="http://bradfa.github.io/images/hachiko/ftdi-header-colors-lineup.jpg" /></p>
<h2 id="step-5-connect-ftdi-cable-to-pc-and-open-your-favorite-terminal-program">Step 5: Connect FTDI cable to PC and open your favorite terminal program!</h2>
<p>You should now be able to interface with the Hachiko board in your favorite UART
terminal program!</p>
<p>The default code that comes loaded in the QSPI flash on the board uses UART
settings of 115200 8N1, like most Linux based boards these days.</p>
<h1>Going Deep?</h1>
<p>2017-01-10, http://www.bradfordembedded.com/2017/01/going-deep</p>
<p>So far in my career I’ve always been a generalist. I’ve not focused on any one
detailed topic for very long, as I’ve always bounced between a few different
things day to day both in my career and in my hobby projects. Even my 2017
goals aren’t that focused on any one area of technology, they’re spread quite
wide.</p>
<p>I’m worried that this is spreading me too thin and that it may not be the best
strategy, longer term. But I don’t really know. I’d love to dive deep into one
area of technology for an extended period of time and really be able to master
it.</p>
<p>At my job, this isn’t really possible as I work on a product development team
and my ability to contribute in many different places is well leveraged and
would be sorely missed if I artificially curtailed myself so that I could focus
on just one thing. We simply don’t have enough engineers who overlap my skill
sets to allow me to focus on just one or two things. I do make progress
learning across the broad set of topics that I work on, but it always feels like
2 steps forward and 1 step back, because if I don’t keep working on something my
memory starts to fade in that area over time.</p>
<p>I think going deep into one or two specific topic areas will end up needing to
be a personal endeavour for me. Finding time to do so can be hard, but I need
to try.</p>
<p>Possibly starting the process of becoming a Debian Developer and contributing
to KiCad parts libraries are two things different enough that I can’t
do both well. But contributing to KiCad parts libraries and making open source
hardware projects would go well together…</p>
<p>It’s hard for me to pick one thing to focus on and then stick with it…</p>
<h1>2017 Goals</h1>
<p>2017-01-02, http://www.bradfordembedded.com/2017/01/2017-goals</p>
<p>These are my engineering and computing related goals for 2017. I feel like
writing them down will better enable me to understand what topics I want to
focus on in 2017 and allow me to evaluate if I achieved my goals at the end of
the year.</p>
<h2 id="core-goals">Core Goals:</h2>
<ul>
<li>Begin the process of becoming a Debian Developer by participating more in
Debian, fixing bugs and possibly adopting (via NMU and the mentors program, at
first) some orphaned packages that I care about.</li>
<li>Start and contribute regularly to a website that tracks my accomplishments (even
small ones), both those which can be shared publicly and those which should stay
mostly private (i.e., work-related, non-open-source accomplishments).</li>
<li>Continue to run my NTP server in India and try to help improve the state of the
India NTP pool.</li>
<li>Continue to avoid social media usage as it is distracting and addicting for me.
Although this will reduce the amount that I am able to interact with my Internet
Friends, when I’m not on social media I have found that I am much more focused
on my family during family times and on my work during work times.</li>
<li>Create at least 1 new open hardware design and have it manufactured, using a
structured design methodology with benefits similar to the Time To Market
systems used at the companies I’ve worked at, so that I stay focused on what I
am building and why.</li>
<li>Contribute to the KiCad parts library and footprint libraries.</li>
<li>Continue to contribute to the Cross Linux from Scratch project.</li>
</ul>
<h2 id="reach-goals">Reach Goals:</h2>
<ul>
<li>Become a KiCad librarian who has commit access to the KiCad library and
footprint git repos.</li>
<li>Create 2 or more open source hardware board designs and have them manufactured.</li>
<li>Actually become a Debian Developer.</li>
<li>Get a Technician class amateur radio (ARRL) license.</li>
</ul>
NTP Pool Server in India2016-10-24T00:00:00-04:00hhttp://www.bradfordembedded.com/2016/10/ntp-pool-india<p>About a month ago, <a href="https://lwn.net/Articles/701222/">LWN ran an article about the NTP pool system</a>.
I’ve been meaning to learn more about NTP and how the pool system works because
I’d like to be able to set up a vendor zone for my dayjob employer in the near
future, so I figured it was as good a time as any to dive right in!</p>
<p>I wanted to set up my server somewhere that it would be hit hard and also
provide services to an underserved part of the world. India was the only country
I could find where there was a low cost VPS provider and also a dearth of NTP
pool servers…</p>
<p>So, I now run one of the NTP pool servers in India! You can see my server’s
statistics as measured by the pool monitoring station on <a href="http://www.pool.ntp.org/user/bradfa">my pool user
page</a>. My server is a stratum 2 server, so it tracks the time
from a handful of stratum 1 and other stratum 2 servers. At peak times my
server is serving almost 1 million clients and doing upwards of 10 Mbps inbound
and outbound network traffic, which I find quite impressive!</p>
<p>My server is hosted on <a href="https://m.do.co/c/38c608229292">Digital Ocean</a> (that’s a referral link)
in their BLR1 zone in Bangalore, India on a $5/month droplet, and I’m quite
happy with it. Currently, Digital Ocean does not actually track network
transfer totals, which is a good thing, as I expect my total monthly transfer
will be in the 2 to 3 TB range.</p>
<p>Although at this point my NTP pool account lists an IPv6 server, it is due to
be removed from the pool in a week or so. When my server is in the IPv6 pool,
there are times when CPU usage goes extremely high and my server starts to fall
behind in responding to NTP requests, but I’ve not yet been able to debug why
this sometimes happens. By removing my server from the IPv6 pool, this issue
completely goes away, so at least for the short term, this is my remedy.</p>
<p>It’s interesting to watch my network transfer rates: during normal business
hours there’s a steady level of NTP traffic at about 8 Mbps; around dinner time
it rises to a peak of around 10 Mbps; then around midnight it starts to taper
off to a minimum of about 2 Mbps. This cycle repeats consistently, day to day
and week to week. I still need to set up local monitoring and have it produce
pretty graphs, I just haven’t gotten to that yet.</p>
<p>The official <a href="http://www.pool.ntp.org/en/join.html">“How do I join pool.ntp.org”</a> web page states,
<em>“Currently most servers get about 5-15 NTP packets per second with spikes a
couple of times a day of 60-120 packets per second. This is roughly equivalent
to 10-15Kbit/sec with spikes of 50-120Kbit/sec.”</em> which might be true for
servers in the west, but is definitely not true for underserved parts of the
world. I see thousands of packets per second on my server.</p>
<p>If you’d like to set up an NTP server and have it join the world-wide NTP pool
system, I highly recommend it! You can find underserved parts of the world to
focus on by drilling down within the <a href="http://www.pool.ntp.org/zone/@">Global region</a> listing. I’ve
found that the Africa and Asia regions are not well served, especially India and
China, but both countries have quite large online populations. In my
experience, it’s much easier to find hosting providers in India, so that’s
where I focused.</p>
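For reference, a pool member is just a normal, well-behaved ntpd pointed at good upstream sources; nothing pool-specific is needed beyond a reachable UDP port 123. Here is a minimal /etc/ntp.conf sketch (the upstream hostnames are placeholders; for a pool member, pick specific nearby stratum 1/2 servers rather than pool aliases):

```
driftfile /var/lib/ntp/ntp.drift

# Upstream time sources (placeholder hostnames)
server time1.example.com iburst
server time2.example.com iburst
server time3.example.com iburst
server time4.example.com iburst

# Answer time queries from anyone, but disallow state modification/queries
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
```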
Thinkfan2016-10-17T00:00:00-04:00hhttp://www.bradfordembedded.com/2016/10/thinkfan<p>I have a Lenovo Thinkpad T420s. It’s a nice little laptop. But the fan runs,
constantly, and annoys me. Thankfully, there’s a nice tool called
<a href="http://thinkfan.sourceforge.net/">thinkfan</a>!</p>
<p>On Debian Stretch with a Linux 4.x kernel, here’s how you can set it up to work
so that your fan only spins up at temperatures above normal levels, saving your
battery and keeping you from going insane due to fan noise:</p>
<p>Install thinkfan:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo apt install thinkfan
</code></pre></div></div>
<p>Enable the thinkfan service in systemd:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo systemctl enable thinkfan.service
</code></pre></div></div>
<p>Make sure the thinkpad_acpi kernel module loads with fan control enabled:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ echo "options thinkpad_acpi fan_control=1" | \
sudo tee /etc/modprobe.d/thinkpad_acpi.conf
</code></pre></div></div>
<p>Create a thinkfan.conf file (on Debian, at /etc/thinkfan.conf) with sensible
fan levels for sensible temperature ranges. The following example keeps the fan
off until the CPU temperature reaches 48 degrees C and turns the fan on
full-blast above 63 degrees C:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># For Linux 4.x kernels which no longer have the /proc interface
hwmon /sys/devices/virtual/hwmon/hwmon0/temp1_input
(0, 0, 48)
(1, 45, 52)
(2, 49, 56)
(3, 53, 59)
(4, 56, 62)
(5, 59, 66)
(7, 63, 32767)
</code></pre></div></div>
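Note that the hwmon index in the path above can change between boots and kernel versions, so verify it on your own machine. A quick sketch for listing the sensors (the function name is just mine; the exact paths are machine-specific):

```shell
# Print each hwmon device along with its driver name so you can pick
# the CPU temperature sensor (e.g. "coretemp") for thinkfan.conf.
find_temp_sensors() {
  for d in /sys/class/hwmon/hwmon*; do
    [ -r "$d/name" ] && printf '%s: %s\n' "$d" "$(cat "$d/name")"
  done
  return 0
}
find_temp_sensors
```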
<p>Finally, enable thinkfan to allow itself to start up at boot time:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo sed -i 's|START=no|START=yes|' /etc/default/thinkfan
</code></pre></div></div>
<p>You can now either reboot or start thinkfan manually using systemctl and enjoy
your silence!</p>
Exact Steps - Use OpenSSL to Sign a File2016-06-01T00:00:00-04:00hhttp://www.bradfordembedded.com/2016/06/openssl-file-signing<p>Sometimes you might want to deploy a file, like a tarball, with an embedded
public/private key signature so that a recipient can validate that the file came
from the source they think it came from. This technique is often used for
deploying software updates.</p>
<p>Generally, a public/private key signature is distributed separately from the
file of interest. For example, most Linux distributions provide a signature
digest file or hash checksum value that people who download the distribution
installer can check the installer against. This works fine, but it requires 2
files; it can be more convenient to have both the file itself and the signature
concatenated together in such a way that it’s easy to work with.</p>
<p>A way that I’ve found to accomplish this using OpenSSL (so as to avoid using
GnuPG) follows these exact steps:</p>
<p>First, create a 2048-bit RSA private key using openssl (<strong>KEEP THIS PRIVATE KEY
SECRET!</strong>):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>openssl genrsa -out private.pem 2048
</code></pre></div></div>
<p>Then, create the associated public key for this private key:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>openssl rsa -in private.pem -out public.pem -outform PEM -pubout
</code></pre></div></div>
<p>Now you have <code class="language-plaintext highlighter-rouge">private.pem</code> and <code class="language-plaintext highlighter-rouge">public.pem</code>. Distribute <code class="language-plaintext highlighter-rouge">public.pem</code> to all
entities which need to validate that a file was signed by <code class="language-plaintext highlighter-rouge">private.pem</code> and
keep <code class="language-plaintext highlighter-rouge">private.pem</code> in a secure location (and don’t lose it!).</p>
<p>Finally, to create a signature of a file and then append that signature to the
file:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>openssl dgst -sha256 -sign private.pem -out digest tarball.tar.gz
cat tarball.tar.gz digest > tarball.tar.gz.signed
</code></pre></div></div>
<p>Now you can distribute one file which has an embedded public/private key
signature! :)</p>
<p>If you receive a concatenated input file plus digest, you will need to first
extract the digest portion to a file on its own and then remove the digest from
the original file. This can be done using dd:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SIZE=$(stat -c%s tarball.tar.gz.signed)
dd if=tarball.tar.gz.signed of=digest bs=1 skip=$((SIZE-256))
dd if=/dev/null of=tarball.tar.gz.signed bs=1 seek=$((SIZE-256))
mv tarball.tar.gz.signed tarball.tar.gz
</code></pre></div></div>
<p>And to validate a given digest file against the file it supposedly signs using
the public key, use openssl again:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>openssl dgst -sha256 -verify public.pem -signature digest tarball.tar.gz
</code></pre></div></div>
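To sanity-check the whole flow end to end, the steps above can be exercised as one self-contained script (file names are examples; the 256-byte offsets assume the 2048-bit key from above):

```shell
set -e
cd "$(mktemp -d)"
echo "example payload" > tarball.tar.gz    # stand-in for a real tarball

# Key pair: 2048-bit RSA, so signatures are 256 bytes
openssl genrsa -out private.pem 2048 2>/dev/null
openssl rsa -in private.pem -out public.pem -outform PEM -pubout 2>/dev/null

# Sender side: sign, then append the signature to the payload
openssl dgst -sha256 -sign private.pem -out digest tarball.tar.gz
cat tarball.tar.gz digest > tarball.tar.gz.signed

# Receiver side: split the signature back off and verify
SIZE=$(stat -c%s tarball.tar.gz.signed)
dd if=tarball.tar.gz.signed of=digest.rx bs=1 skip=$((SIZE-256)) 2>/dev/null
dd if=/dev/null of=tarball.tar.gz.signed bs=1 seek=$((SIZE-256)) 2>/dev/null
openssl dgst -sha256 -verify public.pem -signature digest.rx tarball.tar.gz.signed
```

The last command prints "Verified OK" when the signature matches.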
Exact Steps - Backups with bup to a USB disk2016-04-15T00:00:00-04:00hhttp://www.bradfordembedded.com/2016/04/backups-with-bup<p><strong>EDIT: bup backups to a remote location don’t exactly work as you’d expect (I
learned this the hard way) so take these steps with a grain of salt</strong></p>
<p>I recently learned about the <a href="https://github.com/bup/bup">bup</a> backup tool!
It’s pretty neat. It stores backups like git does, but deals with big files in
a fairly efficient way.</p>
<p>I like to do backups to an external USB disk so that I can back up more than a
single machine to a given disk, and so I can take the disk with me to avoid
having all my data get lost if there’s a catastrophe at a single location. The
downside to using an external disk like this is that I use luks encryption on
the disk and it’s not always plugged in, so doing something like daily backups
is slightly harder. That can be overcome with cron jobs and by storing a luks
key on my machine(s); for example, I could encrypt a luks key with gnupg using
my Yubikey NEO so as to avoid storing a plaintext luks key on my disk.</p>
<p>But regardless, here are my exact steps for getting started with bup:</p>
<ol>
<li>Mount hard disk to /mnt/usb (you are strongly encouraged to use luks
encryption on this disk!)</li>
<li><code class="language-plaintext highlighter-rouge">mkdir /mnt/usb/.bup && bup init -r /mnt/usb/.bup</code> to create and initialize
a .bup directory on the USB disk. You may need to do some sudo and chmod/chgrp
action depending on how your USB disk gets mounted. You’ll save the actual
files from backups here later.</li>
<li><code class="language-plaintext highlighter-rouge">bup init</code> to create the core bup repo in your home directory; the index
will live here</li>
<li><code class="language-plaintext highlighter-rouge">bup index ~/</code> to index your home directory</li>
<li><code class="language-plaintext highlighter-rouge">bup index /etc</code> to index your /etc directory</li>
<li><code class="language-plaintext highlighter-rouge">bup save -n ${HOSTNAME}-etc -r /mnt/usb/.bup /etc</code> to save the backup of
your /etc directory to your USB disk. This backup set will result in a branch
named after ${HOSTNAME}-etc, so that if you backup other PCs to this same .bup
directory on your USB disk, you’ll be able to see each one in a separate
branch.</li>
<li><code class="language-plaintext highlighter-rouge">bup save -n ${HOSTNAME}-home -r /mnt/usb/.bup ~/</code> to save the backup of
your home directory to your USB disk. Similarly, your home directory will go
in a ${HOSTNAME}-home branch.</li>
</ol>
<p>To make a future incremental backup, mount your disk and start from step 4.</p>
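Those incremental runs are easy to wrap in a small script you can call from cron once the disk is mounted (a sketch only; the function name, mount point, and branch names are examples based on the steps above):

```shell
backup_to_usb() {
    # Only run when the backup disk is actually mounted, so a cron
    # invocation is a safe no-op when the disk is unplugged.
    if ! mountpoint -q /mnt/usb 2>/dev/null; then
        echo "backup disk not mounted, skipping"
        return 0
    fi
    bup index /etc "$HOME"
    bup save -n "$(hostname)-etc"  -r /mnt/usb/.bup /etc
    bup save -n "$(hostname)-home" -r /mnt/usb/.bup "$HOME"
}
backup_to_usb
```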
<p>If you have more than one user on a given PC, you may want to backup the entire
/home directory, or you may want to pick a more descriptive name for the backup
branch which includes the username.</p>
EC2 Computer Costs2015-09-14T00:00:00-04:00hhttp://www.bradfordembedded.com/2015/09/ec2-computer-costs<p>I’ve <a href="http://www.bradfordembedded.com/2012/07/next-computer/">previously written</a> about not buying another desktop computer but
instead simply getting a nice portable laptop and monitor setup and then using
Amazon’s cloud computers for computation. So I figured I should actually try
that and see what it’s like.</p>
<p>I did a little research on price points and how AWS’s EC2 actually works, then
tried out a compiling work-load. Most modern laptops can handle every other
task with ease except big compiles, so it’s really the only use-case that makes
sense for me. My test case was to build the Yocto Project’s Poky
“core-image-minimal” targeting the Beagle Bone.</p>
<p>Building Poky is just a huge set of compiling, first a bunch of tools for the
host machine, and then a root file system and Linux kernel for a target (in this
case the Beagle Bone), so I felt it was a good example to show how much it will
cost to get the computation of a modern workstation when using AWS’s EC2. All
my tests were run with the “fido” release of Poky and do not include performing
the fetching of software packages (as this might be very network dependent and
not very consistent).</p>
<p>Here are my findings:</p>
<p>My day job workstation is an <em>Intel Xeon E3-1270v3</em> based machine and it can
build the “fido” release of Poky for Beagle Bone in about <em>32 minutes</em> (not
including fetching all the downloads). I don’t consider this to be “fast” but
it’s certainly not slow for a modern workstation, but this is my reference as
it’s the fastest machine I use and buying one is somewhere in the $1500 range.</p>
<p>My personal laptop is an <em>Intel Core2Duo</em> based machine running at a screaming
1.7 GHz which was donated to me by my sister a few years ago after she finished
using it as her college laptop. It was a nice little laptop when new and still
runs fine, so it works great for me as it was free. It takes <em>226 minutes</em> to
build Poky, which is basically unusable.</p>
<p>I then configured a Debian Wheezy (old stable now) HVM EBS root backed (64 GB of
SSD gp2 EBS volume) instance on EC2 and ran it with various EC2 instance types
to compile Poky. In total, I believe my full AWS costs for this experiment were
in the $1.60 range, so quite affordable to try out a few workstations :)</p>
<p>On a <em>t2.micro</em> instance which has <em>1 shared CPU</em> and costs <em>$0.013/hour</em>, I
stopped the build after <em>559 minutes</em> because it wasn’t even half way complete
and I found that AWS will allow full usage of the CPU for a short while but then
throttle it to only allow up to 10% usage. For doing normal things like
fetching software or serving up a simple blog, this wouldn’t be a bad choice,
but it’s not even an old laptop replacement in terms of computation.</p>
<p>On an <em>m4.large</em> instance which has <em>2 CPUs</em> and costs <em>$0.126/hour</em>, I built
Poky for Beagle Bone in <em>136 minutes</em>. This is definitely faster than my laptop
but is not a modern workstation.</p>
<p>On a <em>c4.xlarge</em> instance (which uses Intel Xeon E5-2660v3 CPUs) which has <em>4
CPUs</em> and costs <em>$0.22/hour</em>, I built Poky for Beagle Bone in <em>59 minutes</em>.</p>
<p>On a <em>c4.2xlarge</em> instance (again, E5-2660v3 based) which has <em>8 CPUs</em> and costs
<em>$0.441/hour</em>, I built Poky for Beagle Bone in <em>33 minutes</em>. I did find,
however, that the “8 CPUs” are likely 4 physical cores plus 4 hyperthreads, so
it’s not “really 8 CPUs” but more like 4 cores with HT.</p>
<p>In conclusion, it seems like if you want a modern workstation on AWS EC2, you’re
going to pay at least $0.441 per hour of usage, plus EBS fees (SSD gp2 costs
$0.10 per GB-month, so my 64 GB SSD gp2 EBS disk would have cost me $6.40/month).
If you were to step up to a c4.4xlarge instance at $0.882/hour and use a
slightly larger EBS SSD disk, I think you’d have quite a nice “workstation”. I
didn’t find EBS’s gp2 SSD disks to be slow at all, although my understanding is
that a larger disk gets you more guaranteed IOPs and burst IOPs, so going
slightly bigger is probably better, as the cost is not that much compared to the
EC2 instance costs for decent processing power.</p>
<p>If I was working as someone who could charge an hourly rate and compiling was a
bottleneck for me, I would definitely consider using EC2 for some compiling
work-loads rather than buying a new Xeon E5 workstation. Scaling the CPU power
up and down is easy and if you don’t need to shell out a few thousand dollars up
front to buy a workstation, spending $0.882/hour (or even stepping up to the
c4.8xlarge at $1.763/hour) could easily be justified for a few hours per day of
usage while still staying “reasonable” in cost.</p>
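As a rough sanity check on that justification, the break-even against buying the ~$1500 Xeon E3 workstation works out to thousands of c4.2xlarge hours (ignoring EBS fees and the workstation’s resale value):

```shell
# Hours of c4.2xlarge time ($0.441/hour) that $1500 buys
awk 'BEGIN { printf "%.0f hours\n", 1500 / 0.441 }'
# prints "3401 hours"
```

At a few hours of compiling per day, that is years of usage before the instance costs alone catch up to the workstation’s purchase price.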
<p>The one annoying thing I found with using EC2 this way was that every time I
launched an instance I got a new public IP address, so using my own DNS was not
practical and my ssh known_hosts file quickly filled with stale entries. Amazon
offers a service to avoid this (an Elastic IP), where you reserve a public IP
address for your own use but pay $0.005 for each hour that the IP is not bound
to a running instance. So basically it’s roughly $3.50/month to reserve an IP
address. If I were actually using EC2 for work, I’d definitely do this to keep
a sane IP address setup.</p>
<p>Overall, I now understand AWS EC2 slightly better and would highly recommend
using it if you have a need for high compute power. It’s not as convenient as
having a workstation in your office, but it can be a reasonably priced way to
get compiling done.</p>
Debian Jessie and LVM Cache Exact Steps2015-03-26T00:00:00-04:00hhttp://www.bradfordembedded.com/2015/03/lvmcache<p>The whole concept of using a pile of big disks as a RAID in order to get lots of
storage and some redundancy has always intrigued me. Back in 2006, when I last
built a home computer for myself, I bought a PCI-X battery backed RAID card and
loaded that system up with four 400 GB disks (not small for the time). It was
very nice to know that my data wouldn’t get lost due to an unexpected power loss
and that the RAID card (with 256 MB of RAM itself) would do some nice caching
for me to speed things up.</p>
<p>But now I’m a bit more thrifty ($300 for a RAID card is not in my budget!) and
I’ve gotten addicted to the disk speeds of SSDs.</p>
<p>Hence, the introduction of dm-cache into Linux and the lvmcache capability in
the lvm2 tools is very interesting to me, as now I can have a software RAID
(without some of the nice features the hardware RAID had, but with a $0 price
tag) and use an SSD as a cache for the big slow disks!</p>
<p>There are a few decent resources on the web describing how to build up your
disks into a software RAID and then add dm-cache on top of it, but I wanted to
put all my personal experiences in one place for my own reference.</p>
<p>You’ll need at least Linux 3.9 in order to enable dm-cache and you’ll need a
relatively recent version of the lvm2 tools (I am using lvm2 v2.02.111 in Debian
Jessie whose dependencies include a bunch of systemd things which may make
backporting it to wheezy not super simple). You can, in theory, make this work
on Debian Wheezy if you can backport (or simply package) a modern lvm2 tools
package and use a backported 3.9 or newer Linux kernel.</p>
<p>My test installation virtual machine has 3 big disks (20GB each) and 1 smaller
SSD (8GB). The big disks are <code class="language-plaintext highlighter-rouge">/dev/sda</code>, <code class="language-plaintext highlighter-rouge">/dev/sdb</code>, and <code class="language-plaintext highlighter-rouge">/dev/sdc</code>
while the SSD is <code class="language-plaintext highlighter-rouge">/dev/sdd</code>.</p>
<p>During install, I created 3 partitions on each of the big disks:</p>
<ol>
<li>sdX1: 256 MB partition for software RAID</li>
<li>sdX2: 2 GB partition for swap</li>
<li>sdX3: remaining space for software RAID</li>
</ol>
<p>Then, setting up the software RAID in the Debian installer, combine all of the
sdX1 partitions into a RAID1 and all of the sdX3 partitions into a RAID5.</p>
<p>Set up the sdX1 RAID1 as an ext4 file system and configure it to mount to
<code class="language-plaintext highlighter-rouge">/boot</code>. Set up the sdX3 RAID5 to be used for LVM. Set up each of the
sdX2 partitions (all 3, independently) as swap space (the swapping will take
advantage of all the drives in parallel so no need to burden them with software
RAID).</p>
<p>Set up the LVM space to split into two logical volumes, one for <code class="language-plaintext highlighter-rouge">/</code> and one for
<code class="language-plaintext highlighter-rouge">/home</code>. Format each as ext4 with the proper mount locations.</p>
<p>Continue with the Debian installer as normal. After the installer finishes, I
found that grub was not installed correctly and I had to manually help it out a
bit with:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ grub-install --recheck /dev/sda
$ grub-install --recheck /dev/sdb
$ grub-install --recheck /dev/sdc
</code></pre></div></div>
<p>Manually check that your <code class="language-plaintext highlighter-rouge">/etc/fstab</code> is correct, too. Just in case.</p>
<p>Once you can boot into your system, set up caching for both the <code class="language-plaintext highlighter-rouge">/</code> and
<code class="language-plaintext highlighter-rouge">/home</code> logical volumes.</p>
<p>Set up <code class="language-plaintext highlighter-rouge">/dev/sdd</code> as a physical volume:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo pvcreate /dev/sdd
</code></pre></div></div>
<p>My volume group (which contains the <code class="language-plaintext highlighter-rouge">/</code> and <code class="language-plaintext highlighter-rouge">/home</code> lvs) is called <code class="language-plaintext highlighter-rouge">vg0</code>, so to
add the <code class="language-plaintext highlighter-rouge">/dev/sdd</code> pv to <code class="language-plaintext highlighter-rouge">vg0</code> you need to extend <code class="language-plaintext highlighter-rouge">vg0</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo vgextend vg0 /dev/sdd
</code></pre></div></div>
<p>Now create a pair of 3.6 GB cache logical volumes and a pair of 32 MB cache
metadata logical volumes (one cache and cache metadata for each lv you want to
cache):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo lvcreate -n lvrootcache -L3.6G vg0 /dev/sdd
$ sudo lvcreate -n lvhomecache -L3.6G vg0 /dev/sdd
$ sudo lvcreate -n lvrootcachemeta -L32M vg0 /dev/sdd
$ sudo lvcreate -n lvhomecachemeta -L32M vg0 /dev/sdd
</code></pre></div></div>
<p>Turn your cache lvs and cachemeta lvs into cache pools (one cache and one set
of metadata per cache pool; ideally you might want to put the metadata on a
different physical SSD from the cache itself):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo lvconvert --type cache-pool --cachemode writeback --poolmetadata \
vg0/lvhomecachemeta vg0/lvhomecache
$ sudo lvconvert --type cache-pool --cachemode writeback --poolmetadata \
vg0/lvrootcachemeta vg0/lvrootcache
</code></pre></div></div>
<p>Lastly, configure each of the actual logical volumes (root and home) to be
cached by each of the cache pools:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo lvconvert --type cache --cachepool vg0/lvrootcache vg0/lvroot
$ sudo lvconvert --type cache --cachepool vg0/lvhomecache vg0/lvhome
</code></pre></div></div>
<p><strong>Now, before you boot up with this caching configuration, make sure your kernel
actually supports dm-cache by ensuring it has CONFIG_DM_CACHE set to <code class="language-plaintext highlighter-rouge">y</code> or, if
it is built as a module, that the module gets loaded during early boot!</strong> (Debian
Jessie’s present 3.16.0-4 kernel on amd64 does not, so booting is fun!)</p>
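A quick way to check before rebooting (a sketch; the function name is mine, and the /boot/config-* path is the Debian convention, adjust for your distro):

```shell
# Report whether the running kernel was built with dm-cache support
# (=y builtin, or =m module; if =m, the module must end up in the initramfs).
dm_cache_status() {
  if grep -qs '^CONFIG_DM_CACHE=[ym]' "/boot/config-$(uname -r)"; then
    echo "dm-cache available"
  else
    echo "dm-cache NOT available in this kernel config"
  fi
}
dm_cache_status
```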
<p><strong>EDIT TO ADD</strong></p>
<p>In addition to all of this configuration, you’ll also need the
<code class="language-plaintext highlighter-rouge">thin-provisioning-tools</code> package and to ensure that your initramfs gets built
with the proper modules included and the <code class="language-plaintext highlighter-rouge">cache-check</code> program. There’s a nice
overview on the <a href="http://forums.debian.net/viewtopic.php?f=5&t=119644">Debian forum</a> which includes this script which you
should place into the <code class="language-plaintext highlighter-rouge">/etc/initramfs-tools/hooks/</code> directory so that when you
run <code class="language-plaintext highlighter-rouge">update-initramfs -u</code> that the right things happen:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/sh
PREREQ="lvm2"
prereqs()
{
echo "$PREREQ"
}
case $1 in
prereqs)
prereqs
exit 0
;;
esac
if [ ! -x /usr/sbin/cache_check ]; then
exit 0
fi
. /usr/share/initramfs-tools/hook-functions
copy_exec /usr/sbin/cache_check
manual_add_modules dm_cache dm_cache_mq
</code></pre></div></div>
<p>I’m also tracking some of my own notes on this work in a <a href="https://gist.github.com/bradfa/8845aac14a4fd408b5ac">gist</a>.</p>
Debian Backporting Exact Steps2015-03-17T00:00:00-04:00hhttp://www.bradfordembedded.com/2015/03/debian-backporting<p>I’ve been trying to learn more about building Debian packages and I’ve been
learning how to design printed circuit boards. Hence, I’ve attempted to
backport a more recent version of kicad to Debian Wheezy!</p>
<p>This is a rough outline of the steps I took to do a backport:</p>
<p>Configure your <code class="language-plaintext highlighter-rouge">~/.devscripts</code> file to specify the GPG key you want to use by
defining <code class="language-plaintext highlighter-rouge">DEBSIGN_KEYID=</code> or be sure to pass the proper key identification
switches to <code class="language-plaintext highlighter-rouge">dpkg-buildpackage</code> or <code class="language-plaintext highlighter-rouge">debsign</code> as needed later.</p>
<p>Get the sources from Debian testing by adding the testing repo to your
<code class="language-plaintext highlighter-rouge">/etc/apt/sources.list.d</code> directory (just a <code class="language-plaintext highlighter-rouge">deb-src</code> line as you probably don’t
want to actually install packages from testing on your stable). Then do an
<code class="language-plaintext highlighter-rouge">apt-get update</code>. Then you can get the sources from testing with an <code class="language-plaintext highlighter-rouge">apt-get
source $PKGNAME=$VERSION</code> to get the sources for a given version.</p>
<p>You may need to have the backports archive set up to pull in binary packages if
the thing you’re backporting has dependencies on newer libraries or whatnot.
Follow the nice instructions for adding <a href="http://backports.debian.org/Instructions/">backports</a> to your apt.</p>
<p>Now get the build dependencies for the package you’re trying to backport with
<code class="language-plaintext highlighter-rouge">apt-get build-dep $PACKAGE=$VERSION</code>.</p>
<p>Enter into the source directory for your package. Update the changelog to show
that you’re backporting with <code class="language-plaintext highlighter-rouge">dch --bpo</code> (and if you have dch from wheezy, it’ll
show <code class="language-plaintext highlighter-rouge">squeeze-backports</code> which you should correct to <code class="language-plaintext highlighter-rouge">wheezy-backports</code>).</p>
<p>Then build the package with <code class="language-plaintext highlighter-rouge">dpkg-buildpackage</code>. Add the <code class="language-plaintext highlighter-rouge">-sa</code> switch if you
need to include the full sources (aka, this isn’t software already in Debian),
add <code class="language-plaintext highlighter-rouge">-j3</code> (adjust for your machine’s number of CPUs) to parallelize the build,
and add the <code class="language-plaintext highlighter-rouge">-v$STABLEVERSION</code> switch if there has been no backport of this
package yet, so that the changes file includes all the changes since the stable
version was released.</p>
<p>If for some reason your package didn’t have its dsc and changes files signed by
<code class="language-plaintext highlighter-rouge">dpkg-buildpackage</code>, then use <code class="language-plaintext highlighter-rouge">debsign</code> to sign them now (telling <code class="language-plaintext highlighter-rouge">debsign</code>
which key to use if needed).</p>
<p>Now you have a full set of output .deb files, a changes file, a dsc file, and
both the upstream packaged sources and debian patches in tarballs!</p>
<p>Upload this to <a href="https://mentors.debian.net/">Debian Mentors</a> following the
<a href="https://mentors.debian.net/intro-maintainers">instructions</a> (skip to section 4).</p>
<p>Once you’ve uploaded the package to Debian Mentors, ask for someone to review on
the proper mailing list!</p>
MeRoot2015-02-04T00:00:00-05:00hhttp://www.bradfordembedded.com/2015/02/meroot<p>I often dabble with cross compilers and building little embedded systems. The CLFS
<a href="http://clfs.org/view/clfs-embedded/">embedded book</a> basically does this, but you end up with a system that can’t
compile anything as it lacks a native compiler. The CLFS main book does this too,
but the resulting root file system is quite large and full of tools which you
may not really want.</p>
<p>I wanted to see if I could bootstrap together a root file system which was
capable enough to build itself again and to build software using <a href="http://www.pkgsrc.org/">pkgsrc</a>.</p>
<p>So, I built <a href="https://github.com/bradfa/meroot">MeRoot</a>.</p>
<p>It’s definitely not perfect, but it’s a nice start and I learned quite a bit.</p>
FTDI Hate2015-01-13T00:00:00-05:00hhttp://www.bradfordembedded.com/2015/01/ftdi-hate<p>Lots of Linux development kits these days come with FTDI chips soldered down and
a nice little USB port which the main default UART terminal is piped through.
Be it multi-hundred dollar system on module evaluation boards or the smallest
low cost consumer hobby market kit.</p>
<p>This concept of using an FTDI part on a Linux dev kit needs to die! NOW!</p>
<p>If you are developing on Linux for a Linux embedded target, you should be smart
enough to figure out how to add a serial port to your computer. Like a real,
proper signal levels, serial port.</p>
<p>My main gripes with FTDI serial to USB converters being on-board are twofold:</p>
<ol>
<li>
<p>When I plug in the USB cable to the board, the board boots since USB provides
power, and if there’s anything I wanted to do immediately at boot time, I have
to play stupid tricks with the reset button (assuming one exists on the board)
to catch the board coming out of cold-ish boot (like, capture early u-boot
output or loading data over XYZmodem). This is completely avoided by using a
cheap power supply and a real serial port (real serial ports don’t generally
provide power).</p>
</li>
<li>
<p>Plugging and unplugging the board from my PC may change the serial port
numbering used by the OS (yes, there are ways around this but the default is
like this). Be it Windows, where God chooses a new COM port number for you, or
Linux, where you plug in something else with an FTDI on it and now your board is
1 count higher in /dev/ttyUSB number-land.</p>
</li>
</ol>
<p>To avoid all of this, a little solder along with a serial port level converter
and a 9 pin D-shell connector goes a long way.</p>
<p>STOP USING FTDI PARTS ON DEV KITS!</p>
<p>Please!</p>
<p>I get why this happens: lots of people don’t have serial ports anymore, but
everyone has USB ports. But by using an FTDI, lots of really neat things you
can do with serial ports aren’t easily possible, like using a serial console
switcher or wonky data rates, both of which are handy at times.</p>
<p>But mostly, it’s just based on “Get off my lawn!”</p>
LC Baluns2014-07-10T00:00:00-04:00hhttp://www.bradfordembedded.com/2014/07/lc-balun<p>Sometimes, when working with RF, there will come a situation where one device
being used has a balanced input or output but the device which it needs to
electrically connect to has a single-ended or unbalanced input or output.
Often these devices will also have different impedances. In these situations,
you need to construct a balun (balanced to unbalanced) circuit to bridge these
two types of RF terminals together.</p>
<p>Balanced RF ports usually are designed to drive or be driven by something like a
dipole antenna. Unbalanced RF ports usually are designed to drive or be driven
by something like a typical 50 ohm coax cable or a monopole antenna. Attempting
to directly connect a balanced RF port to an unbalanced one, even if the
impedances match, will result in poor RF performance.</p>
<p>Usually baluns are made with transformers. Most every amateur radio design book
will show how to construct a magnetic core balun for use at various frequencies
and power levels. But in low cost consumer or similar circuit board designs, a
transformer based balun may be too expensive or too large, or simply impractical
for other reasons.</p>
<p>When a transformer based balun is not reasonable for the design, one option is
to create an LC balun, using only capacitors and inductors.</p>
<p>The goal is to take each of the balanced port pins and introduce a 90 degree
phase shift (one +90, one -90) while matching the impedance to, for example, 50
ohms, such that the unbalanced device has a proper reference.</p>
<p>You can see an example of a 868/915 MHz LC balun in the <a href="http://www.ti.com/general/docs/lit/getliterature.tsp?genericPartNumber=cc1101&fileType=pdf">datasheet for TI’s
CC1101 radio</a> in figure 11 on page 25:</p>
<p><img src="http://bradfa.github.io/images/cc1101match.png" /></p>
<p>A few different things are happening in the circuit between the antenna and pins
12 and 13 (RF_P and RF_N), but since the output antenna is single ended (usually
an SMA connector for coax cable or a monopole PCB antenna in most CC1101
designs), a balun is needed. Since the CC1101 RF port has an impedance of 86.5
+ j43 ohms at 868/915 MHz, as shown in section 4.3 on page 16, the impedance
also needs to be matched to 50 ohms to match the antenna.</p>
<p>So what the heck is going on here?</p>
<p>Let’s assume we want to tune this circuit for 900 MHz, that should get us close
to the ideal for covering both the 868 and 915 MHz bands. With this assumption,
that +j43 ohms reactance of the CC1101 needs to be canceled out. To do this,
L131 and L121 add to the inductance of the CC1101 such that a standard 1 pF
capacitor can cancel out all of the reactance.  A 1 pF cap at 900 MHz is roughly
-j176 ohms, while the pair of inductors (L131 and L121) add to about +j135 ohms.
Adding the CC1101 reactance of +j43 ohms to the +j135 ohms and subtracting the
C121’s -j176 ohms nets out to roughly zero reactance (it’s slightly off at 900
MHz, but that’s forgivable since the two bands we’re covering are spread apart,
so a small residual is expected). Now, looking across the
nets which span C121 into the CC1101 should look like just the real component of
the CC1101 impedance of 86.5 ohms.</p>
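<p>As a sanity check, the reactance bookkeeping above can be reproduced numerically. This is just a sketch: the +j43 ohm CC1101 reactance and the 1 pF cap come from the text, while the 23.9 nH total for L131 plus L121 is an assumed value chosen to reproduce the roughly +j135 ohms quoted above.</p>

```python
import math

f = 900e6            # target frequency, Hz
w = 2 * math.pi * f  # angular frequency, rad/s

X_cc1101 = 43.0              # +j43 ohms, CC1101 RF port reactance
X_l = w * 23.9e-9            # L131 + L121 (assumed 23.9 nH total), ~+j135 ohms
X_c = -1.0 / (w * 1e-12)     # C121 = 1 pF, ~-j176 ohms

# The series reactances should nearly cancel at 900 MHz
residual = X_cc1101 + X_l + X_c
print(round(X_l), round(X_c), round(residual))
```

<p>The residual of a couple of ohms is the expected off-tune error, since 900 MHz splits the difference between the 868 and 915 MHz bands.</p>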
<p>At this point, now we can try to match the 86.5 ohms impedance to 50 ohms
(although this isn’t what TI appears to do with their given recommended values
for this circuit). In order to bring the net on the left side of L123 to 50
ohms, which is the simplest balun (but which may have other downsides), we first
have to calculate Rinner. Rinner will be used to calculate the L and C values
which will apply to C131, C122, L132, and L122. These 4 components introduce
the +90 and -90 degree phase shifts and match the impedances, such that
we end up with a real 50 ohm single ended (unbalanced) point on the left side
of L123.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Rinner = sqrt(Zunbal * Zbal)
</code></pre></div></div>
<p>Actual calculations:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>octave:8> Rinner = sqrt(86.5 * 50)
Rinner = 65.765
</code></pre></div></div>
<p>Now to calculate the L and C values using our w (which is 2 * pi * f):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>octave:9> w = 2 * pi * 900E6
w = 5.6549e+09
octave:10> L = Rinner / w
L = 1.1630e-08
octave:11> C = 1 / (Rinner * w)
C = 2.6890e-12
</code></pre></div></div>
<p>Thus, setting C131 and C122 as 2.7 pF and setting L132 and L122 as 11 nH should
get us from the CC1101 to a single ended (unbalanced) 50 ohm match with the
proper phase shifts.</p>
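<p>The same arithmetic can be scripted; here’s a sketch in Python mirroring the octave session above, with the 86.5 ohm balanced and 50 ohm unbalanced impedances taken from the text:</p>

```python
import math

f = 900e6               # design frequency, Hz
w = 2 * math.pi * f     # angular frequency, rad/s

Z_bal = 86.5            # real part of the CC1101 balanced port, ohms
Z_unbal = 50.0          # target single ended impedance, ohms

# Geometric mean of the two impedances sets the L and C values
R_inner = math.sqrt(Z_unbal * Z_bal)

L = R_inner / w         # henries, for L132 and L122
C = 1 / (R_inner * w)   # farads, for C131 and C122

print("Rinner = %.3f ohms" % R_inner)
print("L = %.1f nH, C = %.2f pF" % (L * 1e9, C * 1e12))
```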
<p>We can try this out using the <a href="http://tools.rfdude.com/RFdude_Smith_Chart_Program/RFdude_smith_chart_program.html">RFdude matching software</a> to see the
impedance transform when looking into the 50 ohm side of the circuit:</p>
<p><img src="http://bradfa.github.io/images/balunn.png" /></p>
<p><img src="http://bradfa.github.io/images/balunp.png" /></p>
<p>As each of the resulting impedances’ reactances roughly cancel each other out,
we have successfully transformed from the single ended 50 ohm impedance to the
balanced 86.5 ohm impedance.</p>
<p>I really don’t understand TI’s recommended values given in the CC1101 datasheet
for 868/915 MHz operation. Their match does not work for me. The given topology
and values for their 433 MHz match do work out fairly close when I use RFdude
to evaluate them, but why they don’t cancel the reactance of the CC1101 in the
315/433 MHz topology is a mystery to me, since they do cancel it in the 868/915
MHz topology.</p>
<p>Atmel actually has a nice <a href="http://www.atmel.com/Images/doc8113.pdf">application note</a> on this, and their values
do work out, which is nice.</p>
Flashbenching2014-05-07T00:00:00-04:00hhttp://www.bradfordembedded.com/2014/05/flashbenching<p>I’m co-mentoring a Google Summer of Code project this year which is focused on
the MMC and SD subsystems specifically for TI’s AM335x but more generally for
all device types which interface to MMC and SD cards. The goal is to improve
the performance as much as possible within the Linux kernel for these types of
“disks”.</p>
<p>Flashbench will be used, at least somewhat, for benchmarking SD card
performance. Arnd wrote a great overview of managed flash memory, flashbench,
and how using cheap SD cards like a disk is both good and bad on <a href="http://lwn.net/Articles/428584/">LWN</a> a
while back. You can grab the source for flashbench from either <a href="https://github.com/bradfa/flashbench/tree/dev">my github</a>
or from <a href="http://git.linaro.org/people/arnd.bergmann/flashbench.git">Arnd’s Linaro git</a> repo. My repo’s “dev” branch has a few small
fixes which are not “upstream” in Arnd’s repo.</p>
<p>So, here’s a quick little rundown of important things to capture
with flashbench. These tests are running on a white BeagleBone which has an
external SD card interface wired up to it. Similar tests can be done with a
BeagleBone Black when booting from eMMC so that tests can be run on the microSD
card in the slot.</p>
<p>In this post, I’ll test the <a href="http://www.kingston.com/us/flash/microsd_cards#sdc4">Kingston 4GB microSDHC</a> card which used to ship
with BeagleBones. Don’t worry, I’ve already sent <a href="http://lists.linaro.org/pipermail/flashbench-results/2014-May/000475.html">the results</a> to the
<a href="http://lists.linaro.org/mailman/listinfo/flashbench-results">flashbench-results mailing list</a> (as should you if you test cards with
flashbench).</p>
<p>Grab the info which Linux finds about the Kingston card, and pay attention to
the “name” and “oemid”. The “oemid” often will indicate who has made the
controller within the SD card itself (it’s hex for ASCII, here 0x544d means
“TM”).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@localhost:~# head /sys/block/mmcblk1/device/* 2>/dev/null | grep -v ^$
==> /sys/block/mmcblk1/device/block <==
==> /sys/block/mmcblk1/device/cid <==
02544d5341303447113533890900d371
==> /sys/block/mmcblk1/device/csd <==
400e00325b5900001d177f800a40008d
==> /sys/block/mmcblk1/device/date <==
03/2013
==> /sys/block/mmcblk1/device/driver <==
==> /sys/block/mmcblk1/device/erase_size <==
512
==> /sys/block/mmcblk1/device/fwrev <==
0x1
==> /sys/block/mmcblk1/device/hwrev <==
0x1
==> /sys/block/mmcblk1/device/manfid <==
0x000002
==> /sys/block/mmcblk1/device/name <==
SA04G
==> /sys/block/mmcblk1/device/oemid <==
0x544d
==> /sys/block/mmcblk1/device/power <==
==> /sys/block/mmcblk1/device/preferred_erase_size <==
4194304
==> /sys/block/mmcblk1/device/scr <==
0235800001000000
==> /sys/block/mmcblk1/device/serial <==
0x35338909
==> /sys/block/mmcblk1/device/subsystem <==
==> /sys/block/mmcblk1/device/type <==
SD
==> /sys/block/mmcblk1/device/uevent <==
DRIVER=mmcblk
MMC_TYPE=SD
MMC_NAME=SA04G
MODALIAS=mmc:block
</code></pre></div></div>
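<p>Decoding the “oemid” hex into its ASCII characters is a one-liner; a quick sketch using the 0x544d value shown above:</p>

```python
# Decode an SD card oemid (two bytes, shown in hex by sysfs) into ASCII
oemid = 0x544d
decoded = bytes.fromhex("%04x" % oemid).decode("ascii")
print(decoded)  # "TM"
```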
<p>Get the actual size of the card, in bytes, using fdisk. This can often
help to indicate the eraseblock size. You can factor this number to see what
the prime factors are, often indicating whether a power-of-2 number of bytes is
likely in the eraseblock size.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@localhost:~/flashbench# fdisk -l /dev/mmcblk1
Disk /dev/mmcblk1: 3904 MB, 3904897024 bytes
4 heads, 16 sectors/track, 119168 cylinders
Units = cylinders of 64 * 512 = 32768 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mmcblk1 doesn't contain a valid partition table
root@localhost:~/flashbench# factor 3904897024
3904897024: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 7 7 19
</code></pre></div></div>
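<p>If you don’t have the factor utility handy, trial division is enough for numbers this size; a sketch:</p>

```python
def prime_factors(n):
    """Factor n by trial division; fine for card-sized numbers."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

size = 3904897024  # bytes, from fdisk above
factors = prime_factors(size)
print(factors.count(2))  # 22 twos: the size divides evenly into 4 MiB units
```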
<p>Then we can run the “read” performance test. This can often indicate where
eraseblock boundaries are, where one eraseblock ends and the next begins. This is
important as each eraseblock must be erased all at once (it’s how flash works)
and so if you, for instance, want to change just one bit within an eraseblock
the controller will often copy the entire eraseblock contents to another
eraseblock but with your one bit change. The controller will then set the old
eraseblock to be erased, possibly in the background. Knowing how big each
eraseblock is can be used to align your partitioning scheme with the underlying
media, to improve performance.</p>
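<p>Once you’ve guessed the eraseblock size, aligning a partition start is simple integer arithmetic. A sketch, assuming a 4 MiB eraseblock and 512-byte sectors (matching this card’s preferred_erase_size and sector size shown above):</p>

```python
ERASEBLOCK = 4 * 1024 * 1024  # assumed eraseblock size, bytes
SECTOR = 512                  # logical sector size, bytes

def aligned_start_sector(min_sector):
    """First sector at or after min_sector that lands on an eraseblock boundary."""
    per_eb = ERASEBLOCK // SECTOR             # 8192 sectors per eraseblock
    return -(-min_sector // per_eb) * per_eb  # ceiling division, then scale back

print(aligned_start_sector(63))  # old fdisk default of sector 63 rounds up to 8192
```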
<p>This is just a non-destructive read test. Sometimes read performance when
spanning two eraseblocks will be slower than when reading within only one
eraseblock. The “pre” reads just prior to an expected eraseblock boundary, the “on”
reads spanning an eraseblock boundary, and the “post” reads just after an
eraseblock boundary. Any spot where the “diff” times drop dramatically may
indicate the likely eraseblock size or the likely write page size (write page
size will always be smaller than an eraseblock).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@localhost:~/flashbench# ./flashbench -a /dev/mmcblk1 --blocksize=1024
align 1073741824 pre 1.77ms on 2.4ms post 1.66ms diff 686µs
align 536870912 pre 1.72ms on 2.36ms post 1.64ms diff 684µs
align 268435456 pre 1.75ms on 2.39ms post 1.64ms diff 696µs
align 134217728 pre 1.75ms on 2.35ms post 1.62ms diff 667µs
align 67108864 pre 1.74ms on 2.37ms post 1.62ms diff 695µs
align 33554432 pre 1.75ms on 2.37ms post 1.62ms diff 682µs
align 16777216 pre 1.74ms on 2.37ms post 1.63ms diff 681µs
align 8388608 pre 1.72ms on 2.33ms post 1.62ms diff 658µs
align 4194304 pre 1.66ms on 2.27ms post 1.58ms diff 650µs
align 2097152 pre 1.55ms on 2.19ms post 1.63ms diff 605µs
align 1048576 pre 1.6ms on 2.21ms post 1.66ms diff 576µs
align 524288 pre 1.61ms on 2.21ms post 1.65ms diff 581µs
align 262144 pre 1.6ms on 2.2ms post 1.65ms diff 576µs
align 131072 pre 1.61ms on 2.2ms post 1.64ms diff 580µs
align 65536 pre 1.56ms on 2.16ms post 1.62ms diff 566µs
align 32768 pre 1.56ms on 2.09ms post 1.61ms diff 504µs
align 16384 pre 1.53ms on 2.11ms post 1.59ms diff 544µs
align 8192 pre 1.67ms on 1.67ms post 1.64ms diff 19µs
align 4096 pre 1.72ms on 1.74ms post 1.73ms diff 14.6µs
align 2048 pre 1.75ms on 1.76ms post 1.76ms diff 11.8µs
</code></pre></div></div>
<p>Possibly this Kingston card has 2 or 4 MiB eraseblocks, but it’s not that clear.
The drop from 4 MiB to 2 MiB and again from 2 MiB to 1 MiB means the eraseblock
is probably 2 or 4 MiB. We’ll assume it’s 4 MiB for now.</p>
<p>Next, run some “open-au” tests. An “open-au” (open allocation unit) test will
tell how many of those copy-on-write-then-erase (aka: garbage collection)
operations I mentioned above can happen simultaneously. Cheap controllers can’t
handle more than 1 at a time while high end controllers can sometimes do 30 or
more. Any card which can handle 5 or more “open-au” is quite good.</p>
<p>The “open-au” tests will write, in various sizes down to the blocksize you
specify, to a sequence of eraseblocks. If the controller is able to sustain
more than 1 “open-au” then when running with 2 “open-au” the performance should
be about the same as with 1 “open-au”.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@localhost:~/flashbench# ./flashbench /dev/mmcblk1 --open-au --erasesize=$[4*1024*1024] --blocksize=$[16*1024] --open-au-nr=1
4MiB 6.45M/s
2MiB 5.19M/s
1MiB 5.19M/s
512KiB 5.1M/s
256KiB 5.15M/s
128KiB 5.14M/s
64KiB 5.1M/s
32KiB 4.94M/s
16KiB 3.71M/s
root@localhost:~/flashbench# ./flashbench /dev/mmcblk1 --open-au --erasesize=$[4*1024*1024] --blocksize=$[16*1024] --open-au-nr=2
4MiB 3.88M/s
2MiB 5.19M/s
1MiB 5.12M/s
512KiB 5.06M/s
256KiB 4.99M/s
128KiB 4.77M/s
64KiB 4.64M/s
32KiB 4.53M/s
16KiB 3.38M/s
root@localhost:~/flashbench# ./flashbench /dev/mmcblk1 --open-au --erasesize=$[4*1024*1024] --blocksize=$[16*1024] --open-au-nr=3
4MiB 4.47M/s
2MiB 5.19M/s
1MiB 5.17M/s
512KiB 5.12M/s
256KiB 4.96M/s
128KiB 4.77M/s
64KiB 4.65M/s
32KiB 4.49M/s
16KiB 3.36M/s
root@localhost:~/flashbench# ./flashbench /dev/mmcblk1 --open-au --erasesize=$[4*1024*1024] --blocksize=$[16*1024] --open-au-nr=4
4MiB 6.06M/s
2MiB 4.49M/s
1MiB 2.82M/s
512KiB 1.25M/s
256KiB 607K/s
128KiB 302K/s
^C
</code></pre></div></div>
<p>I’ve stopped the “open-au” test with CTRL-C as it will take a very very long
time to complete once the card gets slow. Here we can clearly see that 3
open-au have good performance, while 4 is a dog.</p>
<p>Now for the random version of the “open-au” test where instead of writing the
eraseblocks in sequence, they are written “randomly” to stress the controller
dealing with writes out of order. For good performance with a file system, you
want this test to show at least 3 “open-au” and reasonable M/s numbers.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@localhost:~/flashbench# ./flashbench /dev/mmcblk1 --open-au --erasesize=$[4*1024*1024] --blocksize=$[16*1024] --random --open-au-nr=1
4MiB 3.07M/s
2MiB 2.13M/s
1MiB 3.24M/s
512KiB 1.44M/s
256KiB 1.76M/s
128KiB 1.91M/s
64KiB 1.36M/s
32KiB 1.18M/s
16KiB 1.2M/s
root@localhost:~/flashbench# ./flashbench /dev/mmcblk1 --open-au --erasesize=$[4*1024*1024] --blocksize=$[16*1024] --random --open-au-nr=2
4MiB 3.03M/s
2MiB 2.58M/s
1MiB 3.26M/s
512KiB 1.44M/s
256KiB 1.75M/s
128KiB 1.9M/s
64KiB 1.36M/s
32KiB 1.17M/s
16KiB 1.18M/s
root@localhost:~/flashbench# ./flashbench /dev/mmcblk1 --open-au --erasesize=$[4*1024*1024] --blocksize=$[16*1024] --random --open-au-nr=3
4MiB 3.03M/s
2MiB 2.78M/s
1MiB 3.25M/s
512KiB 1.44M/s
256KiB 1.76M/s
128KiB 1.9M/s
64KiB 1.36M/s
32KiB 1.18M/s
16KiB 1.19M/s
root@localhost:~/flashbench# ./flashbench /dev/mmcblk1 --open-au --erasesize=$[4*1024*1024] --blocksize=$[16*1024] --random --open-au-nr=4
4MiB 3.33M/s
2MiB 2.92M/s
1MiB 2.48M/s
512KiB 1.2M/s
256KiB 595K/s
128KiB 298K/s
64KiB 150K/s
^C
</code></pre></div></div>
<p>This Kingston card is definitely no speed demon but it isn’t quite as bad as the
<a href="http://lists.linaro.org/pipermail/flashbench-results/2012-February/000252.html">older Kingston card of the same model number</a> I tested 2 years ago. Such
variability within the same model number is not something you want to see as a
customer, since results will vary even though you can’t physically tell the
cards apart.</p>
<p>Lastly, we can check if the first few eraseblocks have any special ability.
Some cards will provide for the first few eraseblocks to be backed by SLC flash
instead of MLC, or otherwise improve the performance of these special
eraseblocks. This is important when using the card with the FAT filesystem as
all the metadata is stored in the beginning of the disk and will get the most
wear and small writes.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@localhost:~/flashbench# ./flashbench /dev/mmcblk1 --find-fat --erasesize=$[4*1024*1024]
4MiB 865K/s 3.56M/s 3.01M/s 5.13M/s 5.14M/s 5.12M/s
2MiB 4.3M/s 4.88M/s 5.11M/s 5.16M/s 5.11M/s 5.3M/s
1MiB 3.99M/s 4.9M/s 5.23M/s 5.18M/s 5.17M/s 5.15M/s
512KiB 3.88M/s 4.81M/s 5.15M/s 5.16M/s 5.16M/s 5.15M/s
256KiB 4.35M/s 4.38M/s 5.17M/s 5.18M/s 5.17M/s 5.16M/s
128KiB 3.78M/s 4.8M/s 5.13M/s 5.12M/s 5.15M/s 5.14M/s
64KiB 4.27M/s 4.74M/s 5.08M/s 5.03M/s 5.08M/s 5.07M/s
32KiB 3.62M/s 4.29M/s 4.97M/s 4.96M/s 4.96M/s 4.95M/s
16KiB 3.01M/s 3.31M/s 3.74M/s 4.29M/s 4.3M/s 4.29M/s
</code></pre></div></div>
<p>There doesn’t appear to be any special FAT area in this Kingston card.</p>
<p>In summary, this card is not so hot. But then again, it was bundled with a
BeagleBone and so price was likely a much higher concern for the seller than
performance.</p>
Debootstrapping a chroot2014-02-14T00:00:00-05:00hhttp://www.bradfordembedded.com/2014/02/debootstrapping<p>Chroots are very useful for keeping software that does stupid things from
affecting your main system (like TI’s CCSv5) or for running old software safely
without needing a virtual machine, so long as you can do so with the existing
kernel. I use them on a regular basis to run CCSv5, software from Debian
Squeeze, and to try things out with low risk.</p>
<p>Quick exact steps for debootstrap and schroot on Debian in order to run oldstable
or similar:</p>
<p>Install debootstrap and schroot:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install debootstrap schroot
</code></pre></div></div>
<p>Make a directory for your chroot; I use <code class="language-plaintext highlighter-rouge">/opt</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo mkdir -vp /opt/chroot/squeeze
</code></pre></div></div>
<p>Debootstrap! Using the <code class="language-plaintext highlighter-rouge">buildd</code> variant to get usual software building tools:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo debootstrap --variant=buildd --arch=amd64 squeeze /opt/chroot/squeeze
</code></pre></div></div>
<p>Setup schroot:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cat >> /etc/schroot/schroot.conf << "EOF"
[squeeze]
description=Squeeze
type=directory
directory=/opt/chroot/squeeze
users=andrew
preserve-environment=true
EOF
</code></pre></div></div>
<p>Then you can schroot into your Squeeze system:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>schroot -c squeeze
</code></pre></div></div>
<p>If you’re using Debian’s normal $PS1, your shell prompt will show a description
of which schroot you’re in, like:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>(squeeze)andrew@boomboom:/tmp$
</code></pre></div></div>
<p>If your user is in the <code class="language-plaintext highlighter-rouge">sudo</code> group, and if you have sudo installed in your
schroot, you’ll be able to use sudo just like normal. To add extra packages to
the schroot system during the debootstrap operation, add <code class="language-plaintext highlighter-rouge">--include=sudo</code> or
similar prior to the “squeeze”. Alternatively, you can install software with
apt afterwards like normal.</p>
<p>Be aware, by preserving the environment, your <code class="language-plaintext highlighter-rouge">/home</code> directory will be your
actual home directory in the chroot. The rest of the root file system won’t be
related to your actual root file system, though. Just be careful!</p>
Small, Simple, First2014-01-13T00:00:00-05:00hhttp://www.bradfordembedded.com/2014/01/small-simple-first<p>I learned about Lean engineering at Xerox. It was pretty funny as there was
basically nowhere within engineering that you could actually apply the lessons
and hence it seemed like pointless instruction followed by pointless required
tasks to get a pointless certification.</p>
<p>But now I see exactly where and why Lean can work so well, especially for
physical products.</p>
<p>If you’re designing a product and your competition is existing solutions to the
problem you are attempting to solve, then Lean isn’t that helpful. In order to
be considered by your customers as a viable option you need to do all the things
your competition does and then either be less expensive or do even more. This
is quite difficult in most every industry.</p>
<p>But if you’re designing a product where there is no competition and the
prospective customers don’t even really realize there is a problem that could be
solved, then some options open up. You don’t have to be better than established
competitors as there aren’t any, you just have to be better than the
inefficient processes or products that you’re looking to displace.</p>
<p>For instance, if you’re Xerox/Haloid and you’re coming out with the first plain
paper copy machine, you don’t have to be amazing and fast and inexpensive, there
literally is nothing like your product in the market. You’re placing a machine
against humans retyping, the machine is going to win hands down even if it
catches on fire every once in a while and can only print a few pages per minute.
You can literally make the smallest, simplest device that beats the existing
lack of competition and you’ve basically made a license to print money.</p>
<p>These days, at Xerox, it’s very hard to apply Lean within engineering. The
competition is well established both internally and externally, and if your new
printer can’t do everything every printer in its product category has ever done
before, no one is going to buy it. Implementing all these
now required features takes a lot of time and effort just to get to a product
that has nothing special inside. In the 1950s this wasn’t true but since about
1995 it has been. The lesson here is if you want to apply Lean, doing so within
this kind of industry is very hard, so find somewhere else to apply it! Find
that 1950s Xerox situation where no one had ever heard of a plain paper
duplicator and show those same people who’ve never heard of your product concept
why there’s huge value in it for them.</p>
<p>Now, when you’ve found that group of people who’ve never realized there’s this
new technology that could make their lives much easier or better, you can take
advantage of it. You don’t need to have 1000 features, you just need to have 1
feature that’s better than not having your product. So don’t spend time making
the other 999!</p>
<p>Likely your customer doesn’t know which of the 999 other features they want or
would even be interested in using. So just because a marketing person says,
“Look! There’s 1000 features here, you must build all of them!” doesn’t mean you
have to build more than 1! The Xerox 914 copier sucked as viewed from every
copy machine built after it, but it was infinitely better than everything that
came before it.</p>
<p>Just make sure that once you’ve introduced that product with 1 feature that you
engage your customers to ensure the next 9 features of the 999 possible keep
those customers coming back to you for more. Then build on it, you’ll have
inertia and all your competition will be playing catch-up, at least at the
beginning.</p>
<p>This is how I see the MVP (minimum viable product) working for physical products
(ie: NOT JUST WEBSITES or software). Your MVP is a physical product that sets a
direction for your product line, initially being one hit feature but of which
there are further follow-on products covering more and more features over time.
You don’t have to upgrade the existing product in the field to be Lean and have
an MVP, but you do have to iterate and not be afraid to put out that first
product that just has 1 killer feature and nothing else.</p>
<p>Make it small, make it simple, make that first.</p>
Yubikey NEO Smart Card in Debian2013-12-10T00:00:00-05:00hhttp://www.bradfordembedded.com/2013/12/yubikey-smartcard<p>I’ve owned a Yubikey NEO for a while now and I use it every day, both as a PGP
smart card and as just a Yubikey for LastPass.</p>
<p>To get the smart card functionality, you’ll need to (just one time) <a href="https://www.yubico.com/2012/12/yubikey-neo-openpgp/">enable
it</a>. But the not so well documented next thing you’ll want to do is to use
your PGP smart card as your SSH key.</p>
<p><em>Here’s some exact steps!</em></p>
<p>On Debian, you’ll need to install the “gpgsm” and “gnupg-agent” packages. Don’t
install “monkeysphere”.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install gpgsm gnupg-agent
</code></pre></div></div>
<p>Configure gpg to use the agent:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo "use-agent" >> ~/.gnupg/gpg.conf
</code></pre></div></div>
<p>Enable ssh support for gnupg-agent:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
</code></pre></div></div>
<p>Use the Yubikey udev rules, installed to /etc/udev/rules.d/70-yubikey.rules
(provided by the <a href="https://github.com/Yubico/yubikey-personalization/blob/master/70-yubikey.rules">Yubikey personalization</a> sources):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Udev rules for letting the console user access the Yubikey USB
# device node, needed for challenge/response to work correctly.
ACTION=="add|change", SUBSYSTEM=="usb", \
ATTRS{idVendor}=="1050", ATTRS{idProduct}=="0010|0110|0111", \
TEST=="/var/run/ConsoleKit/database", \
RUN+="udev-acl --action=$env{ACTION} --device=$env{DEVNAME}"
</code></pre></div></div>
<p>Likely you’ll want to log out and back in again to have the agent start with the
proper configuration. Then insert your Yubikey NEO (or other PGP smart card).</p>
<p>Then you can import your identity from the smart card (assuming you have already
setup your smart card) with:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>gpg --card-edit
fetch
quit
</code></pre></div></div>
<p>If that’s failing, make sure you’ve generated all your keys, uploaded the
public portions to a key server, and then updated your smart card so it knows
where to find the public keys on the key server, which should show you a line
like:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>URL of public key : http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xF3B5C4876114BEA6
</code></pre></div></div>
<p>Check the status of the card once more to make sure everything is in place and
to ensure gpg is good to go:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>gpg --card-status
</code></pre></div></div>
<p>See that ssh-add will use the card (and my result):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh-add -l
2048 4a:9b:8b:03:f9:36:91:93:d0:2e:02:f4:3b:4f:23:8c cardno:000000000001 (RSA)
</code></pre></div></div>
<p>Now ssh into something and you’ll get prompted for your PIN and the PGP card
will do your SSH auth for you (look Ma, no more private keys on disk!):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ssh -v user@example.com
</code></pre></div></div>
<p>If you don’t yet have your public SSH key, get it by finding the identifier for
your authentication key and using the last 4 bytes as the ID (ie: auth key
identifier of D69A905F):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>gpgkey2ssh D69A905F
</code></pre></div></div>
<p>You can upload the output of that to Github or your server you’ll SSH to.</p>
<p><strong>EDIT to add on 20160722:</strong></p>
<p>For modern Debian systems, you’ll want to install gpgsm, gnupg-agent,
pinentry-gtk2, yubikey-personalization, and scdaemon packages. You won’t need
to setup the udev rules yourself, they come with yubikey-personalization.</p>
<p>If you need to use an alternative port for SSH (such as if you are behind a
firewall), you can make a <code class="language-plaintext highlighter-rouge">~/.ssh/config</code> file containing something like this to
SSH to said host on an alternate port:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Host github.com
Hostname ssh.github.com
Port 443
</code></pre></div></div>
<p><strong>EDIT to add on 20190110:</strong></p>
<p>On what’s now a modern Debian system, you’ll also need to install the
dbus-user-session package to get proper prompting for pinentry through the
agents.</p>
CLFS Embedded musl2013-10-18T00:00:00-04:00hhttp://www.bradfordembedded.com/2013/10/clfs-musl<p>This week the CLFS embedded book switched from uClibc to musl for its C library.</p>
<p>Some people really like musl, as its license is MIT rather than GPL. I think
that’s nice and all, but license compliance isn’t that big of a deal if you’re
already using Linux. You’re already jumping through hoops; adding one more
package that probably won’t be customized in use is almost zero additional work.</p>
<p>The real reason for the switch is I hate manually configuring things. uClibc is
great, so long as you don’t mind wading through a huge configuration. The
defconfig isn’t sane for most architectures I care about. musl doesn’t have any
configuration, unless you really really want to. This is very important to me
as trying to explain how to create a sane uClibc configuration for more than one
architecture, or trying to make a bunch of patches supplying uClibc
configurations, is quite painful.</p>
<p>musl requires an MMU. uClibc doesn’t. For my own sanity, this is generally a
good thing.</p>
<p>As another related note, I’ve also made a bunch of other updates to the embedded
CLFS book recently and it’s getting closer to a set of exact steps that actually
works! The next big change will be cleaning up the kernel and bootloader
instructions.</p>
Github Workflow2013-10-02T00:00:00-04:00hhttp://www.bradfordembedded.com/2013/10/github-workflow<p>Most people who use source control still seem to work on things by themselves,
both at companies and in little open source projects. This is sad.</p>
<p>Some of the projects I work on for my job are just me coding and no one else.
I’m much worse at my job working like this than when I work directly with someone
else writing code and doing design. I know this because over the past 2 years
I’ve worked alone on software projects, worked with another engineer on a
firmware project, and sent patches upstream to u-boot. Here’s why:</p>
<p>When it’s just me writing code, no one stops me from committing to master things
that are broken, half baked, or that have errors in them. It’s just me and my
flaws show through (I am flawed). This is OK for little things or one-offs but
it’s not a good idea for anything critical like a product or an open source
project that others will use.</p>
<p>When it’s me and one or more other people all working on a code base, code
review can be really easy and simple. For instance, in the office we
implemented a required code review process within Github using their pull
request feature. No one is allowed to commit their own code to master or any
other branch in the company account for the project. Each engineer forks the
main repo to their own account (don’t worry, private repos are still private
when forked and the company still has full control over who can access them) and
then sends pull requests to the master on the main company repo. Yes, this is
exactly what Github wants you to do! This is awesome as you get code review for
free! But even more importantly, you’re less likely to ask someone to review
code that might be shit, so I found myself spending more time to make sure I did
things right and in ways that are maintainable. This made a huge impact on our
bug count for bugs still present 24 hours or more after the initial commit to
master on the company repo: we’ve had only 1 so far, and that was due to the
data sheet making difficult-to-understand recommendations on how to step the
core voltage (no, I won’t go into any more detail than that, sorry).</p>
<p>But the best type of software development process is many eyes and a crowd of
people who will have to maintain your code once you’re gone, like u-boot or
Linux. The maintainers know they will have to fix your bugs and that you
probably won’t still be around when those bugs are found. Hence, the
maintainers are going to be sticklers for making sure your code is clean,
readable, and maintainable before they accept it. This is one step better than
the previous concept as the deadlines within any one company or organization
have no effect since the big group is spread across many companies and they want
only one thing: to make the codebase work in the long term.</p>
<p>Now, every time I’m working on something by myself I get annoyed. I want that
other person / people to drive me to be better, to not let me cheat and commit
shit. It’s hard to impose on myself. I think that’s normal.</p>
Internet of Things!2013-07-01T00:00:00-04:00hhttp://www.bradfordembedded.com/2013/07/internet-of-things<p>The Internet of Things gets in the news these days, which is nice. But we’re
still a ways away from where it gets really interesting, mostly due to cost.</p>
<p>Sure, you can buy a refrigerator that connects to the Internet (don’t ask why
you would want such a thing, no one knows yet, but that’s the beauty). You can
get a telephone that does so, too. Heck, I’m sure someone’s even made a lock
for your front door that lets people in via the web… <a href="https://lockitron.com/preorder">Oh, wait…</a>.</p>
<p>But most of these “things” that connect to the Internet today use Wi-Fi and
Wi-Fi is a horribly expensive way to connect to a network for “things.”</p>
<p>Wi-Fi implies quite powerful hardware, both from a true power consumption point
of view and from a software stack point of view. Even things like Zigbee or
802.15.4/6LoWPAN require quite decent hardware specs (IPv6 has been said to
require at least 16 kB of RAM to function well, afaik). This means not only are
rather large batteries required, often needing recharging, but also that the
minimal hardware to attach to such a network costs at least $3.</p>
<p>Antenna, PCB, matching network, transceiver, micro, and battery. If you can do
that, and get decent battery life, for less than $3 in volume, you’re doing
quite well. I’m not saying it’s not possible, just very very difficult with
today’s parts.</p>
<p>Hence, I think the really interesting Internet of Things applications happen
when the cost gets below $1 to manufacture. We’re probably not that far away
from it, but I don’t think Wi-Fi or 6LoWPAN as specified today will be how we get
there. The really interesting uses of an Internet of Things come when everything
has access to the net, and that won’t happen with commodity items till the extra
cost of an Internet-enabled device is almost non-existent compared to the
non-Internet-enabled one.</p>
<p>Take, for instance, my toaster. It’s not Internet connected, and really all I
want it to do is toast things. But if I’m going to spend $30 on a new
toaster, a 10% increase to $33 (see above for costs) seems silly to me; what
exactly does my toaster need the net for? But if the increase in cost is closer
to $0.50, then why not? Maybe I can do some neat little project with it or
something.</p>
<p>Toasters, I think, are a good yardstick of cost for this.</p>
<p>But to get the net into all these inexpensive things we need a different
physical layer than is available today. I think it’ll look a lot more like
passive RFID than an active radio. The need for a power source is very limiting
for many things but passive RFID doesn’t need one. Passive RFID often is
available with “battery assist” in order to increase range, which takes care of
RFID’s often-cited limited range (think 30+ meters with battery assist).</p>
<p>Passive RFID is cheap; lots of tags are available for < $1. Often
these tags have no microprocessor but low power micro designs are getting to the
point where they can consume tiny amounts of power and do useful things, which
will be a requirement in order to harvest RF and run. Some of the new memory
technologies, like FRAM, might help here too in order to boot fast and persist
RAM.</p>
<p>Obviously IPv6 will be the middle layer for this new tech, hopefully not
consuming 16 kB of RAM, and we’ll need cheap (think < $50) access points like
our Wi-Fi has today, for any place where these devices want to get online. But
the real hard part is standardizing this stuff and finding a way to get decent
data rates and functionality for < $1 per thing.</p>
<p>I give it 2 years till we see something like this start to emerge. 6LoWPAN and
Zigbee and 802.15.4 are leading the way today but they will soon be surpassed by
a new generation of very low cost batteryless (and battery assisted) passive
“things” that are net connected.</p>
<p>It’ll be fun to watch RFID make this transformation.</p>
Base16 Speed2013-06-04T00:00:00-04:00hhttp://www.bradfordembedded.com/2013/06/base16<p>Base16 conversion from printable characters to binary data, and back again, is
important for one project I work on. Doing conversion as fast as possible is a
good thing.</p>
<p>I had some code that did the binary to printable characters conversion well
enough, using <code class="language-plaintext highlighter-rouge">sprintf()</code>, but I thought it wasn’t optimal, so I decided to find
out.</p>
<p>I wrote some quick and dirty code, in my <a href="https://github.com/bradfa/base16test">github</a>, to test the
performance difference between using <code class="language-plaintext highlighter-rouge">sprintf()</code> and using a lookup table.
Turns out, the LUT is about 10 times faster on my Core i7 and about 14 times
faster on a BeagleBone. That’s quite an improvement!</p>
<p>I’m sure there’s an even faster way to do the other things my code does less
than optimally; now I’m on the hunt!</p>
<p>BeagleBone run:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@localhost:~# ./test test.file
Read 14659584 bytes
sprintf bin to hex took 20850000 clocks
bintohlut bin to hex took 1470000 clocks
htob hex to bin took 10860000 clocks
</code></pre></div></div>
<p>Intel Core i7 run:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>andrew@brick:~/git/base16test$ ./test /dev/urandom
Read 33554432 bytes
sprintf bin to hex took 2750000 clocks
bintohlut bin to hex took 260000 clocks
htob hex to bin took 1080000 clocks
</code></pre></div></div>
What I Want2013-04-17T00:00:00-04:00hhttp://www.bradfordembedded.com/2013/04/what-i-want<p>I still don’t really know what I want in terms of a job, a company, or
coworkers. But I’m starting to get some good ideas (all of which are apt to
change).</p>
<p>I like working on a team. Not a team where there’s 10 people and each one of
them does some unique thing, but a team where it’s me and one or more others
that all work together solving the same problem. I’ve recently started working
closely with another engineer at work on some microcontroller code and it’s
awesome to be able to troubleshoot, plan, implement, and code, together. We
both have some similar skills, yet we both also have different skills. Putting
us together makes us both more productive and produces better results.</p>
<p>I like the concept of a results driven method of evaluating employees. I’ve
never worked somewhere where evaluations were based on results alone,
but basing more than a tiny fraction of an evaluation on “butt in seat time” is
not conducive to getting high quality results. It’s pretty good at building
resentment, though.</p>
<p>I want to work on embedded or server side products. I like *nix. These all fit
together nicely to solve problems.</p>
<p>I want to be less risk averse, less negative, and less pessimistic. I feel I’m
risk averse, negative, and pessimistic because I’d rather reduce the risk of
failure than reduce other outcomes (cost, schedule, etc.). Often, I’m the one
that’s going to say, “Doing that is a bad idea because of risk X and risk X
means possibly encountering problem Y.” If problem Y is quite bad, it’s going
to convince me that risk X isn’t worth it. The downside of how I do this today
is that I often don’t have good numbers to justify why I’m being risk averse,
negative, and pessimistic. So really, either I want to be less of these things
or simply learn how to come up with the numbers.</p>
<p>I want to work in an office with a door. I like being social, just not all the
time, like when I’m trying to get things done. In the environments I’ve
worked in so far, having headphones on doesn’t mean anything. Closing a door does.
I’ve never worked somewhere with a door that closes, but right now it’s after 9
pm and everyone in the house is asleep. I’m getting writing done uninterrupted
(plus I just closed a github ticket, pushed a branch, and did some email).</p>
<p>I like older software that’s well documented, even if it’s slow and lacks
features. For example, I’m hesitant about systemd and I enjoy running Debian
stable. Yes, there are things I can’t do with my older software, but the things I
can do have been around long enough that when I run into issues I can find
solutions quite quickly and get things done.</p>
<p>I don’t want to be the smartest person in the room or on the team. Give me a
topic related to any job I’ve done and I can name you someone who knows more
about that topic than I do. I want to work with those people, closely and on a
regular basis. I want to suck up wisdom from them. I also don’t want to be the
dumbest person in the room or on the team. Now, sometimes, I might be the
smartest and sometimes I might be the dumbest, but I don’t want to be in either
camp too often. I want a normal distribution for this, if that makes any sense.</p>
<p>I want to learn Python and get much better at programming in C (I’m maybe 5000
hours into the 10000 needed for mastery of C, and maybe 10 hours into
Python). I’d bet 90% of the problems out there can be solved quite well with
just Python or C. I’ve learned a little Ruby and I fail to see how it’s
significantly different from Python. I’ve done some Java, C#, and C++ but I
fail to see how any are significantly better than a well evaluated choice
between Python or C. Sure, Python and C both have failings, but often those
failings are small in comparison to the pros / successes.</p>
<p>I want to be stressed less. My job is my job; it needs to not impact my family
life.</p>
<p>I’m not far from my goals; I’m continually moving closer to achieving them. I
am, however, not fully there on any of them yet. That’s why I want them.</p>
Disappointment2013-03-20T00:00:00-04:00hhttp://www.bradfordembedded.com/2013/03/disappointment<p>When something disappointing happens, there are two ways to deal with it that I’ve
seen. One succeeds in getting people motivated to solve the problem or keep it
from happening again, while the other does the exact opposite.</p>
<p>The first involves not placing blame, taking ownership of solving the problem,
and rallying. The second involves expressing disappointment, directly or
indirectly placing blame, and not taking ownership. If you’re a leader and
you’re doing the first method, good job!</p>
<p>If you’re a leader and you’re doing the second method, no one’s going to tell
you. You’re not going to learn that you’re doing it wrong. You’re just going
to alienate your people and build dissent.</p>
<p>Think about this in the frame of a family situation. If something disappointing
happens in my family and I do the second method, my wife and daughter won’t be
as enthusiastic in making sure the situation doesn’t happen again. They’re just
going to be pissed off at me for placing blame and saying, “It’s not my
problem.” If I use the first method, regardless of who’s to blame (the person
knows who they are; placing blame doesn’t help), we all rally around finding a
solution together and build our family stronger.</p>
<p>The second method is easy. The first is hard. There’s a reason for that: the
second method is crap. Things in life that are easy are generally crap. Things
in life that take some work generally are the best ones, like raising kids,
having relationships, building a career, and helping others.</p>
Close to the Metal2012-12-21T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/12/close-to-the-metal<p>I want to be close to the metal.</p>
<p>Over the past two years and a few months, I’ve really gotten quite into embedded
Linux and related systems. I got hired by a local company (my current employer)
to be their embedded Linux guy and help make a product from the ground up.
It’s been a lot of fun, I’ve learned a lot, and done tons of new things.</p>
<p>In the course of these past 2 years, I’ve grown fond of being close to the
metal. I got my first patches into u-boot, developed a Linux SPI protocol
driver for our product, gotten the TPS65217 power button working on the
BeagleBone in Linux, and worked very closely with our hardware engineer to
define and debug the system I’m developing for.</p>
<p>I’ve done other things, too (like starting to learn Ruby on Rails, then dropping
that for web services using FastCGI in C; there’s an orders-of-magnitude difference
in speed between those two), but I’ve not enjoyed the “other things” as much as
I’ve enjoyed working on u-boot, specifying and bringing up hardware, starting to
dig into writing Linux drivers, and being a part of the BeagleBoard.org
community.</p>
<p>Sure, making products is great. But I’ve found I really enjoy simply doing the
low level engineering part of the job, taking some chips and making them work
together to allow for business goals to be achieved on top of them. The actual
implementation of the business goals (web services, etc) isn’t as fun for me.</p>
<p>I’d love to have a job where I get to be close to the metal more. It’s possible
that my current job can transform into that, I’m hoping I can make that happen.
I’m not as concerned about what kind of close to the metal work I get to do, but
if it was open source, that’d be a big plus.</p>
<p>If I could work on u-boot, Barebox, Linux, or dev kits as my main role, that’d
be really cool. I want the projects I work on to be open source as there’s a
huge number of very smart people out there who will critique my code and point
me in the right direction. Learning from them, either by getting feedback or
just watching them work, is quite fun.</p>
<p>Overall, I’m hugely thankful for the opportunity to dive into embedded the way I
have. My company, BeagleBoard.org, the Cross Linux From Scratch project,
u-boot, Linux, and Debian have all been awesome places for me to learn and grow.</p>
<p>I’m not sure how to end this. So, thanks, if you’ve worked with me, helped me,
or simply made something that I’ve used to learn with.</p>
<p>I hope everyone has a great holiday. See you next year!</p>
Which *nix? (or why free/open)2012-11-25T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/11/which-nix<p>Yesterday I installed NetBSD in a virtual machine on my laptop. I was
thinking to myself, “If I could specialize in some *nix that isn’t as
popular maybe that would give me an edge up.”</p>
<p>Well, no. That’s just a stupid thought.</p>
<p>Useful ways to get an edge up, or somehow stand out as an engineer don’t
involve being a specialist in something that basically no one is using.
Sorry BSD. Usually there’s a very good and valid reason why very few people
are using or doing something.</p>
<p>Yes, tons of stuff on the web probably does run on BSD. Yes, NetBSD,
FreeBSD, OpenBSD, and every other Unix-like system out there is probably
holding together huge swaths of the Internet so I can type junk like this.</p>
<p>But upon thinking this over, I’ve realized that what matters is not what you
run your application on; it’s what your application does… if you want to
be an implementer and not a tool builder.</p>
<p>At this point, I’m not a tool builder. I’m a wanna-be tool builder. I’m
trying to get good enough such that I could contribute to tools that I and
others use. But I have a long way to go. However, I am an implementer.
I’m making things out of pieces and parts such that useful business goals
can be accomplished. I’m sticking together tools that others have built in
such a way that really hasn’t been done before. I’m helping create business
value.</p>
<p>What matters to me, and to probably 90% of other “software engineers” out
there, isn’t what’s underneath, as long as what’s underneath can allow me to
piece it together with other tools. What matters is that I can pick a
decent set of tools and utilities, stick them together, write a little bit
of glue, and make something for a customer that solves problems.</p>
<p>Whether what’s underneath is Unix, Linux, Mac, or Windows (eww) doesn’t
really matter. What matters is that the tools I need are available and can
be made to work together on a platform to accomplish something useful for a
customer.</p>
<p>Specializing in Unix, Linux, Mac, or Windows doesn’t completely help me
achieve that goal. Knowing enough about each of those does.</p>
<p>What I should do is become familiar with each of the tool sets out there but
not be an expert. Not unless being an expert is what’s creating value for
others. At this point, me becoming an expert isn’t going to create as much
value for my company or our customers as me being a really good generalist,
so I should work more on that.</p>
<p>No, I’m not the best C coder that’s ever lived. I’m far from it. But
becoming the best C coder shouldn’t be my current goal. My current goal is
to be able to see a technical problem and say, “Yes, this tool would be a
good choice” or “No, that tool would be a bad choice” and then back up
either of those statements.</p>
<p>Here’s where my dislike for Windows and the Mac start to creep in. With
free and open software I can see the code, I can fix the bugs, and I can add
the features I (and my company) need. I can contribute those back to the
community and get code reviews, and eventually, get the community to
maintain my code for me in the same way I will do for others in the
community. On any of the BSDs, most all Linux distributions, and a growing
number of microcontroller platforms, I can do this. They’re open.</p>
<p><em>Windows isn’t.</em> The <em>Mac isn’t.</em> Therefore, spending my time learning
those tools and platforms is a waste because I can’t really understand them,
I can’t fix bugs, and I can’t add features. <strong>How can I be a good
implementer if I can’t fix / improve / give back to the tools I use?</strong></p>
<p>I can’t.</p>
<p>So what matters isn’t that I pick this *nix or that *nix, be it BSD, Unix,
Linux or what have you. What matters is that I’m picking up free and open
software, learning the tools, and giving back when I can.</p>
<p>If you’re a software developer and you’re focused on a closed, locked down,
un-free platform or tool set, <strong>you’re doing it wrong!</strong></p>
Where to Write?2012-11-09T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/11/where-to-write<p>I haven’t written in a while. There are a few reasons for that which I’d like
to share.</p>
<p>I bought a paper journal. I write in it almost every weekday. It’s a
great outlet for me and allows me to write different things than I would
write on my blog or other platforms.</p>
<p>I participate more on Google+ now. As much as I feel Google is somewhat
evil, people I enjoy interacting with are on there talking about stuff I’m
interested in. I check Google+ every day. I’m on Facebook once a week, at
best.</p>
<p>I’m actually really into things at work. I’ve been submitting patches to
u-boot, hacking on an evil vendor kernel (hacking is being nice to the
quality of code that’s coming out of my fingers), and making things happen
for customers.</p>
<p>My family situation has changed since July. My wife went back to work in
late August after staying home with our daughter for a year after she was
born. Now we have a new morning routine that’s much more family oriented
which limits my morning time that I was dedicating to making things on the
web or starting blog posts.</p>
<p>Bitching on the web isn’t what most people want to read. A decent number
of the blog posts I’ve written over the last year include bitching. My
readership hasn’t grown.</p>
<p>Clearly, I’m doing it wrong. That needs to change. In July I wrote many
blog posts but my readership was basically flat compared to earlier months
where I wrote drastically less. I have a small group of readers who follow
via RSS or check in on a regular basis to read what I write, but a good
chunk (over 50% at last check) comes from Google searches. Most of those Google
searches end up directing people to my more technical posts. Clearly that’s
where the value is.</p>
<p>When I got the job I have now, the company was impressed with the fact that
I blogged about technical topics. That excited them. We recently
interviewed a guy who started his own blog about 6 months ago; that’s
impressive to me, too. But both blogs are impressive for their
technical content, not for the bitching.</p>
<p>Don’t be surprised if I blog at this reduced rate in the future. If I don’t
have something technical to say, I’m not going to write it.</p>
Hire for Talent, Not Location2012-09-17T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/09/hire-for-talent-not-location<p>When hiring, pick the candidate based on talent, capabilities, and
potential, not based on where they physically are located.</p>
<p>Look at the Linux kernel, possibly one of the most important pieces of
engineering over the past two decades. Linus works from home coordinating
the entire project and none of the people he works with on a daily basis
live with him. The project leverages the Internet to get things done, at an
amazingly fast pace and with very high quality, every release. Thousands of
engineers from all over the world work together to produce Linux.</p>
<p>That’s how your company should work, too.</p>
<p>But, you say, “We build hardware, not software!”</p>
<p>That’s nice. What’s your point?</p>
<p>Engineers working remotely or locally can get the same work done these days.
But engineers working remotely may be way better or smarter or more capable
or cheaper than the ones you can hire locally.</p>
<p>Overnight shipping is reasonably priced, generally under $100 to ship most
things anywhere in the USA, and under $200 generally to ship anywhere in the
world. Development tools are going down in price every year: you can now
get logic analyzers that operate at 24 MHz for about $200 and analyzers
operating at hundreds of MHz for under $500. General purpose oscilloscopes
run $1000, at most.</p>
<p>Even if you have to buy each engineer a scope and a logic analyzer, you’re
still looking at under $2k. Price in a computer and the other tools needed,
and you’re looking at about $5k per engineer. Figure you ship each person
something overnight every week, that’s another $5.2k per year. But the
buying of a scope and logic analyzer and computer doesn’t really matter if
the engineer is local or remote, they’re going to want that stuff anyway.
And the $5.2k you spend shipping them prototypes every week (if you’ve got a
lean hardware development process, which you probably don’t, but that’s
<a href="/2012/03/iterate-hardware-like-software/">another post</a>) is probably cheaper than paying for 100 sq feet of office
space for a year.</p>
<p>So why do most companies hire locally and never consider any remote
engineers?</p>
<p>My impression is that engineering managers want to be kings. They want to
sit in their little castles and oversee the minions doing the work inside
the castle walls. They want to perch up in the tower and yell down at the
minions, telling them what to do and watching them scurry.</p>
<p>So, basically the same reasons open office plans and cubicles exist,
managers also feel hiring locally is better than hiring the best.</p>
<p>Don’t do that. Engineers don’t want to work for kings, they want to work
for leaders.</p>
Next Computer2012-07-25T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/next-computer<p>I wrote about how <a href="/2012/06/laptops-arent-desktops">laptops aren’t desktops</a> last month. I was upset that I
use a laptop that’s not very portable but not as powerful as a desktop. It’s
the worst of both worlds. I still feel that way, but I’m changing my stance on
buying a desktop.</p>
<p>My next computing system won’t involve a desktop, per se. It’ll just be a
laptop, and a rather portable one, at that. It’ll also probably be a Mac.
Here’s why:</p>
<p>The MacBook Air 13 inch can be specified rather nicely to accomplish 80% of the
things I want to do, for a reasonable price. The MacBook Pro with Retina
display is a bit more money, can accomplish 85% of the things I want to do, but
is still rather reasonably priced for the hardware. Either way, I’d get the 27
inch Thunderbolt display to use as a monitor when I’m in “the office.”</p>
<p>The only non-Apple laptop I’d consider buying, really, is a Lenovo Thinkpad T
series 14 inch. And there, a well specified unit could probably accomplish 90%
of what I want to do, but I’d run Linux on it and I’d be frustrated more often.
The extra 5 to 10% improvement of doing the things I want to do may not be worth
the frustrations.</p>
<p>But here’s my big change, I wouldn’t buy a desktop. Not today.</p>
<p>In order to get the type of desktop that I want, with the processing performance
I want (because, let’s be honest, USB really is good enough for most peripheral
things now, and I don’t really really need a second Gigabit Ethernet interface; a
slow second Ethernet interface would be good enough), I’d need to spend $4 to
$5k. I’m not willing to do that. Not today.</p>
<p>I’d rather invest that $4 to $5k into learning about Amazon EC2 and paying for
time on a <a href="https://aws.amazon.com/ec2/instance-types/">Quadruple Extra Large High-I/O instance</a>. It’s only $3.10 per
hour for an 8 core (plus HT), 60 GB of RAM, two 1 TB solid state disks, and 10
Gigabit Ethernet machine. Damn!</p>
<p>I’m only going to need really high performance for a few tasks, maybe 2 hours
per day, on average. That means my costs, per day, would only be something like
$7. Even with 250 working days per year, my yearly cost would come out to under
$2k. And, I bet, a decent amount of the things I’d want to do could run almost
as fast on a slightly lower priced EC2 instance. And, Amazon probably will come
out with new instance types that are either higher performance or lower priced
that I could use, too.</p>
<p>It’s hard to connect peripherals to an EC2 instance, but that’d probably be
OK. I usually don’t need both random peripherals and high compute power at the
same time.</p>
<p>Nothing stops me from trying this idea out now. Even with a huge laptop, I
could still leverage EC2 for some tasks and start my learning. I think that’s a
good idea.</p>
Job Applications2012-07-24T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/job-applications<p>Fred Wilson wrote this morning about <a href="http://www.avc.com/a_vc/2012/07/one-click-apply-.html">One Click Apply</a> where some online job
application sites now have “one click” apply buttons. Fred thinks this is a
good thing: it means more people apply for jobs because applying is even easier
when all you have to push is one button.</p>
<p>I agree that it’s easier. I don’t think it’s better.</p>
<p>More job applications means that the hiring company has more to wade through.
Which means more automated screenings. Which means more work to find the
diamond in the rough. I’m not sure that’s a good thing.</p>
<p>Why aren’t companies making it harder to apply? Or, why are companies even
allowing people to apply?</p>
<p>The really desirable (in Internet terms) companies to work for, like <a href="https://github.com/">GitHub</a>
and <a href="https://37signals.com/">37signals</a>, could make applying for a job really hard and they’d
probably still get inundated with applications for any open positions. They
could make a requirement of applying be that you have to send them, via snail
mail, a copy of your resume on a very particular type of paper, and they’d still
get inundated with applications complying exactly with their request.
But, they’d only get inundated with applications from people who
really want to work there.</p>
<p>37signals already mostly follows this type of process. It seems that most people who
get hired by them make amazing web sites specifically to act as their resume for
37signals. That’s really cool. But it says a lot about the company, having so
many people wanting so badly to work for them. They’re doing something right.</p>
<p>So instead of making it easier to apply to companies that are “doing it wrong,”
why not put effort behind making the company a desirable place to work? Then
you don’t have to have a job application process that gets “22% more applicants”
in order to feel successful. Then you can get fewer applications from way more
interested people, and you’ll probably hire a better person, too.</p>
<p>More is not better. Better is better. Trying to get more won’t improve things
in the hiring process.</p>
<p>Better is hard. More is easy. You get out what you put in.</p>
You (Probably) Don't Want an SLR2012-07-23T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/you-dont-want-an-slr<p>You probably don’t want to buy an <a href="https://en.wikipedia.org/wiki/Single-lens_reflex_camera">SLR</a> camera.
What you should be buying is a pocketable high quality point-and-shoot camera.</p>
<p>Here’s why:</p>
<p><strong>SLRs are big and bulky.</strong></p>
<p>You can’t take it everywhere with you and so you’ll
take pictures less often with an SLR than you would with a pocketable
point-and-shoot. There are situations, especially if you have kids or are
travelling with other luggage, where you just can’t take an SLR but can take a
pocketable point-and-shoot. Buy a high quality point-and-shoot and
you’ll take pictures in more locations of more things.</p>
<p><strong>SLRs are expensive.</strong></p>
<p>The entry level SLR cameras start around $500 for a kit with a crap lens.
If you’re really getting the most out of your SLR, you’ll want better glass
(lenses), especially if you shoot low light, since most kit lenses are very
very slow (f-stop wise). Good lenses cost good money, prepare to spend more on
lenses than on your camera (also, buy used, lenses don’t get much better over
time and 5 year old tech is pretty much the same as today’s tech except for the
image stabilization).</p>
<p>But if you want to really shoot like a pro, don’t buy the entry level SLR, you
should spend more and get an upper level consumer or low level pro model.
That’ll set you back even more, expect to spend a grand for really good quality
stuff.</p>
<p>Then, you’ll need a bag, cause you can’t risk damaging your expensive camera.
And with that spare lens you bought, you now have more than one thing to carry,
and it’s kind of big.</p>
<p>And now that you have a bag and a spare lens, you have to consciously pack it
along (see point 1). You’re not going to want to take it with you everywhere
you go (I’m not going to take pictures at the fancy dinner restaurant, right? I
don’t <em>need</em> to take the SLR this evening). Now, you’re going to miss shots.</p>
<p><strong>High-end point-and-shoots take really good pictures</strong></p>
<p>Until you’re limited by the camera itself, which requires photography skills
most people don’t have and don’t care to develop, an SLR won’t let you take
better photos than a high quality point-and-shoot will. And
the high end point-and-shoots have the same sensors and (sometimes) faster
glass than the low end SLRs. Many point-and-shoots have full (or almost full)
manual modes, do macro down to 1 inch focus, and can run > 1 minute exposures.
Some even have hot-shoes for external flashes. Many now come with f2 or faster
lenses (that’s really good!).</p>
<p>You can do a hell of a lot with a high quality point-and-shoot. And it’ll fit
in your pocket. So, you’ll take it to that fancy restaurant, and you’ll take
pictures there, too.</p>
<p><strong>The best camera for the job is the one you have with you!</strong></p>
<p>The iPhone has a good camera, for a phone. But, even $200 point-and-shoots will put
it to shame.</p>
<p>So, if you’re out, and you don’t have your SLR, use your iPhone. But the low
light performance will suck, and you probably won’t be able to get good 8x10
inch prints without noticeable grain.</p>
<p>If you had your pocketable point-and-shoot, you’d get good pictures.</p>
<p><strong>Conclusion</strong></p>
<p>It really all comes down to money and taking more pictures. Taking more
pictures, or having the opportunity to, means having a camera with you more
often.</p>
<p>Buy the <a href="http://www.amazon.com/gp/product/B005MTME3U/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B005MTME3U&linkCode=as2&tag=bradford07-20">$350 point-and-shoot</a>. Put it in your pocket when you go places. Take
pictures. Job done.</p>
<p>Or, spend > $1k on an SLR, buy a bag, and lug it around. I hear those fanny
pack bags are nice…</p>
<p>Want to read more about the Canon S100 point-and-shoot I like? <a href="http://www.dpreview.com/products/canon/compacts/canon_s100">dpreview</a> has good
things to say.</p>
<p><strong>Addendum</strong></p>
<p>Of course, if you’re a photo journalist, taking a photography course, or are
really into one or two specific types of photography (landscapes, night shots,
wildlife, sports, etc) an SLR probably is the right camera. But if you’re
doing these things, you probably already own a nice camera, a bunch of lenses,
and you’re heavily invested in going on trips with your SLR for the main
purpose of taking the pictures you like to take.</p>
<p>Most people aren’t like this.</p>
<p>I used to be (back when film was cool and Kodak made profits), but now I just
want to take pictures and not think about it… So do 95% of people out there.
And for those people (and quite a few of the “already own an SLR” people), a
really nice point-and-shoot will get the job done.</p>
Lean and Motivation2012-07-18T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/z-lean-and-motivation<p>I read <a href="http://www.amazon.com/gp/product/0307887898/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=0307887898&linkCode=as2&tag=bradford07-20">The Lean Startup</a> by Eric Ries. It was pretty good.</p>
<p>I thought his point about minimizing the time spent between learnings
about your business was very apt. In manufacturing, you want to minimize rework
of goods, or downtime of your line. In information work, you want to minimize
wasted effort, or situations where customers won’t be wanting to give you
money. It’s all the same, really. And it makes sense. Build the minimum
viable product (MVP) to see if you’re on the right track, spending as little
time as you can doing the building so that you can learn if you’re doing it
right. If not, change things and try again.</p>
<p>But one thing the book doesn’t really talk about is worker motivation.</p>
<p>At big companies, or at small ones acting like big companies, projects take a
long time to go from a concept to a product. Sometimes years. And during this
time, the workers on the project have varying motivation. At the beginning,
everyone’s excited! Something new! But if there are setbacks, or rumors, or
concerns about the business, that excitement wanes. As workers can’t see the
impact of the work they are doing (no customers, yet, since the product’s not
done, yet), it’s hard to get re-excited about the great work they’ve been
doing.</p>
<p>And I think this is a travesty. Workers who aren’t excited don’t work as hard.
Workers who don’t work as hard don’t produce results as good. But with
lean, those workers get to see the results of their work sooner. And with
lean, when a learning opportunity comes along and the product has to change,
they can all get excited again, the business is moving in the right direction,
towards the product that customers want.</p>
<p>When you slog away at something for a year, never seeing if what you’re making
is what the (or any) customer wants, you get demotivated. Lean can help!
Improve your worker motivation by adopting lean practices, by learning more
often what the customers really want to have. Do it by iterating, fast, and in
very small steps. Put out a product that has just 1 feature, see if customers
would pay for it. If not, go back to finding what 1 feature they will pay for,
and iterate on that. Quickly. The workers will be more engaged, they’ll see
the impact of the work they’re doing on the customer, and you’ll get a better
product, faster.</p>
<p>Now, that’s the best of both worlds.</p>
What Are You Hiding?2012-07-18T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/what-are-you-hiding<p>Are you the guy who has an answer to every question? Are you the guy who always
prefaces a piece of advice with, “When I was at…”? Are you the guy who will
get into a screaming match rather than look wrong?</p>
<p>Don’t be that guy. That guy’s hiding something. That guy’s afraid people will
find something out about him. The thing people might find out, it’s not really
that big of a deal, but that guy thinks it is.</p>
<p>Also, no one likes that guy. No one wants to work with that guy. And especially, no
one wants to <strong>work for</strong> that guy.</p>
<p>If you’re that guy, say “I don’t know.” Say “I’m wrong.” Stop telling war stories 10 times a day.
Just try it, if even only for 1 day. You’ll live. And people might like you
better. We know you’re reasonably smart, that’s why you’re here.
If you weren’t reasonably smart, you’d
have been fired or laid off by now. But we also know you’re not <strong>real smart</strong>,
because if you were, you wouldn’t still be here of your own volition.</p>
<p><strong><em>So, shut up and get back to work.</em></strong></p>
<p>Or, go work at a big company, there’s lots of <strong>that guy</strong>s there.</p>
Twitter Pro2012-07-17T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/twitter-pro<p>Seth Godin <a href="http://www.avc.com/a_vc/2012/07/in-defense-of-free.html#comment-588538899">commented on AVC</a> in response to <a href="http://www.avc.com/a_vc/2012/07/in-defense-of-free.html">In Defense of Free</a> mentioning
Twitter, saying that if they had a Twitter Pro account for $2 / month, it
could generate upwards of $240 million per year from 10 million subscribers.</p>
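<p>The arithmetic behind Seth’s figure is easy to check. Here’s a quick sketch; the subscriber count and price come straight from his comment, not from any Twitter data:</p>

```python
# Back-of-envelope check of the Twitter Pro figure quoted above.
# Both inputs are the numbers from Seth's comment, not real Twitter data.
subscribers = 10_000_000   # hypothetical Pro subscribers
price_per_month = 2        # dollars per month

annual_revenue = subscribers * price_per_month * 12
print(f"${annual_revenue:,} per year")  # $240,000,000 per year
```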
<p>That’s the idea behind the <a href="http://www.avc.com/a_vc/2006/03/the_freemium_bu.html">Freemium</a> business model. And it’s a really
good one. Especially for Twitter’s situation.</p>
<p>Make there be a Pro account, don’t show Pros any promoted content, allow any
and all API access, and give them other tools to curate their Twitter use.</p>
<p>I can’t see how this would be a bad thing. I’m not sure that 10 million users
would want to sign up for it, but I’d bet tens of thousands would. It may not
be the revenue leader for the company, but I think it would pay for itself.
And if $2 / month isn’t enough, have tiers with different prices where each
tier gets certain other abilities.</p>
<p>Then Twitter could cut off API access for everyone except “official” apps and
the web interface. Free users would be forced to experience Twitter the way
Twitter wants but those that pay could experience it any way they please.
Twitter can do this, they already have an established service, they’d still be
offering a free version, and those that whine and complain about the change
could make all their own misery go away for just a few bucks per month.</p>
<p>No downside.</p>
<p>I wonder how many people at Twitter had even considered this before Seth said
it…</p>
<p>It’d be just like LinkedIn and the fact that everybody in marketing, sales, and
recruiting pays for a more advanced LinkedIn account. The free service
specifically doesn’t have certain features and the paid accounts are reasonably
priced. Bam! Profit. (or at least revenues)</p>
LAN Parties2012-07-17T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/lan-parties<p>In high school, my friends and I held a few LAN parties. They were quite fun.
I haven’t been to one since, and I don’t intend to change that, but it’s fun to
reminisce and <a href="http://www.kungfugrippe.com/post/169873399/clackity-noise">make the clackity noise</a>.</p>
<p>I had an <a href="http://www.lowendmac.com/ppc/yikes-power-mac-g4-pci.html">Apple Yikes! G4</a>, the 400 MHz version that launched before they
changed the low end model to be 350 MHz. It had a decent (for the time) ATI
Rage 128 graphics card. But my friend John had a <a href="https://en.wikipedia.org/wiki/Voodoo3">Voodoo3</a> PCI card that I
was able to flash firmware on in order to make it work in my Mac. (I
subsequently deleted the backup of the original firmware, thus leaving me no
way to reflash it back to work on an x86 machine, and thus forcing John to give
me an ultimatum that I’d have to buy the card from him. Luckily I found
another friend who would, I didn’t have a spare $100 in high school!)</p>
<p>John and I bought 1000 feet of solid wire CAT5, a cheap crimper, and a bag full
of RJ-45 ends. We built Ethernet cables (for fun at first) for all our
friends. (And I eventually wired my parents’ house by running 100 feet of it
through the attic, via holes in the ceilings of my bedroom and out by the
family computer in the living room. Mom wasn’t impressed with my ceiling
holes…)</p>
<p>We had the LAN parties in a barn my friend Matt’s dad owned. Random Ethernet
switches and hubs strewn around, interspersed with outlet strips plugged into
outlet strips plugged into multiple hundred foot extension cords (kids: don’t
do that with your power cords!).</p>
<p>We’d play Unreal Tournament and Starcraft. It was awesome. (Although I have
no recollection of why we didn’t play Quake, maybe it wasn’t out for the Mac?)
We’d order pizza. We’d spend all day (from 8 in the morning till 8 at night)
in the barn just playing games (well, we’d wait quite a bit, too, for things to
load over the poorly wired network, mods for UT seemed to want to be on
everyone’s machine even if we weren’t playing a game that needed it).</p>
<p>Those were fun days. Playing with computers with friends.</p>
<p>Even though I’m not really excited by computer gaming now, I do miss playing
with computers with friends. That’s something I should do more often.
That’s as good a reason as any to go to a meetup or join a hackerspace.
Timely, as my plan is to head to the <a href="http://meetup.coworkingrochester.com/events/70270192/">Ruby Meetup</a> held at <a href="http://www.coworkingrochester.com/">Coworking
Rochester</a> next month.</p>
OUYA2012-07-12T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/ouya<p><a href="http://ouya.tv">OUYA</a> is a proposed gaming console being funded by a KickStarter campaign.
You can pledge $99 and your reward will be a console with one controller,
delivered in March 2013.</p>
<p>OUYA is to be powered by an ARM processor and will run Android. As of this morning,
almost $4 million has been pledged. It’s now the fastest KickStarter project
to pass $1 million in funding (measured in hours) and is on track to be the largest funded
project ever (if it’s not already). It sounds pretty cool.</p>
<p>I won’t be pledging. I think it’s going to fail.</p>
<p>It’s really hard to make a game console. Getting the hardware right on the
first try is not easy. With a gaming console, changing the hardware after
launch is almost impossible without causing incompatibilities (game designers
love to do unexpected tricks because they know the hardware is stable).
OUYA’s play mostly seems to be that they will offer fixed hardware, running
Android, and a stable SDK that’s really cheap (it’s included with every one).
GREAT! But it’s really hard to do this right.</p>
<p>Pricing the hardware appropriately to both make money and draw consumers is
very difficult. Nothing is stopping Microsoft from dropping the Xbox
down to $99 just before OUYA’s launch. Microsoft has a history of losing money
on Xbox hardware sales, they seem to have no hesitation about doing it in order
to gain market share. I don’t think they’d hesitate to do it again if OUYA
is a real threat. Which console is going to sell well at Wal-Mart? The Xbox
at $99 with hundreds of games and Kinect, or this new thing? It doesn’t even
matter what the features of the new thing are, the Xbox is established and
kids won’t be disappointed getting one. Until OUYA is established, it’s a
risky purchase for parents wanting to get their kid something, Xbox isn’t.</p>
<p>OUYA is going to miss the holiday season for 2012. I’m not a game industry
insider, but I bet over half of the game industry’s revenues come in the 4th
quarter. Missing that will hurt them. Launching in March is bad timing.
Launching in the first week of November would be awesome timing. They’ll
delay millions of dollars in revenues by a full year. If OUYA could pull
their schedule in by 5 months, I think they’d have a chance. If they don’t,
Microsoft, Nintendo and Sony are going to eat their lunch.</p>
<p>Selling an SDK with the console price is stupid. Sell the console. Also sell
the SDK, but price it reasonably. Apple gives away the SDK for iOS, but you
have to pony up $100 in order to sell in their store. Fair. If OUYA doesn’t
do this same type of agreement, they’re leaving $100+ per game developer on the
table. $100 is reasonable, $1k wouldn’t be unreasonable in order to get into
their store. Giving the SDK away for free is a good idea, though, just make
sure to charge at some point in order to be a developer.</p>
<p>OUYA won’t have amazing graphics. That will hurt. Xbox 360 was launched in
2005! That’s 7 years ago. New games coming out now will probably look just as
good on the Xbox as they would on OUYA. It’ll take developers a year or two
to really understand what can be done with OUYA. I can only assume Microsoft
is working on the next gen Xbox. OUYA, if it’s really a threat, may push
Microsoft’s schedule up a little bit.</p>
<p>OUYA is trying to disrupt the game industry. They will only be successful if
one of the entrenched players doesn’t try to do the same thing. Nintendo,
Sony, and Microsoft all have great brands in gaming. They are ripe for
disruption, and it’s good that OUYA is bringing that fact to light. But if
any one of those 3 entrenched companies realizes that they too could disrupt
the industry the same way OUYA is trying to, OUYA will fail. I’m confident
that all 3 entrenched companies will see what’s happening and try to perform
their own disruption, they can’t afford not to, especially Microsoft, who’s
going to need the revenue when <a href="/2012/07/Windows-8/">Windows 8 sucks</a>.</p>
<p>It’s awesome that OUYA is trying to do this. I think it’d be tons of fun to
work there and be a part of making their console. But, they’re going to fail.</p>
Bad Coffee2012-07-11T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/bad-coffee<p>I started drinking coffee regularly about a year ago, when my daughter was
born. I drink 1 cup each morning, when I get to the office.</p>
<p>I always added cream and sugar, one of each when they came in standard
packet sizes. But last week, I started drinking coffee black.</p>
<p>Oh. My. God.</p>
<p>That cream and sugar really covers up a lot of the variation
in coffee taste. Some coffee is downright horrible tasting, some has almost
no taste at all, and some is good tasting. Drinking it black has helped me
realize this.</p>
<p>Folgers sucks. It’s bitter, then has no taste. Why is it one of the most
popular coffees? Marco and Dan talk a little (a lot?) about coffee in
<a href="http://5by5.tv/buildanalyze/10">Build and Analyze #10</a> and why people seem to keep buying the horrible
kinds.</p>
<p>At work, we have a <a href="http://blog.tonx.org/2011/12/your-parents-make-lousy-coffee/">$200+ abomination of ridiculously chromed plastic
festooned with blue LEDs</a>. Usually we stock the <a href="http://www.greenmountaincoffee.com/Coffee/K-Cup-Colombian-Fair-Trade-Select">Colombian</a>.
It’s actually pretty decent tasting, much much better than the decaf or
Folgers stuff. The <a href="http://www.greenmountaincoffee.com/Coffee/K-Cup-Breakfast-Blend-Decaf">decaf</a> sucks, it just tastes burnt.</p>
<p>Now, I’m becoming a coffee snob. I’m not a full-on snob, I still think that
the Keurig makes an OK cup of the Colombian, but an <a href="http://www.amazon.com/gp/product/B001HBCVX0/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B001HBCVX0&linkCode=as2&tag=bradford07-20">AeroPress</a> is now on
<a href="http://amzn.com/w/1DQZXRJNNYGKK">my Amazon Christmas wish list</a>.</p>
<p>If anyone wants to buy me a <a href="https://tonx.org/offer">Tonx subscription</a>, I’d be more than happy
to drink all of it!</p>
Windows 82012-07-10T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/Windows-8<p>Microsoft is coming out with <a href="http://www.codinghorror.com/blog/2012/07/betting-the-company-on-windows-8.html">Windows 8</a> soon. They want it to compete
with both tablet and desktop / laptop offerings from other vendors (mainly
Apple, and less so, Google).</p>
<p>Microsoft is going to lose the tablet and mobile war. They’re too far
behind, in exactly
the same way Apple got too far behind Microsoft after Windows 95 came out.</p>
<p>Apple has the installed base of hundreds of millions of iOS devices. Apple
has that “cool factor” where everyone wants an i device. <strong>Apple has
developers making great apps that sell well</strong>. Apple isn’t going
to slow down (heck, they’re probably going to introduce a <a href="/2012/07/7-inch-ipad/">low cost 7 inch
iPad</a>). Apple is out in the lead and Microsoft is playing catch-up.</p>
<p>I had an interesting conversation with a friend last night. He thinks Apple
would be foolish to not release a 7 inch iPad to capture the lower purchase
price market that they’re currently ignoring. But he also thinks that Windows
8 will be a big hit, most likely first in the “enterprise.”</p>
<p>Which seems reasonable, on desktops and laptops. Microsoft already owns that
market, making a dent there with Windows 8 won’t really change much. It’s the
tablet and phone market where I think Microsoft will fail (since they have
been failing successfully so far for years in these markets). Apple and
Android have too much of a lead.</p>
<p>Microsoft made a dent in the gaming industry with the Xbox by spending obscene
amounts of money (losing tons on each unit sold initially) in order to gain
market share. But Nintendo was trying to differentiate itself from Sony and
left just Sony to compete with Microsoft when they introduced the first Xbox.
Sony screwed the pooch, it was their game to lose and they successfully lost
it. There was a market opening and Microsoft filled it, with bags of money,
and that was good enough.</p>
<p>Apple and Google aren’t leaving any market openings. Even bags of money can’t
fill something that isn’t there.</p>
<p>Having a pretty UI, a functioning application store, integration with cloud
services, and a low purchase price won’t change things. People still will
want iPads, iPhones, and iPods. If they can’t afford those, they’ll buy an
Android device, or maybe they’ll consider a Microsoft device. But the first
consumers of Microsoft devices won’t be early adopters, they’ve already all
bought iOS and Android devices. The first consumers of Microsoft devices
will be the general public and the general public has no idea how to use
anything but an iOS or Android device. The general public expects their
favorite applications (not just Angry Birds and Word) to be available.
The general public is going to be pissed, and they’re going to tell their
friends, and Consumer Reports will get angry mail, and then Microsoft will
have to come out with Windows 9 where they say (yet again) that they have
fixed everything. It’s not going to work, Microsoft. You’re going to lose.</p>
<p>So, why play the game? If you know you’re going to lose, why not change
the game? Why not do what Google and Amazon are doing? Why not force the
incumbent to admit there’s a market they’re not serving at a price that requires
bags of money to achieve? Why not do a Kinect, game changing (no pun intended),
thing instead of a “me too!” thing?</p>
<p>I don’t know. Apparently, neither does Microsoft.</p>
7 inch iPad2012-07-09T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/7-inch-ipad<p>Apple made the front page of the physical USA Today paper on Saturday. But
they didn’t actually <strong><em>DO</em></strong> anything.</p>
<p>Some guy, at some investment place, <em>said Apple might make a 7 inch iPad</em>.</p>
<p><strong>A 7 inch iPad is a stupid idea for Apple.</strong></p>
<p>The Google Nexus 7 is $199. The Amazon Fire is $199. Both are 7 inches.
The Apple iPad is $499 and 10 inches.</p>
<p>Currently, there’s no tablet market, there’s an iPad market. People don’t
want “tablet computers,” they want iPads. But if Apple admits, by selling a
low cost 7 inch iPad, that there’s a market for anything other than iPads, it invites
competition in the form of the Nexus and Fire.</p>
<p>I can’t see why Apple would want that. The iPad is <strong>the only 10 inch tablet
computer that has any significant market share</strong>. Apple owns the market.
Google and Amazon see the only way to break into the tablet market is to try
to redefine it, where there’s 2 classes of tablets:</p>
<ol>
<li>10 inch iPads for $499</li>
<li>7 inch tablets for $199</li>
</ol>
<p>If Apple enters the second category, they admit that there’s 2 classes and
they’ll have to compete on price. This would be stupid for Apple.</p>
<p>Customers who can’t justify $499 for an iPad aren’t good customers to have.
<strong>They won’t pay</strong> for apps. <strong>They won’t pay</strong> for accessories.
But they sure as hell <strong>will whine</strong> when things aren’t perfect. And the
whining is what will end up in the media and is what will hurt Apple in
relation to its competition.</p>
<p>Google and Amazon don’t care about the whining, they have nothing to lose.
Apple has everything to lose. Making a 7 inch iPad has huge potential
downsides but almost no upside. There’s nothing revolutionary about making
a 7 inch iPad.</p>
<p>I really hope they don’t do it.</p>
Procrastination2012-07-06T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/procrastination<p>Sometimes I have a hard time getting around procrastination. I don’t think
I’m alone in this.</p>
<p>I’ve been listening to Merlin Mann and Dan Benjamin’s podcast <a href="http://5by5.tv/b2w">Back to Work</a>
lately and procrastination is one thing they address multiple times.
Merlin is of the opinion that procrastination partly stems from having
an item on your to-do list but not having a defined set of steps spelled out
yet (in your head, on paper, whatever) for how to accomplish the task.
You may not even know the next thing you have to do to accomplish the task.</p>
<p><strong>So make a freaking list of actions!</strong></p>
<p>The items on the list should be actions. Verbs. Things like:</p>
<ol>
<li><em>Drive</em> to the mall</li>
<li><em>Walk</em> to shoe store</li>
<li><em>Try</em> on 5 pairs of running shoes in size 12</li>
<li><em>Select</em> 1 pair that fits the best</li>
<li><em>Pay</em> for shoes</li>
<li><em>Drive</em> home</li>
</ol>
<p>or</p>
<ol>
<li><em>Launch</em> web browser</li>
<li><em>Navigate</em> to Zappos</li>
<li><em>Locate</em> 5 pairs of running shoes in size 12</li>
<li><em>Pay</em> for shoes</li>
<li><em>Try</em> on all 5 pairs</li>
<li><em>Select</em> 1 pair that fits the best</li>
<li><em>Request</em> to return other 4 to Zappos</li>
<li><em>Pack</em> 4 pairs of shoes into box</li>
<li><em>Print</em> return shipping label</li>
<li><em>Apply</em> shipping label</li>
<li><em>Drive</em> box to UPS store</li>
<li><em>Leave</em> box at UPS store</li>
<li><em>Drive</em> home</li>
</ol>
<p>All the italic words are verbs. Now you have a list. This makes buying shoes
brain dead simple. Execute the list, check off each step. Now you have
no reason for not buying shoes, unless…</p>
<p>If buying shoes is really that important to you, execute the list. If buying
shoes isn’t that important, <strong>WHY THE HELL IS IT ON YOUR LIST?!?!</strong></p>
<p>Buying shoes doesn’t sound like a big deal. But I’ve had “buy new shoes” on my
to-do list for at least 2 years. It’s clearly not that important to me.</p>
<p><strong>Time to either execute or remove it from my list!</strong></p>
<p>(Yes, this also relates to <a href="/2012/06/planning-specifying-and-documenting/">Planning, Specifying and Documenting</a> work
projects. No one at big companies likes doing their work, that’s why they
write big lists of actions that have to happen. You should, too.)</p>
Job Security2012-07-06T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/job-security<blockquote>
<p>Peter Gibbons: What if we’re still doin’ this when we’re 50?</p>
<p>Samir: It would be nice to have that kind of job security.</p>
</blockquote>
<p>That’s the <strong>wrong kind of job security</strong>. If what you consider job security
is the ability to keep your current job, you’re doing it wrong.</p>
<p>Lots of people at large companies have this mentality about job security. They
view layoffs and the potential of losing their jobs as a very big deal. Now,
I’m not saying it’s not some kind of deal, it’s just not that big of a deal,
<strong>if</strong> you’ve been building a career and not just working to <em>keep</em> a job.
There’s a huge difference between building a career and keeping a job.</p>
<p>Building a career means always learning. It means minimizing the time spent
doing things you’ve done before, automating them away if possible. It means
always signing up for things that you don’t feel comfortable with. It means
doing <strong>a lot of reading</strong>. It means asking stupid questions of smart people,
being told that you’re asking stupid questions, and getting pointed in the right
direction. Then working your butt off.</p>
<p>Building a career means not standing still. It means moving with the flows of
new technology, of new practices, of new techniques. It means always increasing
your efficiency.</p>
<p>Working a job means doing just enough to not get fired.</p>
<blockquote>
<p>Peter Gibbons: That’s my only real motivation is not to be hassled, that and the fear of losing my job. But you know, Bob, that will only make someone work just hard enough not to get fired.</p>
</blockquote>
<p>I love the movie <a href="http://www.amazon.com/gp/product/6305508550/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=6305508550&linkCode=as2&tag=bradford07-20">Office Space</a>. But I want to be building a career, not working a
job. I’m not Peter, as much as I may have dreamed of acting like him when I
was at Xerox.</p>
<p>In the past, I wasted time and brain power worrying about keeping a job.
That’s stupid. Instead, I should have been building a career.</p>
<p><strong>Now, I am.</strong></p>
Faucet Mount Water Filters2012-07-05T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/faucet-mount-water-filters<p>At work, Kodak (our landlord) requires us to have a water filter in our little
kitchen area (don’t ask why). Initially we had a <a href="http://www.amazon.com/gp/product/B000EOOQPW/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B000EOOQPW&linkCode=as2&tag=bradford07-20">Brita</a> faucet mount filter,
but once we used up the spare filters for it, we threw it out and bought a
<a href="http://www.amazon.com/gp/product/B0009CEKY6/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B0009CEKY6&linkCode=as2&tag=bradford07-20">Pur</a> unit to replace it.</p>
<p>As many of the reviews of the Brita unit report, its switch became very difficult
to turn between the filtered and non-filtered settings. It got so bad that
pretty much everyone simply left it on the filtered mode all the time. Even
when running hot water through it (which you’re not supposed to do). We also
had more than one instance of someone moving the faucet (it rotates) by grabbing
the Brita filter unit and accidentally pushing the filter release button.
This isn’t a huge deal when there’s no water pressure, but when the faucet’s on,
water goes everywhere! Regardless of whether water’s running, removing the filter
will reset the timer function that tells you when to put a new filter in.
We resorted to writing the install date of each filter with a Sharpie on the
top of it (which is ugly, but effective) and replacing the filter once a month.</p>
<p>I had seen the Pur unit at a store and commented how the lever was very easy
to switch between filtered and non-filtered, so we bought a Pur unit. It is
indeed very easy to switch between filtered and non-filtered modes, much much
easier than the Brita. Now, most people switch to non-filtered mode for hot
water, since switching is so easy. The filter is a cartridge that goes inside
of a screw-on cap, so there’s no easy way to accidentally remove the filter
and spray yourself or reset the life counter. Overall, the Pur is simply a
better unit than the Brita. The Pur does seem to flow a little slower than the
Brita.</p>
<p>Amazon reviews seem to indicate that older Pur models had some issues, but
comments from Pur people about changes they’ve made, plus my experiences,
suggest those problems have been addressed. We have no issues with our Pur popping off, leaking, or
cracking. We don’t have any special mounting system, just what came in the box.</p>
<p>Overall, I’m happy with the Pur. The next time I need to buy a faucet mount
filter, I’ll be buying a Pur.</p>
<p>(If you haven’t read my stance on <a href="/2012/06/amazon-affiliates-hate/">Amazon Affiliates</a>, you should.)</p>
Talking to Myself2012-07-03T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/talk-to-myself<p>I like to talk to myself.</p>
<p>Maybe I’m crazy. I don’t know.</p>
<p>But I find it easier to put pieces of new concepts together when I’m able to
talk to myself out loud. When I’m programming or conceptualizing ideas,
talking out loud makes things flow easier for me.</p>
<p>It’s hard to talk to myself, out loud, in an open office or cube farm. Mostly
because I don’t want to disturb everyone else (nor do I want them to talk
out loud and disturb me) but also because I don’t want everyone to think I’m
the crazy guy who talks to himself.</p>
<p>It feels less like I’m a crazy person when I write about the fact that I talk to myself
on the Internet rather than when I say it to someone in person. Thanks, Internet.</p>
Keyboards2012-07-02T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/07/keyboards<p>I have an <a href="https://en.wikipedia.org/wiki/Model_M_keyboard">IBM Model M</a> (PS/2) at home and a <a href="http://www.leopold.co.kr/?doc=cart/item.php&it_id=1301969977">Leopold FC500R</a> with Cherry
brown switches at work. I like the Model M better than the Leopold, but both
are very nice keyboards.</p>
<p>In college, my roommate Steve bought me a used Model M from Goodwill for like
$8. It was missing a few key caps and was a bit worn, but it still worked quite
well and I loved it. It had an AT style keyboard connector (rather than PS/2
or USB), but with a passive PS/2 converter it worked great. I stupidly gave it
away when I sold my desktop shortly after graduation, thinking I could easily
find another keyboard I really liked and because I was mostly using an Apple
G4 PowerBook for my day-to-day computing.</p>
<p>I wish I hadn’t.</p>
<p>Luckily, there’s great resources online for keyboard aficionados (of which
I’m probably one). <a href="http://elitekeyboards.com/">EliteKeyboards</a> sell a few different varieties of
Cherry switch keyboards, <a href="http://www.pckeyboard.com/">Unicomp</a> still make brand-new buckling spring keyboards
(apparently they bought the rights from IBM along with the manufacturing
equipment), and <a href="http://www.clickykeyboards.com/index.cfm">ClickyKeyboards</a> sell new and used IBM Model Ms.</p>
<p>The Model M I have now I purchased new from ClickyKeyboards in 2006. Since
then, they seem to have almost zero new Model Ms in stock. The IBM
Model Ms are starting to run out; people who have them keep them for decades
(no joke). So really, the only choice is to either buy a used Model M or to
buy a new one from Unicomp, if you want true buckling spring action.</p>
<p>Personally, I enjoy the buckling spring feel more than the Cherry switch feel.
My Cherry brown Leopold is nicer than a soft dome keyboard, but the key travel
is longer than I’d like and the force over the travel doesn’t change in as
nice of a way as the Model M does. It’s quieter, but the keys bottoming out
still make some noise (so it’s not “silent” as some might suggest).</p>
<p>If you’ve never tried a buckling spring or Cherry switch keyboard, you should.
They are much nicer to type on than cheap dome keyboards. And once you’ve
gotten used to them (assuming you like them), going back is hard.
I’ve found I don’t like the hand positioning on the Microsoft ergonomic
keyboards, and although the dome switches on those are rather decent, I still
prefer a Model M or Cherry switch ‘board.</p>
<p>One thing to notice when buying either a Model M or a Cherry switch ‘board is
the shape of the key caps and the sculpting of the keys in relation to each other.
The Model M has a nice sculpting of the key positions relative to each other
such that from the side, the keys seem to wrap around an imaginary cylinder.
Many of the Cherry switch keyboards don’t do this. My Leopold does, and that’s
why I picked it over the others. If this is something you care about, be sure
to research the configuration of the keyboard you’re interested in before buying.</p>
<p>Regardless of which keyboard you choose, if it’s a Model M or Cherry switch one,
it should last decades. You’ll pay more for it (budget around $100 or so) but
in 10 years, when it’s still working, you won’t have spent a cent more and
you’ll still have one of the nicest keyboards available.</p>
A Few Debian Notes (live, wicd, and encfs)2012-06-28T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/06/a-few-debian-notes<p>I installed Debian Squeeze this morning and used a few resources I hadn’t
known of before. The install went well and I’d recommend taking a look at
these:</p>
<p>The <a href="http://live.debian.net/">Debian Live Project</a> has rather nice live CDs for Debian. I tried
out the LXDE desktop one and found I rather like LXDE; it runs quickly
on my really old Dell Inspiron 9300 laptop. The ISO won’t fit on a CD (boo!)
but DVD-Rs or USB flash drives are rather easy to come by.</p>
<p><a href="http://wiki.debian.org/WiFi/HowToUse#Wicd">wicd</a>, the wireless interface connection daemon, is a very nice replacement
for network-manager. wicd will do basically the same thing as network-manager
but without needing Gnome.</p>
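<p>On Squeeze the swap is only a couple of package operations. A sketch, assuming the stock sysvinit scripts (package and script names here are what I’d expect, so double-check against the wiki page):</p>

```shell
# Install wicd and remove network-manager so the two daemons don't
# fight over the interfaces, then start wicd via its init script.
sudo apt-get install wicd
sudo apt-get remove network-manager
sudo /etc/init.d/wicd start
```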
<p>And lastly, <a href="http://wiki.debian.org/TransparentEncryptionForHomeFolder">encfs</a>. Encrypt your home directory and have it decrypt
automatically on login. It’s not super hard to set up; just be sure to read the
instructions carefully. I prefer encfs over full LVM encryption because, when
installing on flash memory media (like an ssd or USB flash drive), you can
simply create normal partitions aligned to the erase block size. Also, you
can have a setup where you only have one giant root partition but want some
directories within it to be encrypted, removing the need for LVM in order
to get encryption.</p>
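<p>The rough shape of the setup, as a sketch (the store and mount-point paths are my own choices, and the PAM auto-mount piece comes from the libpam-encfs package; follow the linked wiki instructions for the PAM configuration itself):</p>

```shell
# Install encfs plus the PAM module that can mount it at login.
sudo apt-get install encfs libpam-encfs

# Initialize an encrypted store: ~/.encfs holds the ciphertext and
# ~/Private is the cleartext mount point. Use your login password so
# the PAM module can mount it automatically when you log in.
mkdir -p ~/.encfs ~/Private
encfs ~/.encfs ~/Private
```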
Amazon Affiliates Hate2012-06-26T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/06/amazon-affiliates-hate<p>Yesterday there was a post to <a href="https://news.ycombinator.com/item?id=4156414">Hacker News</a> about a blog post written by
Antonio Cangiano (the author of the Prag Prog <a href="http://www.amazon.com/gp/product/1934356883?tag=bradford07-20">Technical Blogging</a> book)
<a href="http://technicalblogging.com/6-reasons-why-the-amazon-associates-affiliate-program-is-highly-underrated/">about Amazon Associates</a> and how to effectively use the program to make
some money with your blog. It drew quite a bit of derision from a few HN
commenters.</p>
<p>Do you see what I did there? :)</p>
<p>Here’s my stance:</p>
<p>Amazon Affiliates is awesome. If you have a blog, you should be an affiliate.</p>
<p>I’m an affiliate, but so far I’ve made a sum total of about $2.50 (not enough
for Amazon to even pay out).</p>
<p>People like <a href="http://www.codinghorror.com/blog/">Jeff Atwood</a> and <a href="http://www.marco.org/">Marco Arment</a> (I read both of their blogs)
often post Amazon Affiliate links. Jeff gets called out quite harshly in the
HN comments about his “spamming” of affiliate links, but Marco’s just as
guilty (although he may have a smaller readership). But <strong>I like that Jeff
and Marco use affiliate links!</strong></p>
<p>Marco does reviews of sometimes rather esoteric things that can be bought on
Amazon (<a href="http://www.marco.org/2012/05/06/bathroom-fan-timer-switches">bathroom fan timers, anyone?</a>), which are entertaining to read but
also could be very useful for that time (which doesn’t come very often, I’ll
admit) where you just <em>need</em> a new bathroom fan timer. Before reading about
Marco’s experience with bathroom fan timers, I would never have known that
the type he likes even existed, let alone how it works and why I should pick
the 15 or 30 minute kind. Now I do. <strong>And!</strong> If I go buy one through his
affiliate link, he gets a little kickback but I pay the same price as if I
had not clicked through the link.</p>
<p><strong>How is that hurting me?</strong></p>
<p>I’m not Marco’s friend. I read his blog, am a subscriber to <a href="https://www.instapaper.com">Instapaper</a>,
and listen to his podcast, but I do these things because I think he has
interesting things to say, not because I know him personally. So it’s not like
I’m over at his house and he’s recommending I buy some fan timers from this
guy he knows who will give him a kickback, I’m some random guy on the Internet
reading what Marco’s experience was with buying quite a few different fan
timers and trying them out. Why shouldn’t he get some money for that?</p>
<p>In the same way, Jeff posted about <a href="http://www.codinghorror.com/blog/2012/06/because-everyone-still-needs-a-router.html">wifi routers</a> recently. His post has
a few affiliates links in it. OK. But if he actually did a bunch of research
about these routers and can condense it down to 1 article that I can read in
5 minutes to gain the same information, that’s <em>saving me time</em> and effort
the next time I’m in the market for a new wifi router.</p>
<p>I think Amazon Affiliate links are great. I wish I had time to review more
things I buy (or to just have more time to buy things) and provide links to
them so I can make a few dollars from my blog.</p>
<p>Where’s the harm?</p>
Sales Leads are Junk2012-06-25T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/06/sales-leads-are-junk<p>Sales leads are junk. Leads aren’t a valuable business metric. They don’t mean
anything. So, don’t give them any weight in decision making.</p>
<p>I was in a car crash. My car was so badly damaged that the insurance company
considered it a total loss. I can’t live my life (without major change)
without a car. So, I needed to buy a new one.</p>
<p>My family and I decided on a budget and a feature list. We were in the market
for something like a Honda Accord or Toyota Camry. This is a huge market
segment, literally every major auto manufacturer sells a car that’s similar.</p>
<p>What does this have to do with sales leads?</p>
<p>I went to 9 different car dealerships shopping for a car. I was only
intending to buy 1 car. 8 of these dealerships got a sales lead that didn’t
go anywhere. 1 got a sales lead, a sale, and revenue.</p>
<p>1 dealer got value, the other 8 got a junk metric, the “lead.”</p>
<p>Sales leads are important, you need them in order to eventually make a sale.
But if you’re making decisions based on the number of leads, or some perceived
quality of leads, you’re doing yourself a disservice. Leads only matter if
they can be converted into sales. If you have a horrible conversion ratio,
adding 1 or 10 or 100 more leads doesn’t really mean anything.</p>
<p>You have a horrible conversion ratio! You do. Everyone does.</p>
<p>People love to window shop. They love to dream of buying things. It’s fun.
All these people are sales leads. Most of them won’t turn into sales, just
like I visited 8 different car dealerships and didn’t buy their cars <strong>even
though I needed to buy a car!</strong></p>
<p>Many people visiting a car dealership may not <strong>need</strong> to buy a car. They’d
<strong>like</strong> to buy a car. These leads most likely will have even worse conversion
ratios than I did when shopping. Most people have a car that already “works”
and buying a new one is a luxury.</p>
<p>So if a business boasts about their sales lead numbers, tell them to shut up.
Tell them to show you revenues, profits, conversion ratios, or any other
metric. Sales leads are junk. Everybody has leads, what matters is converting
them into money. That’s where the value is.</p>
Planning, Specifying, and Documenting2012-06-22T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/06/planning-specifying-and-documenting<p>Big companies are well known for putting out
complex products and completing complex projects, but they have a secret:</p>
<p><strong>Planning and documentation</strong>.</p>
<p>Big companies love to plan. They love to document. They love to specify.
When you work at one, it seems like that’s all they want to do: plan, document,
and specify. But there’s a really good reason for it all: that’s the only way
huge complex projects get completed.</p>
<p>Planning, documenting, and specifying should serve one purpose: to break things
into small enough chunks that the complexity is removed. Once complexity is
removed, then implementing becomes possible. Getting to the point where
complexity is removed should take as long as, or longer than, the implementation.</p>
<p>That sounds crazy, that planning should take longer than doing. But for
complex projects, that’s the best way to assure success.</p>
<p>And in software development techniques like agile and scrum, that’s exactly
what happens. At the beginning of a sprint, there’s supposed to be a planning
phase to decide, “what’s going to get done?” But even before deciding what
will get done, specifications for exactly what each item is need to be in place.
Once that’s decided, the
things that make the list can get implemented. At the beginning of a project,
most of the implemented items should be documentation and specifications,
even (or especially) in scrum / agile.</p>
<p>Without quality specifications, how do you know if you’ve implemented something
correctly?</p>
<p><strong>You don’t.</strong></p>
<p>But on small teams, or in small companies, often it seems like complex things just
get winged. No documentation. No specifications. Just “make it work!”
That’s a death knell. If you’re winging it, how do you know when it actually
works? How do you know when to stop implementing? How do you plan your time
or say no to additional features? It gets really hard.</p>
<p>External contractors never accept contracts that aren’t well defined. Why
would they? How could they get paid if the customer can always say things
aren’t done yet?</p>
<p>Why do small companies and small teams think they can treat their work
any differently than a big company or an external contractor?</p>
Music2012-06-21T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/06/music<p>Today, on his Coding Horror blog, Jeff Atwood writes about <a href="http://www.codinghorror.com/blog/2012/06/the-great-mp3-bitrate-experiment.html">“The Great MP3
Bitrate Experiment”</a> and how no one can tell the difference between 192kbps
encoded MP3s and raw CD audio.</p>
<p>That’s nice.</p>
<p>Jeff talks about how he still listens to his CD collection, and that he’s
paying a relative to rip it all into decent quality MP3s as a summer
project.</p>
<p>That’s nice, too.</p>
<p>But Jeff’s a dying breed. Not only are CDs dead (really, they are! I promise)
but so are MP3s or any other personal music collection.</p>
<p>Until about 2008, I had a CD collection that I always liked to have ripped
to MP3 or FLAC files (it just felt cool to have FLAC, I could never tell the
difference in the audio, and I didn’t have a mobile player). I enjoyed having
my entire CD collection available with my player set to random. I thought
it was the bee’s knees.</p>
<p>But then I got married, had a baby (well, my wife did the hard part there),
stopped using my computer at home much, and <a href="http://www.pandora.com/">Pandora</a> was invented.
I haven’t listened to any of my CD/MP3/FLAC collection in at least a year.
<strong>Not one single song.</strong></p>
<p>I’m a subscriber to Pandora, $3 a month is pretty decent, and my family uses
it pretty much every day. We have about 20 different “channels” that we like,
and I’ve not even thought about my CD collection (which is in the basement
inside a foot locker) until reading Jeff’s post.</p>
<p>Personal music collections are a thing of the past. Whether they be CD, MP3,
or any other format based. The cloud is winning here and will only get better
over time. Why limit yourself to the 1000 (or however many you have) albums
when you can effectively have unlimited albums? And all for the same price
as buying 3 albums per year?</p>
<p>Apple has been killing it with iTunes; they are “the place to buy digital
music online.” But they have to see the writing on the wall. Personal
music collections are going away. I’m curious to see how they deal with
that. Also, in the same vein, “apps” are going to go away, too. But that’s
for another post!</p>
<p>I’m with <a href="http://www.avc.com/a_vc/my_music/">Fred Wilson on this one</a>, music in the cloud is the only way forward.
Probably worth taking my CD collection to a used music store and getting a few
bucks for it, before no one wants CDs anymore…</p>
Posture2012-06-20T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/06/posture<p>Posture is important.</p>
<p>I was in a car crash about a month ago. Since then, I’ve had x-rays of my
back, neck, and wrist. No broken bones, thank goodness, but I do have pain.
After going to the doctor, I’ve been seeing a chiropractor she recommended
along with getting once a week massages. So far, things are improving, but
I’m not yet back (no pun intended) to how I was before the crash.</p>
<p>Today, my masseuse recommended some stretches I can do along with a general
recommendation to be conscious of my posture, since posture has a huge impact
on back and neck health and pain levels.</p>
<p>She said that most people sit slouched over and with their shoulders ahead of
their chest. This puts strain on the muscles in the upper back and the back
of the neck while letting the muscles in the upper chest and front of the neck
shorten. Over time, this contributes to back and neck issues, or in my case,
doesn’t help recovery.</p>
<p>My mission for the next week, till I see her again, is to pay closer attention
to my posture (and to do my stretches).</p>
<p>I’ve written about <a href="/2012/01/justifying-good-chairs/">chairs</a>, before, and I’m thinking about them again.
We have crap chairs at work. We also have a crap “policy” (in quotes because
we don’t have an HR guide or anything, just what those with the ability to
reimburse my expenses say) about getting good chairs.</p>
<p>But even if I could get an expensive chair, I’m now going to be even more picky
about it. Especially in the width of the back of the chair. This caught my
attention today, just now. My chair has metal supports running along the rim
of the chair-back. They’re hard. My elbows hit them and they prevent my
elbows from going to a position that would be comfortable when typing. I don’t
like the design of the chair-back. It’s preventing me from keeping my shoulders
back the way I want to in order to maintain good posture, or what I feel would
be good posture (this is purely subjective and has not been evaluated by any
kind of ergonomics professional).</p>
<p>I’ve sat in a Steelcase Leap. It has a similar problem, since in order for it
to have the fancy lower back support system, it needs support on the outside
rim of the chair-back. So, even the chair I used to think was what I wanted
isn’t really any better than my crap chair I have now (in this one way).</p>
<p>But if I (or my employer) is going to drop hundreds or thousands of dollars
on a new chair, it had better be exactly what I want.</p>
<p>I still do the <a href="/2011/06/the-standing-desk/">“ghetto standing desk”</a> with my HP monitor box on top of my
desk. But even when standing, posture is still very important, and I need to
keep a better check on myself to make sure I’m not slouching when standing.
This is harder than I thought.</p>
<p>I’ve read that putting a 6 inch box on the floor, when standing, so that I can
rest one foot on it could be helpful. Alternating feet on the box with both
feet on the floor can give a few different poses to keep things interesting
and not set in just one standing pose. I’m going to give that a try, I just
need to find something sturdy enough to use. Different heights will be worth
trying, to find the best one. And I need to keep my neck up and shoulders back.</p>
<p>When standing, the one nice thing is there’s no chair to get in my way.
The bad thing is that there’s no chair to support me when I’m tired of standing.
Alternating between sitting in a good chair and standing is probably my ideal.
I have my eye on a <a href="http://www.geekdesk.com/default.asp?contentID=622">GeekDesk Max</a>.</p>
<p>I wonder if there’s a business in reselling standing desks, like GeekDesks,
along with other standing desk accessories… The biggest obstacle to wide
market sales is the price, most people are going to balk at a grand for a desk.
There’s not much competition in electric standing desks, that could be driving
the prices up, but now I’ve digressed…</p>
<p>Posture. It’s important!</p>
Restrooms and Offices2012-06-19T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/06/restrooms-and-offices<p>Restrooms and offices are interesting to compare. There’s generally 3 kinds
of offices and 3 kinds of restroom areas (for the men’s room side of things).</p>
<p>In an office, the 3 settings are:</p>
<ol>
<li>Open, with no or very low walls between desks</li>
<li>Cubes, with mid-height walls giving some privacy</li>
<li>Walls with doors, with actual privacy</li>
</ol>
<p>There’s 3 restroom settings, too:</p>
<ol>
<li>Open, such as the urinals, sometimes there’s little walls between each one</li>
<li>Stalls, where the walls go neither to the ceiling nor the floor</li>
<li>Private, like in most homes, where there’s a door and real walls</li>
</ol>
<p>It’s interesting to compare each of these for social norms:</p>
<p>In an open office, it’s generally accepted to talk to the people around you, even
if those people would rather you didn’t. In an open restroom setting, it’s usually awkward,
but again, some people didn’t get the memo and will gab away while relieving
themselves. In an open office, it’s also generally odd when someone stares at
you, but there’s nothing stopping them, while in the restroom, depending on who’s
doing the staring or being stared at, it’s possible a fight breaks out (this is
MINE, you have your own!).</p>
<p>In cubes or stalls, again, most people understand that talking between them is
not OK, but then there’s the contingent that doesn’t understand. There’s always
that one guy who pops his head over, just to see what’s going on, and in both
situations, <strong>no one likes that guy!</strong></p>
<p>In either the open restroom or stalls, talking on your cell phone feels weird,
and for good reason. No one wants to know that the person on the other end of
the phone is “doing their business.” But in any office, gabbing away on the
phone is seen as normal. In both cases, everyone else nearby has to listen to
the conversation too, and sometimes the listeners can’t tell if the person is
really on the phone or not and things feel really awkward.</p>
<p>At home, when using the restroom, you close the door. Others can’t see you, they
can’t hear you, and best of all, they can’t smell you. These are all good
things. It also means that you can’t see or hear the others. It’s generally
accepted that private bathrooms are the nicest, at least in the USA.</p>
<p>So why is the open or cube office plan so popular?</p>
<p>Managers want to watch, hear, and smell the minions “doing their business.”
That’s why.</p>
<p>Kind of gross, eh?</p>
Laptops Aren't Desktops2012-06-12T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/06/laptops-arent-desktops<p>The premise must seem like a stupid statement. Of course laptops aren’t
desktops, why would someone think otherwise?</p>
<p>There’s a whole industry out there making laptop computers, many of the
companies making laptops also make desktops. And there’s a class of consumer
who wants the power of a desktop but the portability of a laptop, and they’re
apparently not willing to buy 2 computers.</p>
<p>The compromise that this consumer is asking for is being given to the market
by the computer making companies, but it’s a horrible compromise, getting none
of the good things from either class of computer.</p>
<p>Let’s use the HP Elitebook 8560w as an example.
I happen to have one handy on my desk and
have been using it as my only work computer since December 2011. I’ve
<a href="/2011/11/debian-6-amd64-on-hp-elitebook-workstation-8560w">written about it before</a>. I run Debian on it, quite successfully.</p>
<p>My Elitebook has a dual core Intel Core-i7 (with HT, so 4 cores, sort of), 8GB
of RAM, a solid state disk, NVIDIA graphics, and a 24” HP IPS monitor via
DisplayPort. It’s a decent little machine and gets the job done quite well for
me. But here’s the catch, for the same money, I could have gotten a desktop
that would be even faster, have more of the ports I wanted (dual Ethernet, real
serial port, and PCI-e expansion), and have fewer configuration issues (wifi isn’t
needed on a desktop, and the internal monitor is lower in resolution than my external,
but apparently X doesn’t understand that I’m not using the internal monitor, so
when I maximize things they don’t actually fill the external monitor). AND!
It’s a 15 inch laptop that’s almost 1.5 inches
thick and weighs in right around 7 pounds. To say the least, a MacBook Air
is a dream in comparison (the 11 inch Air is about as thick as just the screen
on the 8560w!).</p>
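<p>For what it’s worth, turning the internal panel off at the X level usually fixes that maximize behavior. A sketch, and the output names are only examples (run plain <code>xrandr</code> first to see what the 8560w actually reports):</p>

```shell
# Turn off the internal panel and make the DisplayPort monitor the
# primary output, so maximized windows use its full resolution.
xrandr --output LVDS1 --off
xrandr --output DP1 --auto --primary
```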
<p>To fix the lack of dual Ethernet, I have a <a href="http://www.amazon.com/gp/product/B000GB0N14/ref=as_li_ss_tl?ie=UTF8&tag=bradford07-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B000GB0N14">Belkin ExpressCard Gigabit Ethernet
card</a>, but it’s a 34 mm ExpressCard and the 8560w has a 54 mm wide slot, and
since there’s no support built into the 8560w for the skinnier cards, if I so
much as move the 8560w slightly, the ExpressCard loses contact and I lose my
second Ethernet port (which no longer causes Linux to lock up hard since I’ve
enabled PCI hotplug, but is still annoying since the DHCP server goes down and,
if I have iptables routing going, it really pisses off the kernel).</p>
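<p>Enabling PCI hotplug on kernels of that era likely means loading the pciehp driver. A sketch (the force parameter is an assumption about this particular BIOS; it’s the knob for firmware that won’t grant the OS native hotplug control):</p>

```shell
# Load the PCIe hotplug driver; pciehp_force=1 makes it claim slots
# even when the BIOS doesn't grant the OS native hotplug control.
sudo modprobe pciehp pciehp_force=1

# Make it persistent across boots (/etc/modules takes "module args"):
echo "pciehp pciehp_force=1" | sudo tee -a /etc/modules
```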
<p>So, I’ve got a laptop that’s not easily portable and a desktop that lacks some
of the speed and ports that I really want. Never mind that ECC memory or more
than 4 real cores wasn’t even an option on the 8560w (it is on many desktops I
would consider buying).</p>
<p>I thought that I only needed one machine. I was wrong. The 8560w I got is
not stellar in either of the roles I want it for. The cost was lower than
purchasing two different machines, but not by that much. And I may not have
even needed a powerful laptop, something portable for $500 would cut the
mustard, especially if I could SSH into my desktop to do “real work”.</p>
<p>After having used the 8560w for almost 8 months, I’ve come to the conclusion
that I won’t be buying a “desktop replacement” laptop ever again. They don’t
replace a desktop, and they aren’t very portable laptops. Something like a
MacBook Air and a decent mid-range workstation would make me much happier and
probably more productive.</p>
<p>I will say, though, that since having a solid state disk, I’ll never go back
to having spinning platters for any machine that real work gets done on.
No. Way. Working with the Linux kernel git tree on spinning disks is a good
way to practice patience. Working with the Linux kernel git tree on a solid
state disk is at least bearable, and not that slow with a hot cache.</p>
Structured Learning2012-06-06T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/06/structured-learning<p>Structured learning is important. Hacking and slashing through being a n00b
is useful in many situations, but being able to find a person, book, or other
piece of documentation that will structure the learning for you makes for
faster and more thorough learning in many cases.</p>
<p>I liked college, I got to take lots of challenging classes that involved quite
a large amount of structured learning. Within the structured learning of class
time was often lab time which was much less structured except for the
requirements and expected result (if provided). But the labs were based on the
structured part of the course and so I could leverage that learning in the lab
environment in order to put together disparate bits of information to
accomplish the lab goal.</p>
<p>I feel I learn very quickly and thoroughly in this way. Structured learning
interspersed with labs.</p>
<p>But in the real world, once you have a job, there’s a lot less structured
learning and a lot more hacking and slashing as a n00b. There’s an impression
that there isn’t time to create proper structured learning activities or
documentation since there’s products to build or money to make. But I think
this may be a shortsighted view.</p>
<p>Granted, for-profit companies can’t be expected to prioritize structured
learning over profits, the shareholders wouldn’t allow that. But they should
provide some time and resources where structured learning can take place,
either by having experienced people lead a course or by providing time and
resources for individuals to engage in their own version. Some companies do
a great job of this, others don’t.</p>
<p>When I first started at Xerox, I got the impression that they did a decent job
of providing structured learning opportunities. There was a graduate studies
program where you could attend <a href="http://www.rit.edu">RIT</a> or the <a href="http://www.rochester.edu">U of R</a>; I went to company-provided
training on things like Lean Six Sigma, and about once per year there
was some kind of “class” being offered (I remember taking a “class” on digital
control systems around 2006). It wasn’t the best environment for structured
learning, but Xerox didn’t hold people back from participating. Lately it
seems like Xerox isn’t as good as it used to be at providing these
opportunities, but then again, I left in November 2011 so I’m not the best
judge.</p>
<p>But what if your company doesn’t provide these types of opportunities? What
if you’re unemployed? What if you’re at a school taking classes that don’t
challenge you?</p>
<p>Well, then you should make your own structured learning opportunities.</p>
<p>There’s a really cool market developing online where people can take college
type courses for rather low costs. <a href="http://ocw.mit.edu/index.htm">MIT</a> has had this for a while, but
it was a bit hit or miss for each course. But now there’s things like
<a href="http://www.udacity.com/">Udacity</a>, <a href="http://news.stanford.edu/news/2011/august/online-computer-science-081611.html">Stanford’s online experiments</a>, and <a href="http://www.udemy.com/">Udemy</a>.
You can take college type courses for free or at rather low cost ($100 per
course counts as “rather low”). This brings education to a whole new group of people, which is
awesome!</p>
<p>But what if you don’t have the ability to follow a time structured course like
these? I’m somewhat in that camp, as demands in my life and work probably
won’t let me commit to a multi-week (or multi-month) course schedule. I failed
at taking the Stanford database online course in 2011: I just couldn’t commit
to spending enough time each week due to a fluctuating schedule, so I just
stopped.</p>
<p>I’ve found that getting a book with exercises can provide a similar experience
and I can set my own schedule. I’m currently working through the exercises
in <a href="http://www.amazon.com/gp/product/0131103628/ref=as_li_ss_tl?ie=UTF8&tag=bradford07-20">K & R</a>. I’ve programmed in C for close to a decade, but never done
all of the exercises in K & R, so now this is my structured learning. In
the mornings, when I’m able, I read a few pages, then do a few exercises. I
figure I can always come back after a week off and no one is holding me to a
schedule.</p>
<p>The schedule part will be my biggest challenge. If no one is holding me to a
schedule, how will I prevent myself from lapsing and stopping because I’m able
to come up with excuses? I don’t know, yet. I’m going to try this and see
how it goes. My goal is to do all of the exercises in K & R, to gain a better
understanding of C. If this goes well and I’m able to keep up with it on a
regular basis, then I’ll use the amount of time I was able to spend (rather
easy to calculate based on git commit timestamps) to see if I can commit to
taking an online course with a more fixed schedule.</p>
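<p>That back-of-the-envelope time calculation can be sketched as a short shell pipeline. The one-hour session gap is my assumption, not anything git defines; tune it to taste:</p>

```shell
# Rough time-on-task from commit timestamps: commits less than an
# hour apart are treated as one continuous work session; longer
# gaps are assumed to be breaks and aren't counted.
git log --format=%ct --reverse | awk '
  NR > 1 { gap = $1 - prev; if (gap < 3600) total += gap }
  { prev = $1 }
  END { printf "~%.1f hours of work\n", total / 3600 }
'
```

<p>This undercounts the time before each session’s first commit, but it’s good enough to judge whether a fixed-schedule course is realistic.</p>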
Why Mainline Matters2012-05-18T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/05/why-mainline-matters<p>Yesterday, I wrote a rant about why <a href="/2012/05/it-shouldnt-take-a-team">it shouldn’t take a team</a> of engineers
to buy a dev kit and get mainline Linux running. A great discussion ensued on
Google+, which gave me a good excuse to log into Google+ for the first time
in 3 months, but that’s another story…</p>
<p>Mainline matters. Full stop.</p>
<p>Mainline matters because the people who have to approve patches or pulls in
order for code to make it into mainline are smarter than you. They’re smarter
than me. They’re smarter than most engineers at every ARM SoC vendor.
The people who need to approve code in order for it to make mainline bleed this
stuff, they’ve been around for decades, hacking Linux. They’ve seen almost
all of the tricks in the book. They will tell you when your code does stupid
things. Your code will do stupid things.</p>
<p>If I run TI’s kernel (that happens to be hosted on Arago, thanks for the
correction) and I run into a problem, how do I send a patch? How do I share my
problem with others who are running that kernel? How do I know it gets fixed
correctly? If I fix it myself, how do I know I’m not doing something stupid
that was tried 5 years ago and failed miserably?</p>
<p>Short answer: I don’t. I’m trusting TI and myself. TI’s great, but they make
mistakes, and without the huge amount of experience that exists on mainline
mailing lists taking a look at the code, I don’t have huge faith. Look through
a TI repo and read all the “fixes foo” followed by “actually fixes foo” followed by
“fixes foo again for special case bar” (modified comment from Koen) commits.</p>
<p>I’m putting out a product. My value add is the software that runs on top of
Linux and the interfaces to the devices outside the SoC. I want my value-add
to stay at those levels. If I’m adding value to my company by making Linux
work enough that I can do these other tasks, that’s a failure of the
market. I don’t want to worry about the Linux kernel core functionality.</p>
<p>I’m running Debian on my Beaglebones. I use an old compiler. Both for the
same exact reason:</p>
<p><strong>My value add isn’t core Linux, it’s everything on top and around the core.</strong></p>
It Shouldn't Take a Team2012-05-17T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/05/it-shouldnt-take-a-team<p>It shouldn’t take a team of engineers to build a functional embedded GNU/Linux
system on a dev kit. Often, it does. The embedded dev kit industry is broken.</p>
<p>The folks over at <a href="http://beagleboard.org/">Beagleboard.org</a> are hard at work fixing this, but it
takes time, and they only offer solutions based on TI SoCs. They’re doing an
awesome job, please keep it up!</p>
<p>But, unless you want to base an embedded design on a part that requires package
on package memory, the Beagles aren’t quite there yet in terms of an open dev
kit that’s as easy to use as it should be. If you’re building a multimedia
system and want a dev kit, pick a Beagleboard-xM or a Pandaboard. Both are
great systems and have awesome support from all required software that you’ll
want to run.</p>
<p>But if you’re not trying to build a multimedia system, the Beagleboard.org team
only has the Beaglebone to offer. Don’t get me wrong, the Beaglebone is great.
However, there are currently huge nits to pick with it, and TI in general.</p>
<p>The <a href="http://www.ti.com/product/am3359">AM335x</a> SoC used on the Beaglebone is not well supported by any
mainline Linux kernel. Even the <a href="https://git.kernel.org/?p=linux/kernel/git/tmlind/linux-omap.git">linux-omap</a> repo can’t yet build a kernel
for an AM335x system that will enable MMC, which makes running the Beaglebone
away from an NFS root system rather difficult. TI offers their <a href="http://arago-project.org/git/projects/?p=linux-am33x.git;a=shortlog;h=refs/heads/v3.2-staging">Arago kernel</a>,
but until about 2 weeks ago, not even all of the interrupts were enabled in that
codebase. And this is the TI kernel tree that forms the basis of TI’s official
SDK releases! To top it all off, TI has missed the Linux 3.5 window in the
linux-omap repo to get any more changes in before Linus’ 3.5 merge window opens.
So, Linux 3.5 very likely won’t support the AM335x very well, if at all.</p>
<p>Hopefully TI will be able to get code into linux-omap for the 3.6 merge, but
with that kind of timeline, AM335x won’t run a mainline Linux kernel until
almost the end of 2012 (3.6 will probably be released in the 4th quarter, if
past release performance indicates future success). The Beaglebone was on sale
in the 4th quarter of 2011. If 1 year between release of dev kits and mainline
Linux support is considered good, that’s a horrible state of “good.”</p>
<p>AM335x silicon is due to be finalized and parts actually available for general
consumption “real soon now.” TI doesn’t seem to have intentions of moving their
Arago kernel repo past a Linux 3.2 base. Which is OK, Ben Hutchings is keeping
the v3.2.y stable series around for a while (Ubuntu and Debian rely on it), but
the Arago repo isn’t run by many people, so it doesn’t get very widespread
testing. This leads to things like <a href="https://groups.google.com/forum/#!msg/beagleboard/A5Pmw94kFfo/nS9wVMPG4zEJ">PREEMPT breaking Ethernet!</a> Which is
reported to be in the stages of being fixed, but may or may not yet be complete.</p>
<p>TI’s ability to push code into mainline Linux is so bad that even the
official <a href="https://github.com/koenkooi/linux/tree/linux-ti33x-psp-3.2.16-r11l+gitr720e07b4c1f687b61b147b31c698cb6816d72f01">Beaglebone kernel</a>, maintained by Koen, is diverging from TI’s Arago
repo. Granted, a good bit of Koen’s changes are to support capes, but there
are fundamental changes to core AM335x code in Koen’s tree that are different
than what’s in Arago’s.</p>
<p>TI needs to fix their development process, they need to have support in mainline
Linux ASAP after announcement of new SoCs. Anything else is unacceptable.</p>
<p>To pick on TI even more, there’s the <a href="http://designsomething.org/craneboard/w/overview/default.aspx/">Craneboard</a>. Oh, my, goodness!
The Craneboard uses TI’s AM3517 processor. The AM3517 was released about
TWO YEARS AGO. You cannot currently build a mainline Linux kernel for the
AM3517! Even the linux-omap tree couldn’t do it well until recently, and I
haven’t personally tried very hard, so it may still not do it well. TI’s
official SDK supplies a 2.6.37 kernel, which is not a long term supported
kernel. The <a href="https://github.com/craneboard/craneboard-kernel">Craneboard Github repo</a> is woefully out of date.</p>
<p>So now that I’ve picked on TI quite a bit, here’s my stance:</p>
<p>I want to build a product that uses an ARM SoC and runs Linux. I want to have
one, yes, that’s right, ONE single engineer who spends only part of their time
getting cross compilers, boot loaders, Linux kernel, and the rest of the
required userspace set up. I want that engineer to be able to receive a dev
kit from the UPS person and have it running GNU/Linux within 1 week. Not just
running the provided SD card or what-have-you, but building all the components
from supported mainline repositories and understanding the build systems.
Documentation is a huge part of this, but core code support needs to come first.
Without mainline, whether it be Linux, u-boot, or anything else, I have nothing.</p>
<p>Things in this realm are getting better, but too slowly. I put this blame
entirely on the SoC vendors: TI, Freescale, Samsung, ST, etc. All aren’t to
blame in the same amount, but all are guilty. The pay-for Linux vendors aren’t
helping; they have no incentive to make things easier for people who aren’t
their customers, so I’m not looking for them to make strides here. The SoC
vendors do have a very high vested interest in making things better, and <a href="http://www.linaro.org/">Linaro</a>
is on the right track.</p>
<p>Groups like <a href="http://beagleboard.org/">Beagleboard.org</a> are making things better, but they are a tiny
team taking on the world and supporting not only documentation and code, but
trying to sell boards and do so on a shoe-string budget, all while making a
profit and not stepping on too many TI toes. There need to be more
organizations like <a href="http://beagleboard.org/">Beagleboard.org</a>, putting out low-cost dev kits and
pushing the SoC vendors to get their kernel code mainlined.</p>
<p>It shouldn’t take a team of engineers to get Linux running on a dev kit.
If your company has a team, or pays a team, that’s a sign that the dev kit
and SoC vendor industries are broken. Let’s fix that.</p>
I Want to Learn More About...2012-05-15T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/05/i-want-to-learn-more-about<p>Here are some things I want to learn more about:</p>
<ul>
<li>The Linux kernel</li>
<li>Microprocessors: ARM, MIPS, x86, and OpenRISC</li>
<li>Ruby and Rails</li>
<li>git</li>
<li>Flash memory. Mainstream use is approaching fast.</li>
<li>Compilers and software build tools</li>
</ul>
<p>And some things I want to start learning about:</p>
<ul>
<li>Heroku, EC2 / AWS, Engine Yard, or some other cloud technology.</li>
<li>Networking. IPv6 isn’t a big deal yet but it will be eventually.</li>
<li>Verilog / VHDL. I’ve done a tiny bit but I won’t be contributing to
<a href="http://opencores.org/">OpenCores</a> any time soon.</li>
<li>JavaScript and AJAX. The interactive web is a big deal.</li>
</ul>
Building the Arago Linux Kernel for Beaglebone2012-05-08T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/05/building-arago-kernel-for-beaglebone<p>Things you’ll need:</p>
<ol>
<li>The TI <a href="http://software-dl.ti.com/dsps/dsps_public_sw/am_bu/sdk/AM335xSDK/latest/index_FDS.html">am335x-evm-sdk sources</a></li>
<li>A git clone of the <a href="http://arago-project.org/git/projects/?p=linux-am33x.git">Arago am33xx v3.2-staging branch Linux source</a></li>
<li>Some x86 to <a href="http://wiki.debian.org/EmdebianToolchain">ARM cross compiler (I like Debian)</a></li>
<li>Patience</li>
</ol>
<p>First clone the Arago am33xx kernel tree, or if you already have a kernel tree
locally, add the Arago repo as a remote and check out the <code class="language-plaintext highlighter-rouge">v3.2-staging</code>
branch. Then unpack the TI am335x-evm-sdk sources and, inside them, unpack the
Linux kernel sources, so that you can get the
Cortex-M3 firmware needed for proper power management. The firmware is located
at <code class="language-plaintext highlighter-rouge">firmware/am335x-pm-firmware.bin</code>. Copy it to the <code class="language-plaintext highlighter-rouge">firmware/</code> directory in
the Arago kernel tree you’ve checked out.</p>
<p>Now, within the Arago Linux tree, you’ll execute all the rest of the commands.</p>
<p>First, make sure everything’s clean:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make mrproper
make ARCH=arm clean
</code></pre></div></div>
<p>Then load the <code class="language-plaintext highlighter-rouge">am335x_evm_defconfig</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- am335x_evm_defconfig
</code></pre></div></div>
<p>If you’d like to make any changes, you may use menuconfig. I personally like
to enable DEVTMPFS and automatically mount it at boot time since I don’t create
any entries in <code class="language-plaintext highlighter-rouge">/dev</code> on my root file system (I’m lazy, <a href="https://gist.github.com/2634388">patch available in
a gist</a>, apply the patch before loading the defconfig):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- menuconfig
</code></pre></div></div>
<p>Then build the uImage (replace the -j5 with an appropriate value for your build
machine):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- -j5 uImage
</code></pre></div></div>
<p>Copy the resulting uImage to your SD card and boot up your Beaglebone!</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cp arch/arm/boot/uImage /path/to/sdcard/
</code></pre></div></div>
<p>Make sure when unmounting your SD card that you allow the unmount to return
before physically removing it from your PC. A <code class="language-plaintext highlighter-rouge">sync</code> before the <code class="language-plaintext highlighter-rouge">umount</code>
command is my usual operation, to ensure all data has been written to the SD
card before I pull it. Linux on your PC will buffer data being written, so the
actual <code class="language-plaintext highlighter-rouge">cp</code> command will return before data has been fully written to the card.</p>
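<p>The whole copy-and-eject sequence, as a sketch (the mount point is an example; check where your card is actually mounted before running this):</p>

```shell
# Mount point is an example -- substitute wherever your SD card lives.
cp arch/arm/boot/uImage /media/sdcard/
sync                   # flush the kernel's write-back buffers to the card
umount /media/sdcard   # wait for this to return before pulling the card
```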
Open Office Plans Suck2012-03-28T00:00:00-04:00hhttp://www.bradfordembedded.com/2012/03/open-offices-suck<p>Open office plans suck.</p>
<p>Cubes are only slightly better.</p>
<p>Private offices with doors are best.</p>
<p>In an open office plan or in a cube, I put my headphones on when I want to get
real work done. I don’t work optimally with music but I sure as heck work
better with music than with hearing other people talking about things that
aren’t relevant to me.</p>
<p>If I had a real office with a door that closed and walls that went up to the
ceiling, I could close my door when I want to get real work done. Then I won’t
need my headphones and I can work optimally. Other conversations will be
muffled to the point where I can’t understand any talking I can hear and I
won’t get distracted.</p>
<p>The closed door is also a sign. It says, “I’m doing real work now. Don’t
bother me unless it’s urgent.” If I’m not doing real work, I’ll leave the
door open, feel free to interrupt me.</p>
<p>Non-real work includes answering email, searching the web, or
reading blogs related to something I’m interested in. Real work includes
writing code, thinking about hard problems, reading technical papers or data
sheets, writing important documents, or testing things that might not work
correctly. I don’t want to be interrupted when I’m doing real work.</p>
<p>It’s hard to put a price on the ability to do real work with a closed door.
Quantifying $X per year of output in order to justify the cost of building
real offices with real walls and real doors is difficult. You can’t do a
direct comparison with two different people, their abilities won’t match. You
can’t stick me in an office for a year, measure my output, then stick me in an
open office plan and measure again and produce any kind of reasonable
comparison. I won’t be doing the same work (otherwise the second year better
be way more productive since I’ve already done it once!).</p>
<p>Real walls and a door can’t cost more than a few thousand dollars. Cube walls
will also cost a few thousand dollars, but possibly slightly less. Yes, cube
walls are reconfigurable, but how often has anyone actually seen cubes get
reconfigured? I’ve seen it once in the past 7 years, and if they hadn’t been
reconfigured, no one would have cared (we moved 3 semi-cubes from one end of
the office to the other because they matched the furniture over there better).</p>
<p>If you’re planning on rearranging your office, either give everyone real
offices with walls and doors or don’t rearrange your office. Save your money.
Nothing but offices with walls will be better than what you currently have,
so either build real offices or don’t do anything and keep the money in the
bank.</p>
Iterate Hardware Like Software2012-03-05T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/03/iterate-hardware-like-software<p>Engineering companies don’t iterate on hardware like they do with software.
This is a <strong>BAD THING!</strong></p>
<p>Software is developed in iterations (agile, lean, and a million other buzz
words made up every year). Write some code, compile, run, test, repeat. When
it’s good enough, ship it. Continue iterating to add features and squash bugs.
Job done. That’s software.</p>
<p>Hardware is developed with a mindset that, “It sure as hell better work the
first time!” This is wrong on so many levels. It takes a heck of a lot longer
to make something work on the first try, so the engineer becomes very risk averse.
Anything that might not work gets outrageous attention, which takes time.</p>
<p>Even simple things get reviewed once, twice, three times, and by a committee.
There’s schematic reviews, layout reviews, and BOM reviews. All have to be
attended by a huge number of people. God forbid one little thing is wrong,
because then <strong>IT WON’T WORK AND THE SKY WILL FALL AND OUR FIRST PROTOTYPE
WON’T BE LOVED BY ITS MOMMY!</strong></p>
<p><strong>NO!</strong> This is wrong!</p>
<p>Iterate on hardware like your software team iterates on software. Constantly.
Build early, build often, fail constantly! It’s only through failure that the
weaknesses are found and can be removed. If the first prototype shows up and
it doesn’t work at all, find out why, fix it (physically if possible) and update
the design documents. Then build another. Find its failings, fix them. And
repeat. When a prototype works well enough, <strong>SELL IT!</strong> Then keep iterating
to make it cheaper, easier to manufacture, etc, all of which <strong>MAKE MORE MONEY!</strong></p>
<p>In tandem with this, if your hardware has software or firmware that’s
expected to run on it (let’s be honest, it does, everything does), your
software team can’t figure out if their software works until the hardware
arrives. If, when the hardware arrives, it’s different than the software team
thought, there’s a huge effort to correct things in the software. This may
take weeks. You don’t want that.</p>
<p>If you have hardware prototypes showing up every week, in various states of
working-ness, the software people can start trying stuff out. They’ll even be
able to work around some of the hardware bugs with their code, getting you to
a sellable product even faster. There won’t be any week or month long project
for the software team because the hardware works differently than they expected,
they’ll be involved the entire time through the hardware’s development.</p>
<p>And! Once the hardware is “good enough,” your software will likely be as well.
Then you can <strong>SHIP IT AND MAKE MONEY!</strong> which is why we’re all here in
the first place.</p>
<p>You should expect that the hardware design files on the hardware engineer’s
computer will be at least 1 or 2 versions newer than the hardware that the
software team is working with. Every week, new prototypes should be arriving.
Every week, incremental gains can be had in the hardware design. Every week
you <strong>will be paying for new prototypes</strong>, but only a handful.
If a feature doesn’t make it into a prototype build, <strong>THAT’S OK</strong>
because it will make it in the next build, which is only 1 week away.</p>
<p>Oh yeah, and your contract manufacturing house will already have 90% of the
parts you need on-hand once your first prototype is built, which saves you
even more time for each successive iteration. Making the leap to designing
hardware this way is hard, it’s different, and that first time, you will order
thousands of dollars of boards that <strong>won’t work</strong>. But that’s the point!
The next batch you order, in 1 week, <strong>WILL WORK</strong> and your software team will
start coding on them, directly, getting you ever closer to a product you can
sell.</p>
<p>Apple does this with their hardware. Why don’t you?</p>
BeagleBone LEDs with Arago Kernel v3.22012-03-01T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/03/beaglebone-leds-with-arago-kernel-v3.2<p>If you boot your BeagleBone using the supplied micro SD card image, there are
two LEDs (near the Ethernet jack) that will blink. D2 (USR0 on the schematic)
will blink a heart beat, and D3 (USR1 on the schematic) will blink on SD card
access.</p>
<p>This is really handy. Since the Linux kernel is actually performing the blink
operations, as long as the heart beat is going, the kernel is running. And if
your BeagleBone is not responding well, but the SD card access LED is flashing
like there’s no tomorrow, it’s just because SD cards are slow and a lot of disk
access is going on.</p>
<p>But if you want to run TI’s <a href="http://arago-project.org/git/projects/?p=linux-am33x.git;a=shortlog;h=refs/heads/AM335XPSP_04.06.00.06">AM335XPSP_04.06.00.06 kernel</a> from the
<a href="http://arago-project.org/wiki/index.php/Main_Page">Arago Project</a>, these LED actions aren’t enabled by default.</p>
<p>To enable them, you’ll need a <a href="https://gist.github.com/1950437">small patch</a>. This is cobbled together from
code I found from the <a href="http://code.google.com/p/beagle-borg/source/browse/kernel/arch/arm/mach-omap2/board-am335xevm.c?spec=svn24c36edc3e53fb59cb703b08be8d25cecf548413&r=24c36edc3e53fb59cb703b08be8d25cecf548413">beagle-borg</a> team, but with portions not applicable
to the base BeagleBone removed.</p>
<p>I’m running Debian 6 Squeeze armel on my BeagleBone. I now have <strong>blinkin’
lights!</strong></p>
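<p>Once the LEDs are registered with the kernel’s LED class, their blink behavior can also be poked at runtime through sysfs. The LED names below are assumptions, not taken from the patch; check what actually shows up under <code>/sys/class/leds/</code> on your board:</p>

```shell
# List available triggers (the active one is shown in [brackets]):
cat /sys/class/leds/*/trigger
# Assign the heartbeat and MMC-activity triggers; the LED names here
# are assumptions -- use whatever appears under /sys/class/leds/.
echo heartbeat > /sys/class/leds/usr0/trigger
echo mmc0 > /sys/class/leds/usr1/trigger
```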
Smells Like Electrolytic Capacitors2012-02-24T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/02/smells-like-electrolytic-capacitors<p>Waking up at 2 am yesterday morning, I smelled something funny. My house was
not burning down, which was good, but something didn’t smell right.</p>
<p>Opening the door to my home office, the smell got much, much worse.</p>
<p>My wife had been using the office that day and had left her laptop on the desk.
It was still on and so became the prime suspect in my search for the smell
generator. We shut everything down, pulled all the power cords for all devices
in the office out of their
outlets, opened the windows (it was about 30 degrees F outside, thankfully not
any colder), and started to air out the room. Airing out lasted about an hour
before we both were fed up waiting, but the smell had reduced to a much lower
level and still nothing was on fire, so we closed everything up, put her laptop in
another room and went back to sleep.</p>
<p>If the laptop was the root of the smell, the smell should follow it. The smell
did not follow it.</p>
<p>So that left my desktop computer…</p>
<p>I didn’t have time to investigate in the morning, but the smell was not as bad
as it was in the middle of the night. Keeping the door and all heating vents
in the room closed, I went off to work. Upon my return, yesterday evening, the
smell was still present, but at a reduced level. Since I was now awake (2 am
isn’t my brain’s best hour), I tried powering on my desktop.</p>
<p>No dice. My RAID card usually beeps every time the PCI bus reset gets pulled
(which happens twice at normal power up); this time it made the saddest sound a
RAID card with a buzzer can make. Like it had lost its soul. And then no boot. So,
seeing as the machine wouldn’t boot, the first thing that came to mind was
either the power supply or motherboard.</p>
<p>A quick bit of looking at my desktop’s motherboard showed no signs of dying
electrolytic capacitors or other obvious signs of death, so the power supply
became the prime suspect. Pulling the power supply and looking closely though
the fan grates yielded the culprit: electrolytic capacitors with their
electrolyte popping out of the top of the can… That’s not good…</p>
<p>The power supply has since been relegated to the garage, to wait until the next
time I recycle some electronics. I’m now waiting patiently for the UPS guy to
show up with a new Seasonic 520 Watt unit. It can’t come fast enough! :)</p>
<p>Last year, about this time, I was experiencing some RAM troubles. Whenever
I would plug in an SD card reader, the kernel would complain about ECC errors.
If I let it go long enough, I’d get to enjoy a fun crash. Performing a bunch
of memory tests with memtest86+ yielded no errors. Regardless, every
time I plugged that SD card reader in, ECC errors would arise and shortly
thereafter, a crash. I narrowed the errors down to 4 of my 8 sticks of DDR
memory; after removing them, the ECC errors and crashes went away.</p>
<p>I suspect that the power supply had been dying for quite a while, and that it
may have actually been the root cause of my ECC errors. Since memtest86+
could not detect any problems, even when run for a full day, things just weren’t
adding up. Maybe I’ll get to reinstall the other half of my DDR once the new
power supply arrives, that’d be a nice bonus.</p>
<p>If you’re wondering why I didn’t just buy new DDR, you should price out
registered ECC DDR400. It’s not cheap, but I am. One 1GB stick from Crucial
costs as much as the power supply I just bought. My desktop is a 5+ year old dual
processor (physical ones, not cores) AMD Opteron box, and memory must be
installed equally between the two processors and in pairs (to get dual-channel
operation), so the only choices are 4 or 8 sticks of DDR if I want reasonably
good performance. I was too cheap to try and buy another stick of DDR since
my computer still worked and the impact of half as much memory really wasn’t
that bad. Although the prospect of getting it all back would be nice…</p>
Triggers and Whining2012-02-10T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/02/triggers_and_whining<p>I’m reading <a href="http://www.amazon.com/gp/product/1439127662/ref=as_li_ss_tl?ie=UTF8&tag=bradford07-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=1439127662">The Way We’re Working Isn’t Working</a> by Tony Schwartz.
In it, Tony talks about triggers: things that get a negative reaction. Be
they actions, words, or world events, triggers are things that get our blood
pressure up and make us act in fight or flight mode. Triggers happen to
everyone, it’s how we each deal with the triggers in our lives that sets us up
to have a better or worse life.</p>
<p>The Way We’re Working recommends taking a step back when confronted with a
trigger. Ask yourself what are the facts and what are the stories you’re
telling yourself. There’s a big difference! Facts are facts. Actual things
that happened are facts. Stories are what our brains tell us in order to make
sense of facts.</p>
<p>I’m not very good at dealing with triggers. The best evidence I have is posts
I’ve made recently about <a href="/2012/02/atlassian-vs-github">Atlassian</a> and <a href="/2012/01/fuck-you-windows">Windows</a>. This
is something I need to personally work on. I had facts in both cases, but I told
myself (and then my blog) stories in order to rationalize those facts in a way
that made sense to me in my upset state. Once I calmed down, my stories
changed; that’s a good indicator that my stories were leaving out critical facts
or that I was telling them in a way that made me feel good about myself by being
negative about something else. That’s not healthy.</p>
<p>I’ve also recently read an interesting essay on value from <a href="http://www.theajinetwork.com/index.php">The Aji Network</a>.
(Sorry, I’m not allowed to link to the essay.) In it, there’s mention about the fact
that the market does not respond to whining, only to actual action. People who
whine and don’t take any action aren’t really of concern.</p>
<p>For example, I was whining about Atlassian and Windows, but I’m not in a place
to actually take action to stop using those things. For one, I’m a member of
a company that’s using JIRA and tools that only run on Windows (no, VisualStudio
isn’t a candidate for <a href="http://www.winehq.org/">WINE</a>). I’m just whining. The market doesn’t
really care because I’m not actually going to take action on my whines.</p>
<p>Atlassian has actually gotten in contact with me (complaining on twitter really
does work!) and they explained why I had to enter credit card details. They’re
working to explain that better in their purchasing pages and they’re looking
into why I wasn’t notified when our account actually got upgraded. Turns out
that most account upgrades do happen in minutes.</p>
<p>In all, I’m going to work on my triggers and try to stop whining. It’s not good
for my health and the market doesn’t care.</p>
Atlassian vs GitHub2012-02-09T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/02/atlassian-vs-github<p>At work, we use both Atlassian’s OnDemand JIRA tool for tracking bugs and tasks,
and we use GitHub for version control. Both our accounts have everything
private, we’re not running an open source project.</p>
<p>I’ve personally not been that fond of the JIRA tool. I’m sure it’s good for
some instances, but either the way it’s intended to work or the limitations
put on our OnDemand instance cause it to be less than stellar in my eyes. It
could just be that the point where I am in bug and feature tracking doesn’t
need the complexity that JIRA brings to the table. I’ve switched over to using
GitHub issues for my day-to-day and my workflow is smoother.</p>
<p>Also, the way that Atlassian and GitHub deal with payments is rather different.
Both keep our (well, our CTO’s) credit card on file and bill it monthly. But
if you want to upgrade your level of service (adding users to Atlassian / adding
repos to GitHub), the two handle it in drastically different ways.</p>
<p>GitHub has a listing of their different offering levels, the number of private
repos you can have, and you push the “Upgrade” or “Downgrade” button to move
your account up or down in cost and available private repos. After pushing
the button, you’re asked to confirm, and then <strong>WHAM</strong> your account changes to
reflect your desires. No fluff. I like it.</p>
<p>Atlassian on the other hand, requires you to enter credit card details when
changing your account level. You can’t select to just use the card already on
file. Yesterday, I wanted to move our OnDemand instance
from 10 users to 15 (we now have 11 developers who need access).
The CTO was not around, so I couldn’t use his credit card, which meant I had
to use my own. I thought, “No big deal, right?” Well, sort of. I now have to
submit an expense report, and since Atlassian says that my credit card is now
going to be the one billed monthly, rather than the one previously on file, I
have to get the CTO’s card from him when he’s back in town and change our
billing preferences, again! To top it all off, after I go through all of this,
Atlassian tells me, “our sales team member will review and process your order
within the next 1 - 3 business days.” <strong>ARE YOU SERIOUS?!?!</strong></p>
<p>Shipping physical things might take a day or two to grab, pack, and slap a label
onto. Charging a credit card and changing a variable from 10 to 15 shouldn’t!
A few hours, that’s reasonable. A few minutes, that’s the expectation.
Days just don’t exist on the internet.</p>
<p>I’ve gotten word back from Atlassian sales. Turns out our account got upgraded
overnight, which is good. But their response to my complaint about entering
credit card details <strong>again</strong> is that they do this on purpose to ensure that
the credit card owner isn’t surprised by changes to the billing amount. I can
only assume they’ve had problems before. I’m confused by this, since only those
who can manage the account can upgrade it. I’d think the solution here is to
tell customers not to let incompetent people manage an account where billing
decisions can be made.</p>
Moving Day is Coming!2012-01-18T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/01/moving-day-is-coming<div class='post'>
<p>Shortly, Bradford Embedded will be moving away from Blogger!</p>
<p>The exact timing isn't set yet but I will be sure to post the new home prominently when it happens. I'm moving to a Jekyll generated blog hosted on GitHub pages. The goal is to have the transition go as smoothly as possible, so I'm taking it slow. Until I've fully switched over, all posts will show up on both platforms.</p>
<p>To see my progress - and the full archive of posts imported from Blogger - head on over to the new <a href="http://www.bradfordembedded.com">BradfordEmbedded.com</a>.</p>
</div>
Open Hardware Innovation - It's Coming...2012-01-16T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/01/open-hardware-innovation-it-s-coming<div class='post'>
I predict that within the next 2 years, <a href="http://www.openhardwaresummit.org/">open hardware</a> designs will really start giving semi-commodity hardware a run for its money.<br /><br />Things like wifi routers, Ethernet switches, add-in cards, and input devices. The time of the somewhat specialty non-open hardware design is coming to an end.<br /><br />Given the capabilities of the BeagleBoards and the <a href="http://www.bunniestudios.com/blog/?p=2117">neTV</a>, <a href="http://www.shapeways.com/">Shapeways</a>, lower cost FPGAs, open source micro controller development environments, and lower cost hardware design tools: there's going to be a really fun shake-up of the hardware landscape. Combine this low cost (to develop) tech with <a href="http://www.kickstarter.com/">KickStarter</a> type funding and I expect a large number of open hardware companies will emerge in the next few years.<br /><br />The early adopters will be power users who don't mind a few inconveniences at first in order to support awesome products (just like on the web or with software). After about a year of an open hardware product being out, it'll start leaking into the masses. Then it will all be over. There will be a commoditization of hardware in the same way open / free software has commoditized a huge swath of the software landscape (open / free web servers run a HUGE portion of the web). Traditional vendors of hardware will start using the open / free hardware as a basis for their designs because it will be cost prohibitive to do otherwise.<br /><br />We've seen this before with software. It's to the point now where if you're building a product with an embedded computer in it, you pick GNU/Linux (or Android, which at least uses Linux, the kernel). It's too expensive to do otherwise, there's just such an awesome set of base open / free software available.<br /><br />I'd love to be at the forefront of the open hardware wave. I'm not, yet. It's just starting now. 
I better get crackin'...</div>
<h2>Comments</h2>
<div class='comments'>
</div>
More BeagleBone USB Based Serial Port Rambling2012-01-13T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/01/more-beaglebone-usb-based-serial-port-rambling<div class='post'>
I'm somewhat disappointed with the BeagleBone's USB based serial port setup to connect to a host PC. I've written about some of my <a href="http://bradfordembedded.blogspot.com/2011/12/beaglebone-hardware-desire-usb-ftdi.html">frustration before</a>, but it's only getting worse for me.<br /><br />Now, sometimes (although not that often), when I plug the mini-USB cable into the BeagleBone while holding the reset button (to prevent booting) and open my serial terminal, releasing the reset gives me garbage on the terminal and the Bone won't boot. Resetting the board a few times often fixes this, but is very annoying. Also, if I already have the Bone running with external power and then connect the mini-USB cable, I still sometimes get garbage in the terminal.<br /><br />I'm not sure of the root cause, but it affects at least one of the BeagleBones I have on my desk (I have 3 total). It could very well be that minicom is somewhat to blame; I'm switching to picocom to see if that helps. Regardless, I'm now very much desiring a real serial port that avoids this whole FTDI chip business. It'd be nice to have a "real serial port cape" that provides DB9 connectors for one or two of the serial ports that are exposed through the expansion headers. Modifying the SPL, u-boot, and kernel to use a different UART isn't hard...<br /><br />The main issue would be getting the boot loaders to read the cape I2C memory and realize that it should change the default UART output automatically. That's doable if people are willing to rebuild the bootloader (or download a precompiled version) and use it. It would be handy if the default BeagleBone bootloader supplied with the kit did this, but maybe that's asking a bit much. I should probably write up this idea on the BeagleBoard mailing list...<br /><br />Having one connection for USB, JTAG, and power was a good goal for the BeagleBone team. It's just that I don't really want that. 
Apparently I'm special :). Otherwise I do really like the BeagleBone. I'd personally prefer a real RS232 levels serial port, external power adapter (or PoE!), and a JTAG header (dedicated FTDI JTAG devices are under $100 from many vendors).</div>
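Since I mention switching to picocom: a minimal invocation looks something like the below. The device node is an assumption (check dmesg after plugging in the cable), and the snippet is guarded so it only runs when the tool and device are actually present:

```shell
# Open the BeagleBone's FTDI-backed console with picocom.
# /dev/ttyUSB1 is an assumption -- your Bone may enumerate differently.
PORT=/dev/ttyUSB1
BAUD=115200
if command -v picocom >/dev/null 2>&1 && [ -e "$PORT" ]; then
    # Exit cleanly later with C-a C-x; picocom does no modem-init games like minicom.
    picocom -b "$BAUD" "$PORT"
fi
```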
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
I'm personally not a fan of screen for serial terminals. The scrollback isn't intuitive for me (my problem, not necessarily screen's as it can be changed in the config). I will give it a try the next time minicom or picocom barf.<br /><br />I've never had any issues using minicom for real serial ports (think /dev/ttyS0 and friends) or dedicated USB to serial port devices that have a DB9 connector. I've been having issues here and there with the integrated system on the BeagleBone. Because of my past good experiences, it makes me think the Bone is the problem, not me (but I'm slightly biased).</div>
</div>
<div class='comment'>
<div class='author'>Philip</div>
<div class='content'>
Try:<br /><br />sudo screen /dev/ttyUSB1 115200,cs8,-ixon,-ixoff</div>
</div>
</div>
Justifying Good Chairs2012-01-12T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/01/justifying-good-chairs<div class='post'>
<a href="http://www.codinghorror.com/blog/2008/07/investing-in-a-quality-programming-chair.html">Good chairs</a> aren't cheap. We're talking $800 for a high quality, new, warrantied chair from a good supplier like <a href="http://www.steelcase.com">Steelcase</a> or <a href="http://www.hermanmiller.com/">Herman Miller</a>.<br /><br />But those same chairs, in great condition, can generally be had on the used market for $400 to $500. $400 isn't that expensive for a chair when you think about it.<br /><br />Let's assume a quality engineer costs $100,000 per year (pay, benefits, taxes, and insurance) for a business. With 260 working days in a year (5 days per week, 52 weeks per year), that's $385 per day that the business is paying for the engineer's time.<br /><br />What does that mean for chairs? It means that over the course of a 3-year span where a quality engineer works for you, it's cheaper to buy a $400 chair than it is to have that engineer miss <b>1/3 of one day of work</b> (roughly 3 hours) per year.<br /><br />Or think of it this way: if having a good $400 chair makes your engineer 0.13% more efficient over 3 years, the chair pays for itself. That's <b>3 minutes</b> more output <b>per week!</b> <b>3 minutes!</b><br /><br />Or, brand new $800 chairs mean you need to get 6 minutes more work out of your 3-year engineer per week. Still not crazy, just over 1 minute per day more work to break even.</div>
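The break-even arithmetic above is easy to sanity-check. Here it is spelled out, using the post's own assumptions ($100,000 fully loaded cost, 260 working days, 8-hour days):

```shell
# Break-even math for a used $400 chair over a 3-year tenure.
COST_PER_YEAR=100000   # fully loaded engineer cost, $/year
DAYS_PER_YEAR=260      # 5 days/week * 52 weeks
HOURS_PER_DAY=8
CHAIR=400
YEARS=3

COST_PER_DAY=$((COST_PER_YEAR / DAYS_PER_YEAR))                      # ~$384/day
COST_PER_HOUR=$((COST_PER_YEAR / (DAYS_PER_YEAR * HOURS_PER_DAY)))   # ~$48/hour
BREAKEVEN_HOURS=$((CHAIR / COST_PER_HOUR))                           # ~8 hours over 3 years
MIN_PER_WEEK=$((BREAKEVEN_HOURS * 60 / (YEARS * 52)))                # ~3 minutes/week

echo "Daily cost: \$${COST_PER_DAY}; chair pays for itself in ${BREAKEVEN_HOURS} hours (${MIN_PER_WEEK} min/week over ${YEARS} years)"
```

Eight hours of recovered productivity over three years is the whole ask; that's where the "3 minutes per week" figure comes from.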
<h2>Comments</h2>
<div class='comments'>
</div>
HP Letting Us Down2012-01-11T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/01/hp-letting-us-down<div class='post'>
I've written about my experiences with my EliteBook 8560w a few times. Two other guys in the office also got 8560ws around the same time I got mine. They both run Windows 7 (I know! The horror! :) and have the AMD graphics option. Overall, nice machines, but there's one little gotcha:<br /><br />HP's 27" IPS monitor, the ZR2740w, refuses to be driven by the AMD chipset, with Windows 7 Pro 64 bit, over DisplayPort. We and HP have no idea why. I can plug my 8560w with NVIDIA graphics and Debian 6 into the ZR2740w, restart X (so it detects the monitor correctly instead of thinking it's my ZR2440w), and BAM!, I get the full 2560x1440 pixel glory.<br /><br />My office mate has been on the phone with HP support for hours trying to get this fixed. So far, no dice. And we're <a href="https://www.google.com/search?q=zr2740w+8560w">not the only ones with issues</a>.<br /><br />It boggles my mind why this isn't a plug-n-play type operation. Especially if an NVIDIA card, on 64 bit Linux (Debian Stable no less!), can drive the monitor just the way it is supposed to be driven. HP's really disappointing with this one.<br /><br />Side note: My ZR2440w and the ZR2740w are both "anti-glare" type monitors. There's a coating on the screen to disperse the light and make it matte. My ZR2440w has a pretty nice coating, I don't really notice it. The ZR2740w coating is much easier to notice, solid colors on the display look off, just not the solid single color I'd expect, almost like they're shimmering. I'm not a huge fan, but it's probably just something to get used to. I wonder, are other 27" panels with anti-glare the same? Or is HP's anti-glare somehow different than, say, Dell's?<br /><br />UPDATE 20120111 4:00pm - HP's now saying it's an ATI driver issue and that ATI will have an updated driver by the end of January.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Business Secrecy2012-01-09T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/01/business-secrecy<div class='post'>
There's lots of talk about "stealth" start-ups that don't publicize their market segment. In any company, there's usually also talk about not tipping off the competition about the next big thing. Basically, the idea of secrecy in business is normal and accepted.<br /><br />I (personally) think business secrecy is overrated.<br /><br />Granted, I've never owned my own business and I'm not a marketing mastermind, but I get the impression that keeping things secret really doesn't get you much. It's execution that matters.<br /><br />Look at Apple, everyone knows what they are working on at least 6 months in advance. iPhones leak. The rumors sites get insider information. Apple may even leak info out of its PR departments on purpose (but that's a rumor). It's free press for Apple, they don't pay a dime for this advertising. Apple didn't release an MP3 player till years after the first ones went to market. Even with everyone knowing basically what Apple is working on months before it comes out, they still completely dominate their markets.<br /><br />Look at your local pizza joint. Most make the pizza in view of customers at the counter. The ingredients are all known, just watch the delivery guy drop them off. The economics are easy to figure if you're in a competing pizza business. Not much is secret, yet there's probably hundreds of thousands of pizza joints doing very well around the world.<br /><br />It all comes back to execution.<br /><br />Execution on an idea, a product, or a service. All of the best performing companies have huge numbers of competitors all making very close to the same thing and selling to the same potential customers. It's execution that sets the best apart from the rest.<br /><br />Customer service, design, capability, and features. That's execution. Not basic ideas or secrecy.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Things That Matter in an Office2012-01-05T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/01/things-that-matter-in-an-office<div class='post'>
I'm now working at my second "real job." The two environments are very different but I've grown to appreciate a few office things:<br /><br /><ul><li>Free decent coffee</li><li>A small kitchen area</li><li>Offices with walls and doors, just like in <a href="http://www.amazon.com/gp/product/0932633439/ref=as_li_ss_tl?ie=UTF8&tag=bradford07-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0932633439">Peopleware</a><img src="http://www.assoc-amazon.com/e/ir?t=bradford07-20&l=as2&o=1&a=0932633439" width="1" height="1" border="0" alt="" style="border:none !important; margin:0px !important;" /></li><li>Startups don't have open office environments because they like them; real estate and renovations are a pain, and that effort would be better put toward making product</li><li>Parking shouldn't require a 10 minute walk, in the cold, in Rochester</li><li>No one should be using a monitor smaller than 24 inches</li><li>Laptops, get them</li><li>Solid State Disks, get them for the laptops</li><li>Wifi, gotta have</li><li>Fast, symmetric net connection for the office, at least 10/10 Mbit for a few people, more for more</li><li>Don't block any of the web. Trust your employees not to surf porn at work and they won't.</li><li>Big desks with lots of space are very handy, reduces the need for lab time with hardware</li><li>GOOD CHAIRS! They don't have to be Aerons (my present chair leaves something to be desired)</li><li>Open 24/7 for use</li></ul></div>
<h2>Comments</h2>
<div class='comments'>
</div>
Load Average - 10.082012-01-04T00:00:00-05:00hhttp://www.bradfordembedded.com/2012/01/load-average-10-08<div class='post'>
I like my HP 8560w laptop :)<br /><br />I currently have a 1 minute load average of 10.08 and the thing is still fairly responsive. I'm doing an rsnapshot backup of the main SSD with an encrypted ext4 disk to an external USB 3.0 encrypted ext4 disk while cross compiling Linux 3.1 for the BeagleBone and running a Windows 7 VM in VirtualBox. I'm going to credit about half of this ability to the Core i7 and the other half to the SSD; the 8GB of RAM helps too, I'm sure (look ma, no swap!).<br /><br />Patiently awaiting Linux 3.1 or 3.2 to show up in Squeeze backports (if it ever will) so I can enable discards for the SSD and punch through the encryption layer. That should give a little better response from the SSD, but even after 2 months of decent usage, the SSD is still much faster than a traditional hard disk. I'm not sure how important TRIM really is...</div>
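For the curious, once a kernel with dm-crypt discard support is in place, passing TRIM through the encrypted stack comes down to two one-word options. This is a sketch only: the device path and mapper name are made-up placeholders, and be aware that passing discards through dm-crypt leaks some information about which blocks are in use.

```
# /etc/crypttab -- 'discard' lets dm-crypt pass TRIM down to the SSD
# (needs Linux >= 3.1 and a recent cryptsetup)
sda2_crypt  /dev/sda2  none  luks,discard

# /etc/fstab -- 'discard' enables online TRIM on the ext4 layer above it
/dev/mapper/sda2_crypt  /  ext4  defaults,discard,errors=remount-ro  0  1
```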
<h2>Comments</h2>
<div class='comments'>
</div>
NameCheap and SOPA2011-12-29T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/12/namecheap-and-sopa<div class='post'>
I'm now a fan (and customer) of <a href="http://www.namecheap.com?aff=26161">NameCheap</a> (that's an affiliate link, if you're interested in supporting Bradford Embedded).<br /><br />NameCheap is running a special today on any domain transfers. Use the coupon code "<a href="https://www.namecheap.com/moveyourdomainday.aspx">SOPAsucks</a>" (without the quotes) and transfers are only $6.99, plus NameCheap will donate $1 to the EFF for every transfer! Transfers include 1 year of renewal.<br /><br />The target appears to be GoDaddy, who were supporting SOPA. I dislike GoDaddy (hence, no link) as well, but I had that opinion long before SOPA came about. NameCheap has been a fairly popular topic over on <a href="http://news.ycombinator.com/">HN</a> recently. That's where I heard about them. If you use the coupon code and my affiliate link, I don't get any money but that's OK (I encourage you to use the coupon! SOPA does suck, and supporting the EFF is way more important than supporting me).<br /><br />NameCheap's also running a special where if you register a new domain (in at least .com and .net, and possibly others), you get the .org domain registration for free. No coupons required. The affiliate link should still pay me if you use this deal, and you won't pay a higher price because of it.<br /><br />I've transferred my domains to NameCheap from <a href="http://dyn.com/">Dyn</a>. I liked Dyn, I have nothing against them, but NameCheap lives up to their name, they're cheaper, and I like supporting the EFF and companies who take stands.<br /><br />A short while ago, <a href="http://bradfordembedded.blogspot.com/2011/11/google-adwords-gone.html">I removed ads from my blog</a>. I'm still against ads, even though this post may sound like one. It is, but only because I'm a very happy customer, wanted to share my experience, and if it can net me a few bucks, all the better. I'm also in a transparent mood today ;)</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Set The World On Fire2011-12-27T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/12/set-the-world-on-fire<div class='post'>
I like <a href="https://twitter.com/FAKEGRIMLOCK">FAKEGRIMLOCK</a>. You should get in on the <a href="http://www.kickstarter.com/projects/531215105/make-fakegrimlock-posters">KickStarter</a>. I also like <a href="http://sethgodin.typepad.com">Seth Godin</a>.<br /><br />Put the two together:<br /><br />If you want to set the world on fire, first, <a href="http://www.feld.com/wp/archives/2011/10/be-on-fire.html">you must burn</a>!<br />If you're not setting the world on fire, your house is burning. <a href="http://sethgodin.typepad.com/seths_blog/2011/12/firemen-donuts-and-meetings.html">Do something about it, NOW!</a><br /><br />I like these analogies for business.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
BeagleBone Debian Squeeze armel Multistrap Config2011-12-27T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/12/beaglebone-debian-squeeze-armel-multistrap-config<div class='post'>
For those interested, here's my current Debian Squeeze armel <a href="http://wiki.debian.org/Multistrap">multistrap</a> configuration file that I'm using to build a Debian file system for my BeagleBones:<br /><br /><script src="https://gist.github.com/1523416.js?file=multistrap-armel.conf"></script><br /><br />It's going to be a little <a href="http://rubyonrails.org/">rails</a> server :)<br /><br />EDIT: You'll probably want to use the <a href="http://packages.debian.org/squeeze-backports/multistrap">backports version of multistrap</a> if you want to verify package signatures and you have fakeroot installed. There's no support for this in the standard Squeeze multistrap version.</div>
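In case the embedded gist doesn't load, a minimal multistrap configuration in the same spirit looks roughly like this. The package list, mirror, and target directory are illustrative guesses, not my exact file:

```
[General]
arch=armel
directory=target-rootfs
cleanup=true
noauth=false
unpack=true
bootstrap=Debian
aptsources=Debian

[Debian]
packages=apt netbase udev openssh-server ifupdown
source=http://ftp.us.debian.org/debian
keyring=debian-archive-keyring
suite=squeeze
```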
<h2>Comments</h2>
<div class='comments'>
</div>
BeagleBone Boot Time2011-12-27T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/12/beaglebone-boot-time<div class='post'>
Apparently a lot of people want to know things about BeagleBone boot time...<br />Let me know if this is covering it or if you'd like more info. Most of my work is with <a href="http://trac.cross-lfs.org/">CLFS</a> and <a href="http://www.debian.org/">Debian</a>.<br /><br />I'm seeing about 8.5 seconds from kernel start to login prompt with a rather pared down Debian Squeeze armel configuration and Linux 3.1. Add about two seconds for SPL, u-boot, and kernel decompression. Then add on the boot delay countdown (3 seconds in my case). That gets us to about 13.5 seconds to boot.<br /><br />Let's be conservative and say 15 seconds. Completely respectable. Not a 1 second boot, but that's not my goal; anything under a minute is pretty good in my book.<br /><br />To give you a sense of what I'm starting at boot time for services, here's some output:<br /><br /><script src="https://gist.github.com/1523407.js?file=gistfile1.txt"></script></div>
<h2>Comments</h2>
<div class='comments'>
</div>
BeagleBone Booting Debian Squeeze!2011-12-19T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/12/beaglebone-booting-debian-squeeze<div class='post'>
I've got my BeagleBone booting Debian 6 Squeeze!<br />(The real Squeeze, not Emdebian)<br /><br />I've used the <a href="http://arago-project.org/git/projects/?p=u-boot-am33x.git;a=summary">u-boot</a> and <a href="http://arago-project.org/git/projects/?p=linux-am33x.git;a=summary">kernel</a> sources (with only a few tiny modifications from the am335x_evm_defconfig) from the <a href="http://arago-project.org/wiki/index.php/Main_Page">Arago project</a> git repos. I'm using <a href="http://wiki.debian.org/Multistrap">multistrap</a> and qemu-user-static to build and configure Debian on my amd64 host, then copying the files over to the microSD card.<br /><br />I'll be sure to post more info on the steps in the near future. The partitioning and formatting of the microSD card is the first step.<br /><br />Click through the jump to see output in a Gist.<br /><br /><a name='more'></a><br /><br /><script src="https://gist.github.com/1497690.js?file=gistfile1.txt"></script></div>
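The build flow can be sketched as below. The config filename and rootfs directory are assumptions, and the whole thing is guarded so it does nothing on a machine without the tools installed:

```shell
# Build an armel rootfs on an amd64 host, then configure it under emulation.
ROOTFS=target-rootfs
if command -v multistrap >/dev/null 2>&1 && [ -x /usr/bin/qemu-arm-static ]; then
    # Fetch and unpack packages for armel into the rootfs directory.
    multistrap -a armel -f multistrap-armel.conf -d "$ROOTFS"
    # qemu-arm-static inside the rootfs lets the ARM maintainer scripts run on amd64.
    sudo cp /usr/bin/qemu-arm-static "$ROOTFS"/usr/bin/
    sudo chroot "$ROOTFS" dpkg --configure -a
fi
```

After that, the rootfs contents get copied onto the card's ext3 partition.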
<h2>Comments</h2>
<div class='comments'>
</div>
Format an SD Card with 8 MiB Aligned Partitions2011-12-14T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/12/format-an-sd-card-with-8-mib-aligned-partitions<div class='post'>
SD cards generally have 2, 4, or 8 MiB (MiB = 1024 * 1024 bytes) erase blocks and 8 KiB write pages. Because of this, your partitions on an SD card should be aligned so that they begin and end at the edges of the erase blocks. This, of course, assumes the SD card isn't doing address translation and sticking data wherever it wants (which the low priced ones probably aren't doing, though the high priced ones might be). So take this with a grain of salt :)<br /><br />If you want to create a microSD card for use with the BeagleBone that uses partitions aligned to 8 MiB boundaries, you can use the below script. It's adapted from the <a href="http://elinux.org/Panda_How_to_MLO_%26_u-boot">eLinux.org Panda U-Boot instructions</a>. It assumes you are using a microSD card that shows up as an mmc device where partitions are named /dev/mmcblk0p1 (and not a SCSI device, like some USB based SD readers present, where partitions are named /dev/sdb1). You'll destroy all data on the microSD card and you'll end up with a 64 MiB FAT partition and the rest of the device as an ext3 partition.<br /><br /><script src="https://gist.github.com/1477240.js?file=format_sd.sh"></script><br /><br />Copy your MLO and u-boot.img onto the FAT partition and boot it up! The BeagleBone will boot this partition mapping, even though it's different from TI's recommendations. I'm using the SPL and u-boot from the <a href="http://arago-project.org/git/projects/?p=u-boot-am33x.git">Arago repo</a>.<br /><br />I'd like to build in support for determining the partition type (mmc versus SCSI) automatically, but that's not done yet. My HP 8560w has a built-in SD card reader that shows up as an mmc device.</div>
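The alignment itself is just sector arithmetic. Here's a sketch that computes an 8 MiB-aligned layout and prints it in sfdisk's script input format; the device node and the sfdisk syntax (which has changed across util-linux versions) are assumptions, and the actually-destructive line is left commented out:

```shell
# 512-byte sectors: 8 MiB = 16384 sectors; first partition is 64 MiB of FAT.
SECTOR=512
ALIGN=$((8 * 1024 * 1024 / SECTOR))     # 16384 sectors per erase block
BOOT=$((64 * 1024 * 1024 / SECTOR))     # 131072 sectors = 64 MiB
P1_START=$ALIGN                         # skip the first erase block (MBR lives there)
P2_START=$((P1_START + BOOT))           # 64 MiB is a multiple of 8 MiB, so p2 is aligned too

printf 'start=%s, size=%s, type=c, bootable\n' "$P1_START" "$BOOT"
printf 'start=%s, type=83\n' "$P2_START"
# ... | sudo sfdisk /dev/mmcblk0   # DESTROYS the card; uncomment deliberately
```

Because both the FAT partition size and the alignment unit are powers of two, the second partition's start falls on an erase-block edge automatically.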
<h2>Comments</h2>
<div class='comments'>
</div>
ROC The Day!2011-12-08T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/12/roc-the-day<div class='post'>
If you live near Rochester, NY, or if you know someone who does, please take a few minutes to visit <a href="http://roctheday.org">ROCtheday.org</a> and donate to one of our local not for profits. This is a one day (today!) only fundraising event for a huge number of Rochester groups.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
SanDisk iNAND2011-12-07T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/12/sandisk-inand<div class='post'>
I stumbled upon <a href="http://www.sandisk.com/business-solutions/inand-embedded-flash-drives">SanDisk's iNAND</a> products today while doing some searching about SD cards. The iNAND idea looks very appealing to me compared to raw flash from a board design perspective. Since iNAND looks just like an MMC device, you can hook up to it with a 4 or 8 bit data bus and your total useful pin count can stay around 11 pins, not counting power, ground, and decoupling caps for internally derived voltages. In terms of keeping pin count down, that's awesome!<br /><br />Then, looking at what appears to have been a leaked preliminary data sheet for an older version of the current iNAND devices, the speed is pretty good, and internally there's 512 byte pages, wear leveling, TRIM support, and a host of other features required in demanding storage applications (like industrial devices or smart phones).<br /><br />I assume the controller internals are very similar to those found in SanDisk's high end SD cards, and so using iNAND may not give much of a performance difference. But in some cases, a removable SD card is a liability and soldered down memory would be preferred, such as if people might steal it or in environments with high vibration.<br /><br />The only drawback I see is with the lack of removability. Either SanDisk needs to program your desired initial data into the iNAND at time of purchase, or you need a fixture to interface the iNAND on your own manufacturing line and engineering work bench. Not a big deal if you have the resources and design the board it mounts to correctly. And then if you create a situation where the board is unbootable (I'm looking at you, in the field software upgrades), you'll need either the fixture or a JTAG device, just like raw flash.<br /><br />I don't know the costs associated with each iNAND family but I do have a request in with SanDisk sales to get more info and data sheets. 
Not sure I'll be able to share that data, but if I am, I will.<br /><br />UPDATE 20111208 1:45pm - After more research, the iNAND family of devices is a type of <a href="http://en.wikipedia.org/wiki/MultiMediaCard#eMMC">eMMC device</a>. Basically it's MMC but in a soldered down package, usually BGA. Another handy bit: when developing hardware that will eventually use eMMC, you can lay down the footprint for both the eMMC device and a normal microSD card connector and only populate the one you wish to use (eMMC in production, microSD in development).</div>
<h2>Comments</h2>
<div class='comments'>
</div>
BeagleBone Hardware Desire - USB and FTDI Power Independence2011-12-02T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/12/beaglebone-hardware-desire-usb-ftdi-power-independence<div class='post'>
Now that I've got my BeagleBone, I'm still not happy :)<br /><br />I'd like to be able to plug the USB cable into my host PC but not have the board power up. Since the USB cable is now also the serial cable, I'd like to have a few seconds after plugging the USB cable to get my serial port terminal up and running so I can watch the x-loader and u-boot data from a cold boot. Based on my read through the schematics, this isn't possible by just removing a resistor or with some other simple hack. I could probably cut some traces and solder up some jumpers, but it will look messy and I'm not sure it's worth the time, yet.<br /><br />It would be nice to have the option of the micro USB connector's power only going to the USB hub and FTDI chip. That way, when USB is plugged into the host PC, the serial port can be set up before the ARM core boots. Then when DC power is applied to the 5V input, the ARM core and all other circuits would be powered and boot would occur. This would be behavior similar to how the BeagleBoard and BeagleBoard-xM work with their "real" serial ports. I could connect my serial term before boot.<br /><br />I realize I can just push the reset button to get a glimpse of the x-loader and u-boot messages, but that's not what I want. The things I'm going to be building on my BeagleBone will have a rather quick boot (although nothing like the 1 second stuff) and while doing development, I'd like to be able to see the messages scroll. 
It's also not always a good idea to hit the hard reset button once Linux has mounted the file systems and started services (ext3/4 have journaling but it's still not a good thing to do often).<br /><br />I'm not sure of the official procedure for requesting hardware changes for the BeagleBoards, but I'll probably stop by the Google Group and post this request there, as well.<br /><br />EDIT 20111208 6:40am: I sent in a <a href="http://groups.google.com/group/beagleboard/browse_thread/thread/328a9c693b646816#">message to the BeagleBoard Google Group</a> yesterday describing this.</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Jadon, will do. Thanks!<br /><br />Based on my reading of the schematics, it looks like the main power controller (TPS65217) takes input both from the USB 5V and the external adapter 5V. Output is the 3.3V used for both the ARM SoC as well as the FTDI. On my quick look, my power scheme would require an additional DC-DC converter to drop the USB 5V to 3.3V for the FTDI in order to get independence.<br /><br />The interesting part, it seems, from looking at the schematics is that the USB hub is powered from the mini-USB connector's 5V and never from the external adapter's 5V (which makes sense since it's only used when mini-USB is connected). It occurred to me (before your comment) that my desire most likely was taken into consideration but was dropped because the normal use case couldn't justify the extra cost (both board space and $).</div>
</div>
<div class='comment'>
<div class='author'>Jadon</div>
<div class='content'>
Some thought was put into doing exactly what you are suggesting, where there was a way to keep the FTDI portion powered separately and to enable the rest to be powered by a USB controlled switch or the 5V adapter. I know it seems a bit nickel and dime, but it was seen as being too expensive/complex to implement effectively.<br /><br />What is there is:<br />a) the ability to control the RESET line via the USB and <br />b) you can hold the reset button until the USB serial comes up.<br /><br />We wanted to initially hold the chip in reset until the serial port became active when powered by USB, but it didn't work out either. I understand this is a care-about and you might be able to propose a solution that is suitable. It is just that we didn't want any solution that was complex or interfered with a more typical, less-experienced user. Of course, without users like yourself, we wouldn't be able to address the needs of the less-experienced users, so please do share on the Google Group your suggestion.</div>
</div>
</div>
BeagleBones Arrived!2011-12-01T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/12/beaglebones-arrived<div class='post'>
2 <a href="http://beagleboard.org/bone">BeagleBones</a> were delivered by the friendly people at FedEx this morning!<br /><br />I've got one of them booted up with the provided microSD card that came already inserted into the Bone. Oddly, there's another microSD card that was included in separate packaging inside the BeagleBone box. I'm not yet sure what the difference is or why 2 were included, but both Bones came this way.<br /><br />FedEx also delivered a pair of new SanDisk Mobile Ultra 8GB microSD cards this morning. I've run flashbench on them and <a href="http://lists.linaro.org/pipermail/flashbench-results/2011-December/000240.html">mailed in my results</a>. These look like pretty nice cards for Linux, given my understanding.<br /><br />Next up is building x-loader, u-boot, a kernel, and an Emdebian image for a Bone.<br /><br />EDIT 20111201 4:30pm: On my machine, the FTDI serial port driver wasn't working without some help, and being that I'm an engineer, I didn't read the directions :)<br />There's good info on making sure your virtual serial port is working on the <a href="http://beagleboard.org/static/beaglebone/a3/README.htm">BeagleBone info page</a>.</div>
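If you want to try flashbench on your own cards, the align test is the usual starting point (to my recollection of the project's README). The device node is an assumption, and the snippet is guarded so it's a no-op without the tool and card present:

```shell
# Guess the card's erase-block size by timing reads across power-of-two boundaries.
DEV=/dev/mmcblk0
if command -v flashbench >/dev/null 2>&1 && [ -b "$DEV" ]; then
    sudo flashbench -a "$DEV" --blocksize=1024
fi
```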
<h2>Comments</h2>
<div class='comments'>
</div>
Dithering, NVIDIA Quadro 1000M, and HP ZR2440w2011-11-30T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/11/dithering-nvidia-quadro-1000m-and-hp-zr2440w<div class='post'>
My new laptop is an HP Elitebook Workstation 8560w. It's sweet and <a href="http://bradfordembedded.blogspot.com/2011/11/debian-6-amd64-on-hp-elitebook.html">well supported by Debian 6</a>. I also have an HP ZR2440w monitor, also sweet.<br /><br />For some reason, with the NVIDIA proprietary drivers, version 290.10 with a 2.6.38 backports kernel on Debian Squeeze, whenever I have the monitor connected via DisplayPort, the NVIDIA driver turns on dithering for the external display. This is not what I want.<br /><br />The HP ZR2440w is an 8-bit monitor; it doesn't need dithering, and with dithering enabled some colors flicker, which is very annoying. The built-in LCD on the Elitebook is only 6 bits, so dithering can help make it look like there's a wider color gamut.<br /><br />To disable dithering on my ZR2440w but keep it enabled on the built-in LCD, the "Screen" section of my /etc/X11/xorg.conf looks like this (DFP-0 is the internal LCD, DFP-6 is the ZR2440w, and I have them mirrored so I only use the ZR2440w when it's connected):<br /><br /><script src="https://gist.github.com/1409045.js?file=xorg.conf"></script><br /><br />With this config, my ZR2440w is always non-dithered while the internal LCD is always dithered.</div>
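<p>In case the embedded gist doesn't load, a minimal sketch of what such a "Screen" section can look like follows. The <code>FlatPanelProperties</code> option and its per-device <code>DFP-n:</code> prefix syntax are my recollection of the NVIDIA driver README from this era, so treat this as an assumption and check the README shipped with your driver version:</p>

```
Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    Monitor    "Monitor0"
    # Sketch (option name assumed): keep dithering on the 6-bit
    # internal panel (DFP-0), disable it on the 8-bit ZR2440w (DFP-6).
    Option     "FlatPanelProperties" "DFP-0: Dithering = Enabled; DFP-6: Dithering = Disabled"
EndSection
```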
<h2>Comments</h2>
<div class='comments'>
</div>
Google's New Look - It Sucks2011-11-26T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/11/google-s-new-look-it-sucks<div class='post'>
I'm not a fan of Google's new look for GMail or Blogger. I use GMail on a daily basis, many times per day. I use Blogger enough, almost daily, to at least check if any new comments came up on this blog or to write up some draft ideas rumbling around in my head.<br /><br />Both GMail and Blogger complain now when I log in using Firefox / Iceweasel 3.5, which is what ships with Debian Squeeze. They tell me my browser is out of date and that my experience won't work well. This is for a browser that's not that old and is still supported with security updates from Debian. GMail also reverts to the old-school plain HTML version, which isn't pretty, and for some reason keyboard shortcuts and priority inbox are disabled (very annoyed at that).<br /><br />I'm taking my email to IMAP (goodbye, GMail web interface), and I've rolled back Blogger's new look to the older version. If I'm on someone else's computer or using my wife's iPhone, I'll deal with GMail's web access. The worst part is that before the new GMail and Blogger interfaces went live, neither complained about using Firefox / Iceweasel 3.5 and everything worked as I liked.<br /><br />I'm seriously considering taking my email business elsewhere, possibly to <a href="http://www.fastmail.fm/">FastMail</a> where I can avoid ads and get my own domain for a very reasonable price. For blogging, in the past I had tried Tumblr but wasn't impressed. I'm going to take a serious look at <a href="https://posterous.com/">Posterous</a>. I do all my blog editing in the HTML mode on Blogger anyway, so losing some editing features isn't a concern.<br /><br />I'd love to say goodbye to Google. It's looking like it will be easier every day. I don't think that's a good thing.<br /><br />PS: Google AdWords, not worth it. So far, with thousands of ad impressions on my blog, I've made a nice $2.50 over 6 months. Not even enough that Google will pay me (gotta hit $10 for that). 
Don't blog for the money, blog for the reputation, skills, to help others, and because you want to improve your writing. Future post on that ;)</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Jizzy Gillespie</div>
<div class='content'>
And don't forget Google's "Buzz" in gmail, which went over like a fart in church and was quietly removed after virtually no one either used it or saw any reason to use it. Every time Google tries to be cool they end up looking more like Microsoft than Apple.</div>
</div>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Michael, thanks for your (kind?) words. Sadly, I've not had time, due to the holidays, to really investigate moving off Blogger or GMail any further. But it's definitely on my to-do list.</div>
</div>
<div class='comment'>
<div class='author'>Michael Casher</div>
<div class='content'>
Google does suck. Everything Google does sucks. They either have no clue or else they don't care what their users think. That's why their user forums suck, too. You'll get no help there anymore.<br /><br />I used to display Google AdSense ads on my blogs until they sucked so bad I had to pull them all. The Ads for Content are almost uncontrollable and yet Google thinks we have control over them. Google AdSense ads are distasteful and even hideous. Google also "updated" its interface for AdSense users, too, making the process of controlling your ads even harder than before.<br /><br />I've been blogging at Blogger since 2005 and I write ten blogs there but I'm just about ready to give it up. When Google took over Blogger it went steadily downhill.<br /><br />Gmail sucks big time,too. I stayed with the old interface, knowing darn well that, if it's new at Google, it will absolutely suck. Just like Google Profiles. As soon as you get used to how it works, Google changes it and makes it worse. <br /><br />By the way, I have to use Firefox to edit my Blogger posts in "Compose". The new interface at Blogger is so complicated and so cheesy looking, I can't get it to work for me. It makes you rethink the WordPress experience, which I used to think was a nightmare for bloggers.<br /><br />Google now runs YouTube and YouTube now sucks big time. Especially YouTube's new, cheesy, generic look. Way to go, Google. You're rapidly ruining the Internet, website by website.<br /><br />Picasa still works for me, despite the fact that Google now runs it. But, given Google's track record, it's only a matter of time before Picasa will suck as bad as Blogger, AdSense, Google Profiles and Gmail. <br /><br />Sorry you're having a rough go of it with Blogger. It shouldn't be that way. Thanks for posting.</div>
</div>
</div>
Google AdWords - Gone2011-11-26T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/11/google-adwords-gone<div class='post'>
I've removed Google AdWords from my blog and RSS feed. So not worth it to have ads. I've not yet made enough in ad impressions to get paid by Google and I've had ads up for almost 6 months.<br /><br />For those of you using AdBlock, you shouldn't notice any difference. And yes, there were quite a few visitors blocking ads. For those of you not using AdBlock, the blog should load a little faster and there won't be any more ads.<br /><br />Enjoy!</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Debian 6 amd64 on HP Elitebook Workstation 8560w2011-11-23T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/11/debian-6-amd64-on-hp-elitebook-workstation-8560w<div class='post'>
I got my new laptop this week; it's a spiffy HP Elitebook Workstation 8560w. Dual-core i7 at 2.7 GHz, 8 GB RAM, SSD, NVIDIA Quadro 1000M, all good stuff.<br /><br />I've installed Debian Squeeze and found that I needed to do a few tweaks beyond what I expected in order to get the graphics and wifi working. First, get the wifi working, then deal with graphics.<br /><br />You'll need to enable squeeze-backports. Follow <a href="http://backports-master.debian.org/Instructions/">the directions</a>. Install the linux-image-2.6.38, linux-headers-2.6.38, and firmware-iwlwifi packages. Then modify your /etc/default/grub file; the 7th line should look like this:<br /><br />GRUB_CMDLINE_LINUX_DEFAULT="nouveau.modeset=0 quiet"<br /><br />Run update-grub2 to make the change permanent.<br /><br />Now you can reboot and you should be running the 2.6.38 kernel. If you don't pass the nouveau.modeset line to your kernel when booting, it will hang. Mine likes to hang just after "hp_accel: driver loaded", which isn't very helpful.<br /><br />Now, go grab the NVIDIA drivers from nvidia.com. Make sure you have gcc-4.4 installed. Go to a terminal with ctrl-alt-F1 and log in as root. "killall gdm3" to shut down X. Now run the NVIDIA installer.<br /><br />Whamo! After another reboot you should be up and running with both wifi and graphics!<br />Sadly, while the webcam worked awesomely with the stock kernel, with 2.6.38 it's broken. If I determine the root cause I'll update. <br /><br />UPDATE 20111123 5:30pm: To get the webcam working again, don't use Camorama. Use something like cheese. Camorama wants the camera to be located at /dev/video0, but it's not there any more. Things like Skype and cheese find the webcam with no issue. I'm happy! :) <br /><br />UPDATE 20111126 10:00am: In order to get suspend working again with kernels newer than 2.6.35, you may need to blacklist a few modules, namely "firewire_ohci" and "firewire_core". 
Put the below in a file named /etc/modprobe.d/8560w-blacklist.conf, then give the laptop a reboot. I've included here the pcspkr module as well to make that stupid beep go away ;) Now I have (I believe) all the hardware working on my 8560w except Firewire (I don't have any Firewire devices so I'm not that upset). Thanks go out to an Ubuntu <a href="https://bugs.launchpad.net/ubuntu/+source/linux/+bug/748994">bug report</a>, scroll down to <a href="https://bugs.launchpad.net/ubuntu/+source/linux/+bug/748994/comments/50">#50</a>. <br /><br /><script src="https://gist.github.com/1395828.js?file=8560w-blacklist.conf"></script><br /><br />UPDATE 20111129 3:00pm: If you'd like ExpressCard hot plugging to work, you need to add "pciehp" to your /etc/modules and "options pciehp pciehp_force=1" to a new file called /etc/modprobe.d/pciehp. Both those are to be added without the quotes. Hat tip to Mark Lord on the LKML, <a href="https://lkml.org/lkml/2009/8/27/158">part 1</a> and <a href="https://lkml.org/lkml/2009/8/27/165">part 2</a>.<br /><br />UPDATE 20111208 7:00am: I'm going to recommend against encrypted LVM on an SSD with Debian Squeeze. The SSD in my 8560w says it supports TRIM but I'm having a hard time verifying that it's working, either automatically or manually. I have "discard" in my fstab and the 2.6.38 kernel supports it.<br />I'm thinking the way to go about encrypting important info is to just encrypt the directories I care about rather than the whole file system. I might do a backup, reformat, and restore, then enable just directory level encryption where needed (for example, I don't really care if my /usr directory is encrypted since it just has Debian sourced programs installed).</div>
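<p>For reference if the gist above doesn't load, reconstructed from the module names mentioned in the post, the two configuration files amount to:</p>

```
# /etc/modprobe.d/8560w-blacklist.conf
# Keep the FireWire modules from loading so suspend works on kernels
# newer than 2.6.35; pcspkr is blacklisted just to silence the beep.
blacklist firewire_ohci
blacklist firewire_core
blacklist pcspkr

# /etc/modprobe.d/pciehp
# ExpressCard hot plugging (also add "pciehp" to /etc/modules).
options pciehp pciehp_force=1
```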
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Neal,<br />Sorry, but I don't have much experience running Ubuntu on my 8560w, I've stuck with Debian and run Ubuntu and Windows in VMs. I know there have been other complaints of similar issues (including one Ubuntu bug report where it looks like you've already mentioned this). I wouldn't think SATA issues would be related to the graphics driver, but those messages might just be the last things written to the screen and not be indicative of the actual issue.<br />Can you try installing Ubuntu Server and booting it? Server shouldn't install a graphical desktop by default. At least if you can get that far, it's not a core issue but can be narrowed down to the graphics or other sub-system.<br />Good luck!</div>
</div>
<div class='comment'>
<div class='author'>NealTNJ</div>
<div class='content'>
Hi, i recently brought a HP Elitebook 8560w and its awesome. However, every time i try to boot linux ubuntu and backtrack 5 off a USB external hard drive . I receive this error <br />ata 4: sata link down sstatus 0 scontrol 300<br />ata 5: sata link down sstatus 0 scontrol 300<br />ata 6: sata link down sstatus 0 scontrol 300. This error still persists even after updating grub.cfg with the commands nouveau.modeset=0 quiet. After reading your blog, I feel that your the person to ask.</div>
</div>
</div>
Embedded Linux Long Term - Part 32011-11-21T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/11/embedded-linux-long-term-part-3<div class='post'>
I wrote about long term support for embedded Linux <a href="http://bradfordembedded.blogspot.com/2011/10/embedded-linux-and-long-term-support_21.html">before</a>, <a href="http://bradfordembedded.blogspot.com/2011/10/embedded-linux-and-long-term-support.html">a few times</a>, but I'm back for more!<br /><br />Recently in LWN, there was another <a href="http://lwn.net/Articles/464834/">article</a> about various embedded groups, like Linaro and LTSI, maintaining an official policy on long term support kernels. That's cool and all, if you're developing consumer electronics that will get replaced in 2 years (like cell phones in the US), but it's not that helpful for embedded industrial systems that will have lifetimes measured in decades. Especially industrial systems where the customer is scared to death of software upgrades because they've been burned before by past providers that assured them, "Nothing will go wrong." (It did)<br /><br />Upgrading a kernel at one of these customers' sites will not happen. Security updates for the same kernel version, maybe. Beyond that, NO WAY!<br /><br />So how does the embedded developer deal with this?<br /><br />The best way I've thought up so far is to ride coat-tails for as long as possible and do extensive testing when absolutely forced to move to a new kernel version. That means grabbing sources from something like Red Hat or Ubuntu LTS at the beginning of a long term support cycle and riding that as far as it will go (usually 5 years). For example, Ubuntu 12.04 LTS is most likely going to have Linux 3.2 as the kernel for both the desktop and server versions. If your embedded system is supported by 3.2 (most established, non-cutting edge designs will be), grab that and let the Canonical team deal with the security updates for you. Be a good community supporter and send anything you can upstream, but otherwise just ride their coat-tails.<br /><br />Another thing to keep in mind is your distribution. 
Although most embedded developers already keep installed packages to a minimum, you'll want to take this as far as you can. Once whoever's coat-tails you're riding for all your software other than the kernel goes end-of-life (i.e.: no more security updates for free), you've got two choices: 1) Ask your customer to upgrade (probably not going to happen), or 2) Backport security fixes yourself. Neither is much fun. Minimize this!<br /><br />Don't pick something targeted towards mobile phones. Historically, everyone making mobile phones (ironically, except Apple) sucks at providing software updates <a href="http://theunderstatement.com/post/11982112928/android-orphans-visualizing-a-sad-history-of-support">beyond a year</a> or two. The embedded Linux teams working to provide systems to compete with Android aren't much better. You're going to get stuck taking a server distribution and making it work on your embedded system (becoming less of an issue but still not as straightforward as something like <a href="http://www.openembedded.org/wiki/Main_Page">OpenEmbedded</a>).<br /><br />Or you can <a href="http://windriver.com/products/linux/">pay</a>... But that's no fun and you'll still deal with these issues down the road in about the same time frames.<br /><br />It'd be cool to see a long term support version of Emdebian. Emdebian already has far fewer packages than real Debian so it's less daunting to apply security fixes. Debian has fairly long release cycles and is community driven. Lenny's already <a href="http://wiki.debian.org/DebianLenny">been around</a> for a while, and Squeeze will be supported at least 1 year beyond Wheezy's release. Having a community support project to keep Squeeze and beyond supported for 5+ years would be cool...
<h2>Comments</h2>
<div class='comments'>
</div>
Build the PandaBoard or BeagleBoard-xM x-loader on Debian Squeeze2011-11-16T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/11/build-the-pandaboard-or-beagleboard-xm-x-loader-on-debian-squeeze<div class='post'>
EDIT 20111116 15:37: This post and the git repo have been updated to include info on the BeagleBoard-xM.<br /><br />I wrote up some quick instructions on building the PandaBoard and BeagleBoard-xM x-loader on a Debian Squeeze host this morning. You can find them below, but also in my GitHub <a href="https://github.com/bradfa/x-loader">x-loader repo</a>. My x-loader repo is only slightly modified from <a href="http://gitorious.org/x-loader">mainline</a>. Enjoy!<br /><br /><br />To build x-loader on Debian:<br /><br />Install the emdebian-archive-keyring:<br />$ sudo aptitude install emdebian-archive-keyring<br /><br />Add to your apt sources:<br />deb http://www.emdebian.org/debian/ squeeze main<br /><br />Perform an aptitude update:<br />$ sudo aptitude update<br /><br />Install the Emdebian cross compilers:<br />$ sudo aptitude install gcc-4.4-arm-linux-gnueabi g++-4.4-arm-linux-gnueabi<br /><br />Install git and partitioning tools:<br />$ sudo aptitude install git parted<br /><br />Clone the repo from github, this will create a directory called "x-loader":<br />$ git clone git://github.com/bradfa/x-loader.git<br /><br />Enter the directory, make the configuration you desire, then compile the MLO<br />file (actual x-loader). For the PandaBoard use "omap4430panda_config" and<br />for BeagleBoard-xM use "omap3530beagle_config" in the second step<br />(example uses Panda):<br />$ cd x-loader<br />$ make CROSS_COMPILE=arm-linux-gnueabi- distclean<br />$ make CROSS_COMPILE=arm-linux-gnueabi- omap4430panda_config<br />$ make CROSS_COMPILE=arm-linux-gnueabi-<br /><br />Now we'll partition and format an SDcard.<br /><br />***************************************<br />* THIS PROCESS WILL DESTROY ALL DATA! *<br />***************************************<br /><br />Make sure your SDcard is not mounted. 
Then run the partitioning script to<br />create a 64 MB FAT partition followed by an ext3 partition taking up the rest<br />of the SDcard space (SDcards are slow, this may take some time). The argument<br />passed to the script is the device node of the SDcard itself (/dev/sdb in our <br />example), not a partition!:<br />$ sudo ./format_sd.sh /dev/sdb<br /><br />Some systems may automatically mount the newly created file systems. If not,<br />mount the boot file system:<br />$ sudo mkdir -v /mnt/boot<br />$ sudo mount /dev/sdb1 /mnt/boot<br /><br />The _VERY_FIRST_ file written to the FAT partition should be the MLO.<br />$ sudo cp -v MLO /mnt/boot/<br /><br />Unmount the FAT partition and boot the board to verify operation!</div>
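<p>The authoritative format_sd.sh lives in the x-loader repo; as a rough sketch of what it does, something like the following would work. Only the layout (a 64 MB FAT boot partition followed by ext3 on the rest of the card) comes from the script's description above; the exact parted and mkfs invocations here are my assumption:</p>

```shell
#!/bin/sh
# Hypothetical sketch of format_sd.sh: 64 MB FAT boot partition, then
# ext3 on the rest of the card. DESTROYS ALL DATA on the target device.
format_sd() {
    dev="$1"
    # Refuse to run against anything that isn't a block device node.
    if [ -z "$dev" ] || [ ! -b "$dev" ]; then
        echo "usage: format_sd /dev/sdX (an unmounted block device)" >&2
        return 1
    fi
    parted -s "$dev" mklabel msdos \
        mkpart primary fat32 0 64MB \
        set 1 boot on \
        mkpart primary ext3 64MB 100%
    mkfs.vfat -F 32 -n boot "${dev}1"   # e.g. /dev/sdb1: FAT boot partition
    mkfs.ext3 -L rootfs "${dev}2"       # e.g. /dev/sdb2: ext3 root file system
}
```

<p>Invoke it with the device node itself, e.g. <code>format_sd /dev/sdb</code>, never a partition node like /dev/sdb1.</p>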
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Israr,<br /><br />X-loader is the MLO file. The boot process is that internal to the ARM chip there is some ROM that executes as the first-stage boot loader; it goes and grabs the MLO (second stage), runs it, and the MLO grabs and runs u-boot (third stage). U-boot then grabs and runs the kernel and points the kernel to the file system.<br /><br />In more recent versions of u-boot (at least in TI's dev kits and the BeagleBone), the SPL is now built directly out of u-boot sources; x-loader is being deprecated.<br /><br />This sounds confusing, sorry.</div>
</div>
<div class='comment'>
<div class='author'>israr</div>
<div class='content'>
hi andrew!!<br /> is x-loader is replacement of Uboot in BeagleBoard or <br />it sits above Uboot ? plz explain, thanks<br /><br />Israr</div>
</div>
</div>
The BeagleBone!2011-11-12T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/11/the-beaglebone<div class='post'>
Somehow I missed the introduction of the <a href="http://beagleboard.org/bone">BeagleBone</a>! I don't know how that could be!<br /><br />The BeagleBone looks like a really exciting little ARM board. It's a single-core 720 MHz Cortex-A8 with 256 MB of RAM, gigabit Ethernet MAC built into the SoC with 10/100 PHY external, some very nifty USB on-the-go capabilities, and headers for expansion similar to Arduino (but not pin compatible). All for $89!<br /><br />Hopefully the BeagleBone will be available for purchase later this month or, at the latest, early in December.<br /><br />TI seems to be targeting a very wide audience with the BeagleBone this time. The expansion connectors position it for higher-end hobbyist applications where an Arduino may not have enough power but where Arduino-style expansion is really handy. <br /><br />The <a href="http://www.ti.com/product/am3359">AM335x</a> ARM SoC, however, throws the BeagleBone right up the alley of quite a few companies who are producing single board computers for various markets. The AM335x was also announced just over a week ago and TI already has high-quality datasheets and reference manuals up online for free. [Side note, TI, you're awesome about this! Thanks!] The AM335x also has a reasonable 0.8mm ball <a href="http://bradfordembedded.blogspot.com/2011/04/tuxedoboard-bgas-and-minimum-ball-pitch.html">pitch BGA</a> package and is fairly low in pin count (around 300 pins depending on version). This makes designing systems with the SoC very easy for those with access to professional-level schematic and layout tools (and even doable for those using Eagle, GEDA, or KiCad!).<br /><br />I should be receiving at least one BeagleBone in the not-too-distant future. It's an exciting time!</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Calendar Interface2011-11-03T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/11/calendar-interface<div class='post'>
Scott Adams recently wrote about <a href="http://dilbert.com/blog/entry/calendar_interface/">calendar interfaces</a>. Short version: Outlook and Google Calendar suck; they're too complex and require you to change your mental reference too many times when setting up an appointment. I got about halfway through and lost interest in the article. Even his idea is too complex.<br /><br />When I type something, anything, into Google, it corrects my spelling, recommends choices, and interprets my shorthand into useful search results. It does this across things like maps, email, web, patents, etc. I can help Google out by giving hints, like going to the maps interface when I'm looking for a location, but even if I don't, often Google knows that's what I really want when searching for a business or address and the maps results come up first.<br /><br />Why can't I just select the day for an appointment and type in "Doctor's appointment at 3pm"? The calendar will parse that, pull out the time, and make the appointment in my calendar. If I've got my doctor's office in my address book, the calendar will even link that data along with doing a map lookup on my doctor's office phone number to find the location. The calendar can even ask me, after I enter this, if it got the choices right for location and other info with a listing like a Google search. Then, most likely, I'll just pick the first result (since it will be correct) and that confirms the calendar event and sticks it in my calendar. Done!<br /><br />No, I can't write code to do this (not yet at least), but the infrastructure that already exists in Google's search products, when combined, can almost do exactly this!<br /><br />So, Google, why is your calendar interface so complex?</div>
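<p>The parsing half of this already exists in humble places. GNU date, for instance, happily pulls a usable time out of exactly that kind of shorthand (a small illustration on any Linux box, not Google's implementation):</p>

```shell
#!/bin/sh
# GNU date's free-form -d parser handles the shorthand a calendar UI
# could accept; requires GNU coreutils (not BSD/macOS date).
date -d "3pm" +%H:%M           # time from "Doctor's appointment at 3pm" -> 15:00
date -d "next tuesday" +%A     # relative day names parse too
```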
<h2>Comments</h2>
<div class='comments'>
</div>
Bootstrapping Fedora for ARMv7 Hard Float2011-11-03T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/11/bootstrapping-fedora-for-armv7-hard-float<div class='post'>
There's a great set of articles (<a href="http://lwn.net/Articles/463506/">part 1</a>, <a href="http://lwn.net/Articles/463507/">part 2</a>) from LWN discussing the process used, and the issues encountered, when attempting to bootstrap a build environment for ARMv7-a hard float. Turns out, Fedora didn't really have a good set of infrastructure to do this and a lot of creativity was involved (including a 4.6GB git repo).<br /><br />After a bit of reading, it appears even Debian <a href="http://penta.debconf.org/dc11_schedule/events/745.en.html">doesn't have a great set of infrastructure</a> for bootstrapping onto a new architecture or ABI, although the Emdebian team is working on it.<br /><br />This is awesome to see from both the Fedora and Debian teams. As much as Cross Linux From Scratch is a neat exercise, having the ability to bootstrap a "real" Linux distribution is very handy. Being able to base an embedded system on a well-supported community distribution that can be customized through a well-documented bootstrap process is an awesome thing. I'm looking forward to learning more about these methods.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
New Beginnings2011-10-27T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/10/new-beginnings<div class='post'>
I attended a retirement lunch today for a former coworker. Previously in my life I've attended a reasonable number of similar events, either people retiring or people leaving for other opportunities. Sometimes the people leaving get emotional and I had a hard time understanding why. I always thought, "It's just a job." Today, it's hitting me as to why.<br /><br />Friday November 4, 2011 will be my last day at my current job. The following Monday I'll be starting at a much smaller company with huge aspirations to do amazing things for their customers. I'll be building embedded Linux systems and developing the software that runs on them. It's going to be an awesome challenge. I'm very excited.<br /><br />I spent most of today organizing my notes and cleaning out my office. Over the next week, I'll be transferring my notes, responsibilities, and everything I've worked on to the rest of my group. Looking at things I've done, it's astounding. I've gotten a lot accomplished. I didn't realize how much work I've actually done in the past six and a half years and three different roles within the company. The amount of time, effort, and personal investment I've put in astounds me. Others feel the same way when they leave a company, that's what drives the emotions.<br /><br />Looking at my resume, sure, I've done a lot. But when I found a directory full of minutes from meetings I ran 5 years ago and I can remember the details surrounding decisions that were made, problems that were solved, and bureaucracy that was dealt with, it's way more impressive. I've had quite a few other experiences like this, just today.<br /><br />People spend a huge portion of their non-sleeping life at work. The relationships that are developed, the work that's accomplished, the situations that are dealt with, and the impact on the world that is made is non-trivial. It's a big deal to stop working somewhere that you've been for a significant amount of time. 
Work reflects a good chunk of life, leaving it, even for something better, is a big deal and shouldn't be taken lightly.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Management and Leadership - Startup versus Established?2011-10-24T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/10/management-and-leadership-startup-versus-established<div class='post'>
I like <a href="http://sethgodin.typepad.com/seths_blog/">Seth</a>; his blog is in my Google Reader. His <a href="http://www.amazon.com/gp/entity/Seth-Godin/B000AP9EH0?ie=UTF8&ref_=sr_ntt_srch_lnk_1&qid=1319458162&sr=8-1&ie=UTF8&tag=bradford07-20&linkCode=ur2&camp=1789&creative=390957">books are good</a><img src="https://www.assoc-amazon.com/e/ir?t=bradford07-20&l=ur2&o=1" width="1" height="1" border="0" alt="" style="border:none !important; margin:0px !important;" />, too. He recently wrote about <a href="http://sethgodin.typepad.com/seths_blog/2011/10/the-difference-between-management-and-leadership.html">leadership and management</a>. It inspired me to come up with a question:<br /><br />Does the relationship between leadership/management and startup/established business exist in a meaningful and measurable way?<br /><br />I think it exists, and I'm sure it's meaningful, but I'm not sure it's measurable. How can you measure leadership or management in the way that Seth describes them? No easy metrics spring to my mind. It's mostly one of those, "You'll know it when you see it" kind of things. That's too bad.<br /><br />There's probably a whole industry out there claiming that they can offer insight into how to perform this measurement. These consultants probably make good money. Sadly, it's most likely all for naught. Unless you can see it plain as day, the person you're looking at isn't a leader.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Embedded Linux and Long Term Support / Updates - Part 22011-10-21T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/10/embedded-linux-and-long-term-support-updates-part-2<div class='post'>
In my <a href="http://bradfordembedded.blogspot.com/2011/10/embedded-linux-and-long-term-support.html">previous post</a> about embedded Linux long term support, I neglected Ubuntu. I had not realized how much effort Canonical are putting into their ARM platform products. After doing a little reading today, it appears that within the next year, Ubuntu's long term support server distribution should be a very high-quality product on ARM platforms. That's awesome!<br /><br />Ubuntu LTS server releases are already supported for 5 years. Starting with version 12.04, the desktop LTS release will also be <a href="http://www.canonical.com/content/ubuntu-1204-feature-extended-support-period-desktop-users">supported for 5 years</a>. Combined with <a href="https://wiki.ubuntu.com/ARM/OMAP">prebuilt images</a> for various ARM development kits, that's an awesome starting point for higher-powered, ARM-based embedded systems.<br /><br />Ubuntu seems primed to completely take over the Linux distro landscape. I think that's awesome if you're looking to develop any new Linux-based device on a platform they support. It'll make your life that much easier. Thanks, Canonical!</div>
<h2>Comments</h2>
<div class='comments'>
</div>
An Unknown Error Has Occurred2011-10-17T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/10/an-unknown-error-has-occurred<div class='post'>
In software, don't ever tell a user that an "<b>unknown error</b>" has occurred. It does nothing but hurt the relationship between user and software.<br /><br />This weekend I upgraded our iPad and my wife's iPhone4 to iOS5. The iPad went super smooth other than taking forever to download the update. The iPhone update did not go nearly as smooth. After downloading the update, backing up the iPhone, and erasing the flash, iTunes errored out with an "unknown (14) error." There was a nice link provided to an Apple knowledge base article telling me to try another USB port, reboot, or update iTunes (again? I just did!).<br /><br />Lacking from all of this was any indication of what the heck caused the error!<br /><br />Clearly, iTunes knows what caused the error, it raised an error message because of some event happening in a way that wasn't expected. But I have no idea what that event was so that I can try to prevent it from happening again or so I can write a bug report. It also left the iPhone in an uninitialized state, requiring a restore (which in itself was over an hour in duration).<br /><br />Don't ever tell me that an "unknown" error has occurred. Either just tell me that "an error has occurred preventing" a task from completing or tell me what the heck the error was! The "UNKNOWN" part of the error is the most frustrating part, then Apple kicked me when I was down by providing a knowledge base article with stupid answers that didn't provide any insight into what the problem actually was. Yeah, I've used Windows before, rebooting is the knee jerk reflex, I don't need to be told that.<br /><br />And to top it all off, iOS5 is noticeably slower than iOS4 was on both the iPad and iPhone4. The updated features are almost unnoticeable but the slowness is easily seen. Overall, I'm unimpressed.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Embedded Linux and Long Term Support / Updates2011-10-14T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/10/embedded-linux-and-long-term-support-updates<div class='post'>
If you are building a non-consumer commercial system that uses embedded Linux, you will probably be interested in long term support. Now, what I mean by long term support is: Two (or five) years from now, will someone be providing me with security updates to the same version of software that I use today?<br /><br />For example, if you run RedHat Enterprise on a desktop or server, you can be sure that 5 years from when a new release comes out, RedHat will still be providing you with security updates for the exact same versions of all the software you run. You won't be forced to upgrade to some new version of software provided by RHEL in order to continue getting security updates. Your kernel will stay the same version, your Apache will stay the same version, your Bash will stay the same version, and you'll only get little tiny changes that 99% of the time are due to security issues being fixed. The other 1% of the time is things that are actually seriously broken and need to be fixed. But overall, NO NEW VERSIONS OR FEATURES!<br /><br />For desktop and server distributions, there's a lot of choice between paid for setups like this and community supported setups. You've got RedHat and Suse as the major players in the paid for sector, and Debian stable, Ubuntu LTS, and Scientific / CentOS in the community sector.<br /><br />In the embedded landscape, there are companies like MontaVista and Wind River that supply longish term supported embedded Linuxes, much the same way RedHat does. But on the community side, other than Emdebian, there's not a whole lot (that I've heard of) that provides a long term support system for embedded.<br /><br />Many of the embedded distributions are focused on cutting edge stuff. Cutting edge is cool, it's hip, and it's where all the neat stuff happens. 
But if you're deploying a real product that's going to have to function for years and years at a customer site, you're not going to want to have to keep sending them software updates with cutting edge versions. Cutting edge versions mean things break. Stable old stuff being updated only with tiny security fixes means things don't break, at least not usually more than they were before. If you're looking to make real money selling embedded non-consumer Linux systems, you're going to pick old stable stuff getting security updates, and if you need fancy new stuff, you'll become an expert in just that new fancy stuff, and you'll do your own security updates just for that one thing you need to be newer.<br /><br />I understand that having long term support be a community activity is hard. Especially when there's a low number of developers. Small projects can't afford to spend developer time supporting old stuff if they want to continue moving forward. Bigger companies who get paid (like Wind River and MontaVista) can. And Debian can, but only because they've built up the support systems, platforms, and methods of working over a huge amount of time and with a huge community that's dedicated to just that. It's very hard to do.<br /><br />The Linux kernel has long term supported versions, but those only get updates for 2 years; after that, they assume the distributions will take care of further updates. This is awesome for short life commercial projects, but some commercial projects need to last longer than that. For those projects, your choices are really limited if you don't want to employ a huge number of experts to keep things stable and secure. For those projects, you're going to buy a long term support system from a vendor or need the support of a high quality community. Because of this, you're either going to pay, or you're going to use Emdebian.<br /><br />What else is there for community support?</div>
<h2>Comments</h2>
<div class='comments'>
</div>
More Raspberry Pi2011-10-04T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/10/more-raspberry-pi<div class='post'>
When I wrote my <a href="http://bradfordembedded.blogspot.com/2011/09/some-raspberry-pi.html">blog entry last week</a> about the <a href="http://www.raspberrypi.org/">Raspberry Pi</a> project, I was being negative. That was wrong of me.<br /><br />I was being negative about Raspberry Pi because they're doing an awesome job of putting out a low cost embedded system and I'm a bit jealous. The TuxedoBoard won't be nearly as low in cost. When I first thought up the TuxedoBoard, most ARM based single board computers were in the $100 to $250 range (think <a href="http://beagleboard.org/">BeagleBoards</a> and <a href="https://www.gumstix.com/">Gumstix</a>). Pricing my less capable but much more open TuxedoBoard at $100 would fit in nicely with that marketplace. But with the Raspberry Pi being $25 and $35, that's going to ruin my potential customers' expectations on price. Even though I'm not trying to compete with the Raspberry Pi, just the fact that they're putting out an ARM based system (with very nice stats) for such a low price will hurt my chances of selling TuxedoBoards.<br /><br />Raspberry Pi project, I'm sorry. I hope you succeed. I'm just jealous. Heck, I want to buy one of the $35 models. Good luck!</div>
<h2>Comments</h2>
<div class='comments'>
</div>
TI AM170x Booting Annoyances - Take 22011-10-01T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/10/ti-am170x-booting-annoyances-take-2<div class='post'>
I was incorrect in some of the statements I made <a href="http://bradfordembedded.blogspot.com/2011/09/ti-am170x-booting-annoyances.html">before</a> when discussing the TI AM170x booting. Daniel pointed out a few of my mistakes in the comments (thanks!). This is a quick follow up. I have quite a lot more reading and learning to do.<br /><br />The AM170x chips require an external memory attached either to EMIFA, I2C, SPI, UART, or the HPI (host port interface). Since I want to be able to boot without requiring the UART connected, that boot method is most likely ruled out. The HPI sounds interesting but the impression I get is that usually another processor of some sort is connected there, which the TuxedoBoard won't have. So it's down to NOR or NAND flash on EMIFA, I2C, or SPI. Any of those will probably work for me, even though I wanted to avoid having an external memory device.<br /><br />It turns out that the internal ROM is hard coded and not changeable. I was under the incorrect impression that it was an EEPROM and that one could change it using Code Composer Studio. This is not the case. The internal ROM is what gets executed after reset; it sets up some basic configuration registers and checks the boot mode pins. Based on the boot mode pins, the ROM will configure the processor to access the appropriate external memory interface and load up the AIS (or non AIS boot mode) executable. This executable holds the device configuration information along with some other setup data such as clock speeds, pin mux settings, and various other configuration registers.<br /><br />It seems like one could create an AIS (or non AIS) executable that would contain arbitrary code to be executed as a first stage bootloader, but I'm not entirely clear on that yet. If so, that executable (or a first stage bootloader it fetches from the external memory) could be a custom written binary that accesses the SD/MMC card. 
All this together might be an end-around for booting off SD/MMC. Uboot would then be the second stage bootloader.<br /><br />Daniel pointed out that there's a TI BSD licensed version of the <a href="http://sourceforge.net/projects/dvflashutils/">AIS generator</a> and other core boot code written in C#, which should execute under <a href="http://www.mono-project.com/Main_Page">Mono</a> in Linux. These utilities also should not require Code Composer Studio, which is handy.</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Brian, you're welcome :)<br /><br />Not locked into the TI AM170x parts but they are the lowest cost in the 1 to 100 quantity range that meet my (currently mostly unwritten) requirements. Maybe I should write my requirements first... (there's an idea!)<br />Thanks for the info on the <a href="http://www.st.com/internet/mcu/product/247246.jsp" rel="nofollow">ST parts</a>.<br /><br />If I was planning on buying in the 1,000 to 10,000 quantity range, I think that'd increase the number of processors that would be cost effective to include a lot of Cortex-A8 parts from many vendors. But I'm not looking at those quantities.</div>
</div>
<div class='comment'>
<div class='author'>brian</div>
<div class='content'>
Wow thanks for the compliment. Are you locked into using the AM170x micro? I met with ST a few weeks ago and they have a series of nice embedded low cost processors that might be an option. Although I am not sure what is available for open source tools. We can talk about it next time we meet. I will send you some info on the processor.</div>
</div>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Bill, thanks! :)<br /><br />I saw a little about the UBL in the link that Daniel provided but I haven't yet had time to delve into it too far. I'll also read through your wiki link.<br /><br />Thanks for the advice on SPI. I've been pondering using the EMIFA bus and either NOR or NAND flash but will definitely do some research on SPI parts. SPI will probably be less complex than NOR or NAND flash.<br /><br />I'm pretty new to a lot of this although I do have decent experience with embedded Linux. I've never designed a board from scratch nor built a bootloader. It's all stuff I want to learn and I'm trying to keep the complexity to a level that's reasonable and I figure if I share my thoughts as I go, others can learn too. I have a friend (hi Brian!) that's experienced with board design (think 16 layer boards, 1000+ pin BGAs, and PCI-Express type experience [he's thorough and good at writing documentation, too]) giving me some advice, which is a huge help.</div>
</div>
<div class='comment'>
<div class='author'>Bill M.</div>
<div class='content'>
> It seems like one could create an AIS (or non AIS executable) that would contain arbitrary code to be executed as a first stage bootloader but I'm not entirely clear on that yet. <br /><br />Yes that is pretty standard practice: use the ROM bootloader to load a small intermediate bootloader that then loads the real bootloader (u-boot). Bootloaders are the turtles of the embedded world; Its pretty much bootloaders all the way down.<br /><br />On OMAP the intermediate bootloader is xloader. If you checkout the links Daniel provided I think you will find another intermediate bootloader called UBL that was used on the Davinci family of parts. <br /><br />http://processors.wiki.ti.com/index.php/RBL_UBL_and_host_program<br /><br />You could start with any of these to make your intermediate loader that would load u-boot from MMC/SD. I would look to SPI to hold your intermediate loader as it is cheap, simple, and much faster than I2C.</div>
</div>
</div>
TI AM170x Booting Annoyances2011-09-30T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/09/ti-am170x-booting-annoyances<div class='post'>
Although I haven't spent much time working on the TuxedoBoard in the past few months (newborn daughter in July), I've been thinking about it again recently as my sleep schedule is starting to normalize a bit.<br /><br />In reading through the info from TI about <a href="http://www.ti.com/lit/an/spraba4b/spraba4b.pdf">boot modes on the AM17xx</a> processors, a lot of time is spent describing the AIS (Application Image Script) proprietary boot script system. It's proprietary to TI, the script generator only runs on Windows, and it requires that you pay for and use <a href="http://www.ti.com/tool/ccstudio">Code Composer Studio</a>. None of that sounds exciting to someone interested in open platforms.<br /><br />So, if the AM170x processor is going to go onto the TuxedoBoard, there's going to have to be a custom bootloader.<br /><br />Ideally, I would want a bootloader that doesn't require an external memory interface (either EMIFA, SPI, or I2C) due to the added cost and complexity. I'm imagining that there will be an MMC/SD card on the TuxedoBoard in order to hold the Linux filesystem, so booting directly off of that would be best. The BeagleBoard-xM boots off the MMC/SD card but its processor has internal firmware (not sure on the details) that is smart enough to find a FAT partition (if it's the first one) and grab the <a href="http://groups.google.com/group/beagleboard/browse_thread/thread/33e7b0fd61a313f0?pli=1">MLO (x-loader)</a> file (if it's the first file loaded onto the partition).<br /><br />I'd like it if my bootloader could set up the external SDRAM, set up all registers properly, find the MMC/SD card, transfer uboot (or other second stage bootloader) into RAM, and jump to it. Kind of like the BeagleBoard-xM does it. There's 64kB of ROM and 8kB of RAM inside the AM170x ARM core. I'm not sure if that's enough but coming from an 8/16-bit microcontroller perspective, that's decent for doing quite a lot. 
It seems possible (even if it will be a lot of work).<br /><br />It'd be awesome if the first partition on the MMC/SD card could be ext2 (or another open, less complex filesystem) and have only the uboot executable. The second partition (in your favorite uboot-supported format) would hold the actual Linux filesystem.<br /><br />UPDATE 1 October 2011: I'm incorrect in some of the things stated in this blog post. I've not changed them but I have written <a href="http://bradfordembedded.blogspot.com/2011/10/ti-am170x-booting-annoyances-take-2.html">another blog post</a>.</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Bill, thanks for the info, I'll take a further look at the XDS100 emulator. I also did not know that CCS runs on Linux, that's very helpful. I'll be sure to check that out.<br /><br />I've also updated some of the statements I made in this post in my follow on update (see link at bottom of post) but I've not changed my misunderstandings in this post because it's useful for others to see how I'm wrong and read the comments and my second post.</div>
</div>
<div class='comment'>
<div class='author'>Bill M.</div>
<div class='content'>
> and it requires that you pay for and use Code Composer Studio. <br /><br />Daniel already pointed out that Code Composer Studio is not required to use the boot code and AIS. I would also like to point out that CCS is free (as in Beer) to use with an XDS100 emulator. These are available from TI for $79 and other vendors sell even cheaper clones. There is an XDS100 reference design so you can even build your own if you are so inclined.<br /><br />An XDS100 is not blazing fast but it is useful to have JTAG debug when bringing up a new board and this is a pretty cheap way to go. Best of all CCSv5 now runs on Linux in addition to Windows.</div>
</div>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Daniel, interesting. Thanks for the info! I'll have to check out the sourceforge project you linked to.<br /><br />I was slightly confused about the 64kB of ROM, since I was under the impression that the AIS went into, but now that you mention it... So I guess I need to do more reading. :)</div>
</div>
<div class='comment'>
<div class='author'>Daniel</div>
<div class='content'>
>> It's proprietary to TI, the script generator only runs on Windows, and it requires that you pay for and use Code Composer Studio.<br /><br />True, it is proprietary (but fully described in the mentioned doc). The GUI script generator should run on Linux under Mono (it's a .Net app). I haven't tried it in a while, but it should. Plus there is a command line version, also written in C# .Net, with full source available (BSD licensed) as part of the flash and boot utilities package: http://sourceforge.net/projects/dvflashutils/files/OMAP-L137/<br /><br />And code composer studio is not required to create AIS boot images. The executables you feed the AIS generator could be created using TI's codegen tools, or they could be created using GCC.<br /><br />>> There's 64kB of ROM and 8kB of RAM inside the AM170x ARM core. I'm not sure if that's enough but coming from a 8/16-bit microcontroller perspective, that's decent for doing quite a lot.<br /><br />The 64KB ROM is not user programmable - that ROM contains the primary boot loader that can load the AIS images from various interfaces. Unfortunately, one interface that is not natively supported is SD/MMC. So a secondary loader on some natively supported interface (like a SPI flash) will be required to fetch uboot from an SD/MMC card (unless TI updates the ROM and releases a new version of the chip with this support).</div>
</div>
</div>
SD Card Depot Why Don't You Exist?2011-09-30T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/09/sd-card-depot-why-don-t-you-exist<div class='post'>
Trying to buy SD cards sucks. <a href="http://www.newegg.com/">Newegg</a> is way better than Amazon for finding an SD card that you want, but it still sucks.<br /><br />I wish there was something like an SD Card Depot that just sold SD cards. They'd only sell high quality parts, test the things they sell, write reviews of them, sell consistent product (SD card <a href="http://www.bunniestudios.com/blog/?page_id=1022">quality</a> <a href="http://grigio.org/microsd_class_6_performance_benchmarks">varies</a>. <a href="http://www.sakoman.com/OMAP/microsd-card-perfomance-test-results.html">A lot!</a>), provide guidance on what kind of cards work best for what kind of uses (do I really need a class 10 card for an HD camcorder?), and make finding the SD card I want easy. Oh yeah, and cheap shipping (hello Amazon sellers with $1 4GB cards and $9.95 shipping!!).<br /><br />They'd be the (when they were just shoes) <a href="http://www.zappos.com/">Zappos</a> of SD cards. Awesome website, awesome service, dedicated to just one product and killing the act of selling it.<br /><br />Makes me think there's a market opportunity here...</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Some Raspberry Pi2011-09-29T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/09/some-raspberry-pi<div class='post'>
I'm torn about the <a href="http://www.raspberrypi.org/">Raspberry Pi</a> project. On the one hand, it's awesome that the group is developing a very low cost ($25 and $35 goal price points) ARM based computer with some rather nice specifications (clock speed, RAM, USB, Ethernet [on $35 model], 3d accel, etc) but the way they're going about it somewhat rubs me the wrong way.<br /><br />Broadcom makes the ARM processor (BCM2835) but hasn't released any type of datasheet. It uses package-on-package memory, which is rather difficult for many assembly houses to deal with and requires rather large orders in order to obtain memory from suppliers due to constrained supply lines. The Broadcom GPU that's integrated uses closed source drivers and in order to get legal access to the multimedia hardware IP blocks you are required to buy a license (RasPi says they'll include a few of the most often used codec licenses with purchase, but not all). The boot process is yet another convoluted weird way of doing things: the GPU runs first, executes a first stage bootloader, then dumps you over to the actual ARM core for booting Linux.<br /><br />In order to manufacture their 6 layer (originally they said 4 but that's not appearing possible) board, RasPi is using blind, buried, or partial vias which raises the cost of the bare board and limits who can manufacture it. They're also not even connecting all pins on the processor, some GPIO won't be connected to anything and thus will be unused. These tiny pitch BGA parts are awesome if you've got constrained spaces to fit in and you have access to seriously advanced manufacturing capability and can use 8+ layer boards. Otherwise, <a href="http://bradfordembedded.blogspot.com/2011/04/tuxedoboard-bgas-and-minimum-ball-pitch.html">they're a pain</a>!<br /><br />The RasPi foundation is getting special pricing on almost all of their parts. 
They're buying in the 10,000 quantity but (at least on some parts) appear to be getting 1,000,000 quantity pricing. They have publicly said they are not getting any sponsorship or free parts, which is good to hear based on their very low price point goals for the finished product.<br /><br />Although Eben has said (in <a href="http://interviews.slashdot.org/story/11/09/14/1554243/Eben-Upton-Answers-Your-Questions">answers to Slashdot questions</a>) that the layout and schematics would be open with "A qualified yes," that's dependent upon the final board design being able to be exported to something like Eagle. He also throws in some other verbiage that makes me unsure what will really happen. (And yes, I understand I won't be able to build one myself, even if I had mad hot air rework skills: the pitch on some parts is remarkably small and POP is a nightmare even for automated robotic assembly shops.)<br /><br />But in the end, my biggest concern is that the $25 and $35 prices won't really happen. There's been mention of a "give one get one" program at first (so you have to pay for 2 in order to obtain one) where the one you "give" will be provided to school children. But there's not been mention that I've seen of when the one you "give" will actually be given. Mostly, for $25, leaving the RasPi foundation with no profit, they need (my personal <a href="http://en.wiktionary.org/wiki/SWAG">SWAG</a> on cost) about $15 in parts (including circuit board), maybe $8 in labor to assemble (including all tooling costs being rolled into the per unit cost), and that leaves about $2 to cover testing and warranty replacements (2% warranty rate is $0.50, 4% is $1). That's crazy cheap! 
And it requires a very low failure rate under warranty, let alone the costs associated with fulfilling the warranty replacements (emails, phone calls, shipping, etc aren't included).<br /><br />I don't think $25 and $35 price points are sustainable in a break-even or profit making enterprise based on the goals and current public information for the Raspberry Pi. I do hope I'm wrong.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Google Search Results!2011-09-28T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/09/google-search-results<div class='post'>
Somewhat ironically, if you search Google for: <a href="http://www.google.com/search?q=stanford+database+class">Stanford database class</a>, my blog is showing up (at least some of the time, even when searching when I'm using a different browser and not logged into Google services) in 5th spot, ahead of the actual <a href="http://db-class.com/">db-class.com</a> (which redirects to the .org version) website. I find that intriguing and funny. It also makes me smile.<br /><br />Now to get my actual <a href="http://db-class-notes.blogspot.com/">database class notes blog</a> to show up there as well!<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-DLnFs0Kldqg/ToNaN3Ko2EI/AAAAAAAAAQA/Kqr1st4sqis/s1600/stanford_database_class_google_listing.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-DLnFs0Kldqg/ToNaN3Ko2EI/AAAAAAAAAQA/Kqr1st4sqis/s1600/stanford_database_class_google_listing.png" /></a></div><br /></div>
<h2>Comments</h2>
<div class='comments'>
</div>
Do What You Love, Good Things Will Come2011-09-27T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/09/do-what-you-love-good-things-will-come<div class='post'>
This morning, I found this <a href="http://blogs.hbr.org/bregman/2009/02/need-to-find-a-job-stop-lookin.html">article from the Harvard Business Review</a> blog that's mostly about job hunting but also a few other things. Do what you enjoy, build relationships, and stop stressing out about the state of the economy or your life. You'll find what you are looking for faster, or at least enjoy yourself way more while waiting for opportunity to arrive in the same amount of time.<br /><br />I like the premise. It's part of why I started contributing to the Cross Linux From Scratch project and why I started blogging. My database class notes blog might make me a little money from Google ads but I think the real value is in establishing myself as a resource and opening doors for me. Maybe (but probably not) I'll become a lesser version of Salman Khan and be known for presenting small chunks of learning. Who knows?! Certainly not me, but I am enjoying writing more.<br /><br />In mostly the same vein, <a href="http://www.marco.org/2011/09/26/gruber-merlin-sxsw-2009">Marco</a> linked to <a href="http://www.43folders.com/2009/03/25/blogs-turbocharged">a presentation</a> by John Gruber and Merlin Mann from SXSW 2009 that I really enjoy listening to. The title of their talk is awesome, "HOWTO: 149 Surprising Ways to Turbocharge Your Blog With Credibility!" (note: you will not hear them list 149 things, it's more like 5)</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Database Class Notes2011-09-26T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/09/database-class-notes<div class='post'>
The Stanford online database course's <a href="http://db-class.org">enrollment has opened</a> and I've got my <a href="http://db-class-notes.blogspot.com/">course notes blog</a> started!<br /><br />I'll be blogging my notes and thoughts on the course in as close to real time as I can. I hope to post up my solutions to assignments once allowed on <a href="https://github.com/bradfa">my Github</a> account.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Hewlett-Packard, WebOS... RIP2011-09-23T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/09/hewlett-packard-webos-rip<div class='post'>
Meg Whitman is the new CEO of HP. Leo Apotheker got to be CEO for less than a year and for most of that time Meg Whitman was on the board of directors. The board seemed to back Apotheker when he made decisions that roiled the web press. He killed almost everything HP got from the purchase of Palm, bought Autonomy, said the company is going to transition into "enterprise" software & services, and started discussions that HP was going to sell off their commodity computer business. The board didn't seem to object, at least not publicly. Apotheker was horrible at communicating with the shareholders, but that pales in comparison to the rather large changes being made; it's forgivable.<br /><br />But now they've fired Apotheker and put Whitman in his place. HP's not going to recover from Apotheker's decisions if Whitman is in charge.<br /><br />I'm sorry HP shareholders, employees, and customers. You're in for a continued rough ride in the near future.<br /><br />Why should anyone think that Whitman, who didn't have the best record at eBay (even though the major press seems to think otherwise), who failed at becoming the governor of California, and who was on the board of directors for most of the time Apotheker made such "bad" decisions, will do anything "better" than Apotheker?<br /><br />HP used to be an engineer's company. Woz worked there and loved it. If Woz thought HP was an engineer's dream company, I bet it was. HP made THE BEST calculators, THE BEST test equipment, THE BEST mid range laser printers, THE BEST low end ink jet printers, and probably tons of other products that were THE BEST of their time. HP isn't an engineer's company any more. Meg Whitman isn't going to make it back into an engineer's company.<br /><br />Today, what does HP build that's the best?<br /><br />WebOS had a chance to be THE BEST, even though they stumbled out of the starting blocks and were late to the party. 
Out of all the things that have changed at HP under Apotheker, the loss of WebOS as even a reasonable contender against iOS and Android, is the worst.<br /><br />HP's not THE BEST at anything any more. Rest in peace...</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Stanford Database Class and an Experiment2011-09-22T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/09/stanford-database-class-and-an-experiment<div class='post'>
Stanford is running a free online <a href="http://db-class.org/">databases course</a> this fall. I've signed up to get more info and once that info comes out, I'll be registering for the course. It runs from October 10 till December 12. I assume there will be two or three video lectures per week along with projects or homework.<br /><br />If allowed by the course guidelines, I'm going to blog my notes from lectures and post my projects / homework up on GitHub.<br /><br />I want to become a better writer, learn about databases, and possibly make a little money from Google ads. Taking the database course and blogging my notes and thoughts will help me to accomplish all three.<br /><br />My background is mostly in electrical engineering but I do write software. Although I've been formally programming since my first year of college (2001), I've never taken a formal algorithms, data structures, or database course. These days I write some C# for Windows and C for microcontrollers but I've also worked with Java and C++ and done a tiny bit of Python and Perl. My background probably matches that of a lot of self-taught coders out there, and I think that gives me a good audience: people who want to learn about databases and whom I can easily connect with.<br /><br />I had wanted to try out Tumblr but after opening an account and playing with their interface, I think I'll stick with Blogger. Tumblr wasn't necessarily bad, but I'm used to Blogger (even with the new interface) and writing in the HTML view is pretty unencumbered for me. I'll post up a link to the database class notes blog shortly.</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
A link to my database class notes blog:<br /><a href="http://db-class-notes.blogspot.com" rel="nofollow">http://db-class-notes.blogspot.com</a></div>
</div>
</div>
Introduction to Verilog2011-08-05T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/08/introduction-to-verilog<div class='post'>
I found a nice series of lectures given at the Indian Institute of Technology at Kharagpur by Prof. Sengupta. They're hosted on YouTube and I've made a playlist for lectures 2 through 7. There are quite a few lectures in this course and I'll be posting playlists for more later. <br /><br />Please be aware, at the start of each lecture video there is a very loud test tone for the first few seconds.<br /><br /><object width="480" height="385"><param name="movie" value="http://www.youtube.com/p/522694885E653BC7?version=3&hl=en_US&fs=1"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/p/522694885E653BC7?version=3&hl=en_US&fs=1" type="application/x-shockwave-flash" width="480" height="385" allowscriptaccess="always" allowfullscreen="true"></embed></object></div>
<h2>Comments</h2>
<div class='comments'>
</div>
Downsides and Upsides of Altera's Configuration Via Protocol2011-07-29T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/07/downsides-and-upsides-of-altera-s-configuration-via-protocol<div class='post'>
Yesterday, I wrote a little about <a href="http://bradfordembedded.blogspot.com/2011/07/modular-fpga-accelerated-computing.html">reconfigurable FPGAs</a> attached to the PCI-Express bus as an addition to the general purpose computer. The idea revolves around being able to reconfigure an FPGA over PCI-Express without requiring a PCI reset (basically a reboot). In this way, as processing demands change on the host computer, the FPGA can be configured to process data in the most effective way.<br /><br />The huge upside of being able to reconfigure an FPGA over PCI-Express is that there's no external programmer (such as Altera's USB Blaster) involved. The hardware needed to perform a configuration is all built into the FPGA and embedded in the operating system drivers running on the computer. This ability really opens up the possibility of having a large number of FPGAs connected over PCI-Express and configuring each one individually, quickly, and when ever you want.<br /><br />On the downside, the initial configuration that is loaded into the FPGA at boot time sets the I/O interfaces up for each pin and can't be changed. This is fine if you don't care about I/O customization. I'm envisioning a system where this isn't a large issue because the FPGAs would only be used for data processing, not for interfacing to I/O outside of the PCI-Express bus and a minimal set of devices located on the same circuit board.<br /><br />Another downside is that currently only the Cyclone V, Arria V, and Stratix V FPGAs from Altera support configuration via protocol over PCI-Express. Cyclone V is the most attractive as the Cyclone family usually is lower cost than Arria and Stratix. Cyclone V doesn't appear to be available yet for public consumption, which makes cost estimation difficult for those who don't have a standing relationship with Altera (myself included). 
The Cyclone IV has been available for a while, includes hard IP for PCI-Express 1.0, and is reasonably priced, but Altera doesn't support configuration via protocol for that family.<br /><br />Overall, for developing a PCI-Express reconfigurable FPGA system, the upsides strongly outweigh the downsides. I'm very excited about configuration via protocol!</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Modular FPGA Accelerated Computing2011-07-28T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/07/modular-fpga-accelerated-computing<div class='post'>
General purpose computers are great at doing a huge variety of things because they are programmable via software. But with this flexibility comes sloth: general purpose computers are slow compared to specialist hardware like ASICs or even FPGAs for a large number of computations.<br /><br />The recent push into <a href="http://en.wikipedia.org/wiki/Gpgpu">GPGPU</a> computing, using graphics processors to attack massively parallel problems that involve simple computations, is expanding the abilities of general purpose computers. AMD is betting the farm on it with their <a href="http://sites.amd.com/us/fusion/apu/Pages/fusion.aspx">Fusion</a> designs. The capabilities of GPGPU programs are limited but the speed at which they can be executed is approaching that of FPGA hardware. GPGPU computing gives the general purpose computer a rather low cost, somewhat easy to program addition that can improve processing performance for a subset of computations.<br /><br />But the main drawback to GPGPU computing is power consumption. In order to be this fast and have the ability to be programmed with software, graphics cards draw huge amounts of power compared to dedicated hardware like ASICs or FPGAs.<br /><br />This is where I think reconfigurable FPGAs come into play in the general purpose computer. FPGAs can be configured in many different ways and can execute some algorithms in hardware much faster than a general purpose computer can. In this way, they're a lot like GPGPU computing, except the power consumption is orders of magnitude lower.<br /><br />Pico Computing makes the <a href="http://www.picocomputing.com/e_series.html">Pico E-18</a> ExpressCard/34 FPGA device. It fits in an <a href="http://en.wikipedia.org/wiki/ExpressCard">ExpressCard</a>/34 slot and I assume conforms to the power requirements of ExpressCard/34. That means its power consumption should be about 3W max. Compare this to even mid-level performance GPGPU devices consuming hundreds of watts of power. 
Let's assume, for comparison's sake, that a GPGPU device consumes 100W of power and that the Pico E-18 or similar FPGA based device consumes 3W. At that ratio, you could run 33 FPGAs in the power budget of one GPGPU device! Or, even better, you can have one ExpressCard/34 FPGA in your laptop and not impact heat or battery performance dramatically.<br /><br />The main drawback of FPGA based computing is the up-front cost. GPGPU devices are relatively cheap because lots of people buy graphics cards. Very few people are currently buying FPGA devices for their computers. It's a chicken and egg problem but some high profile use cases are tipping the scales towards more pervasive use of FPGAs in general purpose computers; take <a href="http://news.slashdot.org/story/11/07/11/2232253/JPMorgan-Rolls-Out-FPGA-Supercomputer">JPMorgan</a>, for example.<br /><br />Along with high profile use cases showing the computing world what FPGAs can do, the ability to reconfigure an FPGA on the fly is very important. Usually, when an FPGA is powered up, it gets configured. Until the power goes away or a special programmer device is connected, FPGAs rarely get reconfigured while running. That's changing!<br /><br />Xilinx has <a href="http://www.xilinx.com/tools/partial-reconfiguration.htm">Partial Reconfiguration</a> and now Altera has announced <a href="http://www.altera.com/literature/wp/wp-01132-stxv-cvpcie.pdf">Configuration via PCI-Express</a>. Altera's technology basically allows you to only partially configure the FPGA in the usual way, such as with an Active Serial device, and then to configure the rest of the FPGA at a later time over PCI-Express dynamically. This means that you could load up a different FPGA configuration at the start of software execution on your PC. Then when you start a different program, another FPGA configuration is loaded. 
This offers more general purpose ability but with the speed of execution that an FPGA provides.<br /><br />This is huge!<br /><br />The main barrier to entry is the up-front FPGA purchase price. But cost can be reduced by selling smaller modules, like ExpressCard devices, and by ramping up volume. Ramping up volume requires showing people how they can utilize the technology for lots of currently slow, power hungry computations. Showing people how this technology can improve performance requires some very well built software and example FPGA code along with making the sale of these devices more like other general purpose computing hardware (vendors currently make it difficult to buy an FPGA based PCI-Express card). Altera and Xilinx aren't going to enter this market but I bet they'd love to support someone who was. The tools and support, let alone devices to be sold, provide a great reason for Altera and Xilinx to support a company doing this.<br /><br />It'd be a great start-up company and the business side has some awesome potential, too. Think of an app-store for FPGA configurations where the vendor gets a cut of every sale (like Apple does)! Let alone actually selling hardware and support contracts. Plus, there's no big player doing this yet, making it easier to enter the market.</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Xilinx also has a similar technology to Altera's Configuration via PCI-Express. <a href="http://www.xilinx.com/support/documentation/application_notes/xapp883_Fast_Config_PCIe.pdf" rel="nofollow">Nice PDF about it</a>.</div>
</div>
</div>
Function Pointers2011-07-01T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/07/function-pointers<div class='post'>
Function pointers are pretty cool. I learned about them this week, and then I solved a problem using them.<br /><br />At work, I'm writing some code for a 16 bit microcontroller. In the system I'm working with, a relatively slow external clock causes an interrupt to be fired off. Inside my code, I have a state machine that executes each time the interrupt fires. The state machine was implemented using a switch statement that contained a reasonable number of cases, one for each possible state.<br /><br />This worked great, up until I kept adding states. Eventually, the time to determine which state should be executed became about half as long as the period of the external clock. This is bad because it means some of my states may not have enough time to execute before the next interrupt comes along. If that happens, bad things occur and the state machine loses its place relative to the rest of the world.<br /><br />To solve this, I thought there had to be a way to create an array of pointers to functions. Then as I change state - represented as an index into the array - I could just call whatever function is pointed to in the array. It turns out that's very possible in C. Here's a <a href="http://www.newty.de/fpt/intro.html">good tutorial</a> I found.<br /><br />The assembly code for using an array of function pointers is much shorter than the switch statement code. This gives me more time to execute code in between interrupts and was not hard to implement. Each of the switch statement's cases (states) became a function, and at the start of execution an array of function pointers to each function is populated. Within the states, the array index can be changed based on inputs or the result of processing data.<br /><br />Extra documentation can be very helpful when using function pointers. A switch statement is easy to understand but an array of function pointers is a bit more opaque.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Crypto Load Balancer Using Off The Shelf Hardware2011-06-15T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/06/crypto-load-balancer-using-off-the-shelf-hardware<div class='post'>
At my day job, I spend a reasonable amount of time working with cryptographic and authentication systems. Lately, I've been reading about <a href="http://www.khronos.org/opencl/">OpenCL</a> and <a href="http://www.nvidia.com/object/cuda_home_new.html">CUDA</a>. I'm wondering if buying a high end graphics card to do some brute force number crunching would be worthwhile.<br /><br />Today on <a href="http://news.ycombinator.com/">Hacker News</a>, there was a link to <a href="http://info.iet.unipi.it/~luigi/netmap/">netmap</a>. Netmap looks like a neat way of getting very high network throughput (like saturating a 10G Ethernet line using only 1.66 GHz of processor) on standard hardware (no special ASICs or FPGAs).<br /><br />I've also read in the past about <a href="http://en.wikipedia.org/wiki/Transport_Layer_Security">TLS / SSL</a> and <a href="http://en.wikipedia.org/wiki/Load_balancing_%28computing%29">load balancers</a>. TLS / SSL is what's used to encrypt data going between a server and a client, such as for credit card number or username & password transmission. A traditional load balancer will sit on the network in-line before the cluster of servers that actually serve webpages. As requests come into the load balancer, it distributes the requests to the servers in such a way that no server gets overloaded.<br /><br />I've also read a tiny amount about load balancers that will decrypt and encrypt TLS / SSL traffic such that the webservers don't have to (encryption on general purpose CPUs is expensive). I'd imagine that, for these load balancers to do TLS / SSL inline, they need very high network throughput as well as very fast number crunching for encryption. Traditionally, I'd expect a load balancer such as this would use special hardware (as normal routers do) in order to obtain very high network throughput. I'd also expect custom FPGA code or ASICs would be used to provide high throughput encryption abilities. 
In both cases, these are low volume, very specialized systems that will be very expensive to create and sell.<br /><br />But what if someone could combine both netmap and OpenCL to perform load balancing and TLS / SSL in one box that uses off the shelf hardware?<br /><br />It probably wouldn't be as capable as the truly high end hardware, but it could probably compete in the mid-range and cost significantly less. The hardware required would be basically:<br /><ul><li>A fast processor / motherboard / RAM combo</li><li>A large number of PCIe 2.0 slots with a large number of lanes each</li><li>At least 2 10Gb Ethernet PCIe cards</li><li>A few high end ATI graphics cards to execute OpenCL code</li></ul><br />A high end server system with some add-in cards would fit the hardware bill. Then you'd just need a nicely set up OS (to support netmap and the ATI drivers) and some software to load balance and run the encryption. This isn't simple but it's less complex than a dedicated custom hardware system.<br /><br />I think this is a pretty neat idea. Of course, as more processors start to include encryption abilities, the viability of a device like this is reduced. But an advantage of this type of device over built-in encryption abilities in CPUs is that it's easy to update this device: we just write new software and deploy it normally. A CPU can't easily be updated to add additional encryption schemes once it is produced. <br /><br />Another concern would be that the load balancer would need to store the private key and pass it around to the graphics cards; this could be a security issue, but one that could be mitigated by having separate private keys for normal TLS / SSL traffic and traffic where really sensitive data is transmitted (like credit card numbers). 
The credit card processing server should probably have an <a href="http://en.wikipedia.org/wiki/Hardware_security_module">HSM</a> and that would be OK because traffic would be much lower.<br /><br />Something like this could really accelerate the adoption of HTTPS everywhere in order to prevent <a href="http://codebutler.com/firesheep">FireSheep</a> type attacks.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Conan's Dartmouth 2011 Commencement Address2011-06-14T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/06/conan-s-dartmouth-2011-commencement-address<div class='post'>
Watch Conan O'Brien give the commencement address to the Dartmouth class of 2011. It's funny and there's real life lessons that apply to everyone, not just those graduating.<br /><br /><iframe width="560" height="349" src="http://www.youtube.com/embed/ELC_e2QBQMk" frameborder="0" allowfullscreen></iframe></div>
<h2>Comments</h2>
<div class='comments'>
</div>
The Standing Desk2011-06-12T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/06/the-standing-desk<div class='post'>
At work, I started using a ghetto standing desk with my lab PC about a month ago. At the beginning of the year, I got a new Dell tower and monitor (the U2311, which is really nice). The monitor box standing on top of a few plastic "project boxes" on top of a slightly higher than normal lab workbench is just the right height for my keyboard and trackball. The monitor sits on top of the Dell tower.<br /><br />I like it, but the work I'm doing these days leads to some inefficiencies. I spend a lot of time reading reams of documentation, connecting different things to various circuit boards, and running a logic analyzer. All of this stuff happens at the normal lab bench height. Because of this, I sit down about 25% of the time when working at my standing desk. That's OK, but it would be much nicer if I could raise the workbench up to the right height for my keyboard and trackball (it's about a 14" difference, I think).<br /><br />The floor in the lab is 1 foot square industrial stick-on "tiles" (I think they're like linoleum). I don't use any kind of standing mat. My everyday shoes are 4 year old Adidas soccer shoes. The first week was a bit hard on my legs and back, I'd be stiff each morning, but it has gotten better. I still feel like I'm doing more physical work by standing (I've read it burns 3x the calories of sitting) but I'm not uncomfortable standing for 4 hours at a time.<br /><br />The one thing I have noticed is that I'm more focused on my work when standing. Lately I'm writing C code for a PIC24f microcontroller. But I like to sit when I'm doing real hard core thinking, like when figuring out timing of interrupts or reading through assembly code. I'm not sure why this is. Having everything I work with up at the right height might help but I've not yet found the right sized cinder blocks lying around.<br /><br />In my cube I still sit. My chair isn't fancy but it isn't too bad for comfort. At home I sit on a wooden chair when working on my desktop. 
My main motivation for trying the standing desk was that the chair I had in the lab was uncomfortable, and rather than hunt for a chair that would work, making a standing desk was quick and cheap.<br /><br />So far I like it, although it does have downsides (sitting for hardcore thinking, not everything being at the same level, etc). My coworkers think it's funny that I stand up but my boss has started pondering the idea of trying it since I've been using it successfully for a decent amount of time now.<br /><br />If you're uncomfortable sitting all day, try a standing desk. It's all the rage on <a href="http://news.ycombinator.com/">Hacker News</a> ;)</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Install the HP webOS SDK on Debian 6 Squeeze2011-05-14T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/05/install-the-hp-webos-sdk-on-debian-6-squeeze<div class='post'>
Quick instructions on how to install HP's webOS SDK (software development kit) on Debian 6 Squeeze x86_64. I was a little bored and was reading about webOS...<br /><br />Edit /etc/apt/sources.list to add the VirtualBox repo and enable contrib and non-free for your favorite Debian repo line. Note that sources.list entries must each be on a single line. The lines we care about look something like this:<br /><br /><code>deb http://mirror.rit.edu/debian/ squeeze main contrib non-free<br />deb http://download.virtualbox.org/virtualbox/debian squeeze contrib non-free</code><br /><br />Grab the <a href="http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc">Oracle apt signing key</a> and install it with:<br /><br /><code>sudo apt-key add oracle_vbox.asc</code><br /><br />Run a quick update for aptitude (or apt-get if you prefer):<br /><br /><code>sudo aptitude update</code><br /><br />Install the needed packages (Java6 JDK [you supposedly only need the JRE but I'm using the JDK as well], VirtualBox 3.2, and ia32 libraries [only on 64 bit]):<br /><br /><code>sudo aptitude install sun-java6-jdk virtualbox-3.2 ia32-libs</code><br /><br />Grab a copy of HP/Palm's <a href="https://cdn.downloads.palm.com/sdkdownloads/2.1.0.519/sdkBinaries/palm-novacom_1.0.64_amd64.deb">novacom</a> and the actual <a href="https://cdn.downloads.palm.com/sdkdownloads/2.1.0.519/sdkBinaries/palm-sdk_2.1.0-svn409992-pho519_i386.deb">SDK</a> (be wary, it's 185MB in size) in deb format.<br /><br />And, finally, install both of those:<br /><br /><code>sudo dpkg -i --force-architecture \<br /> palm-sdk_2.1.0-svn409992-pho519_i386.deb<br />sudo dpkg -i --force-architecture palm-novacom_1.0.64_amd64.deb </code><br /><br />As the novacom package provided by Palm/HP uses Upstart as found on Ubuntu (Debian doesn't), you have to start the novacom daemon yourself. It can be found in /opt/Palm/novacom/novacomd. 
Novacom should allow you to push apps into the virtual machine for testing; Palm/HP don't want you messing with the file system in weird ways.<br /><br />To launch the emulator, fire up a console and run <code>palm-emulator</code>. It will first build the virtual machine and then launch it. Use the escape key (on your keyboard) to go back; the end key acts like the button on the phone; and home does something (sort of like end, but I'm not sure why it's different). Clicking and dragging work just like your finger would.<br /><br />Enjoy! :)<br /><br />Pretty picture below:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-_wnUQI-PiX8/Tc7_zHMrsLI/AAAAAAAAANE/JirJK1_Gbgo/s1600/webOS_virtual_machine.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="400" width="238" src="http://4.bp.blogspot.com/-_wnUQI-PiX8/Tc7_zHMrsLI/AAAAAAAAANE/JirJK1_Gbgo/s400/webOS_virtual_machine.png" /></a></div></div>
<h2>Comments</h2>
<div class='comments'>
</div>
Expectations and the i.MX53 QSB "Lab" Session2011-05-10T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/05/expectations-and-the-i-mx53-qsb-lab-session<div class='post'>
My expectations were high for the Freescale i.MX53 lab session that was held this morning. I was excited! The agenda had items like Linux on the i.MX53! LTIB! Android development on the i.MX53! and free lunch!<br /><br />My expectations were not realized.<br />There was free lunch but I left.<br /><br />The morning consisted of a lecture by a very nice and smart guy about general embedded Linux things and the i.MX53 QSB hardware. No fault on him but the material was way too basic. I was expecting to get right into using LTIB and learning about the LTIB configuration and build system and how it applies to the i.MX53. I don't have much experience with LTIB but we didn't really do much with it other than run a script and look at some options in the menuconfig interface.<br /><br />Those who felt comfortable with embedded Linux were given the serial cables and SDcards so they could play during the lecture. I tried to do the Ubuntu demo but just loading it onto the SDcard must have taken a good hour. Then the USB to serial converter decided to not play nice with VMware Workstation (host machines were Ubuntu 10.10 in a VM on top of Windows). And then once the demo finished loading and I booted it... It's Ubuntu! What's the fun of that? What did I learn? Nothing.<br /><br />SDcards are slow. But that's another topic altogether.<br /><br />I then played a little with LTIB but I didn't like it. I felt like I had no control over the choices. I'm not sure if that is LTIB's fault or Freescale's fault. We didn't have a connection to the Internet so there was no way for me to use any sources other than what was provided. Regardless, it was a "type the following few lines, look at some options that have already been selected, push go" exercise. One thing I really disliked was that the LTIB menuconfig had almost no help for options that were vague. 
That's not cool.<br /><br />I did meet a nice guy named Jon who works for a local company doing board support packages for custom embedded boards; that was cool. But overall, I wish I hadn't gone. It was nice out today so I did the next best thing and left at lunch to mow the lawn. Way better use of my vacation day.</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Lee, sorry to hear that the support isn't up to your expectations. I'm sorry but I'm not available for freelance support, hopefully you'll be able to find someone who is. Best of luck.</div>
</div>
<div class='comment'>
<div class='author'>Lee Zhi Hong</div>
<div class='content'>
Hi. I am using the Freescale iMX53 QSB too. HAving problem and the support is not good enough to solve my problem.<br />I have to look for freelance to help me through it.<br /><br />Good luck with it.</div>
</div>
</div>
The TuxedoBoard has a Brain! (picked out)2011-05-06T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/05/the-tuxedoboard-has-a-brain-picked-out<div class='post'>
I've chosen an ARM SoC (system on chip) for the TuxedoBoard! The Texas Instruments <a href="http://focus.ti.com/docs/prod/folders/print/am1707.html">AM1707</a> ARM9 core will meet my requirements.<br /><br />The AM1707 is a 456-MHz ARM926EJ-S SoC that has an internal Ethernet MAC, multiple UARTs (serial ports), an external memory interface for a 32 bit wide SDRAM bus supporting up to 256MB RAM, an external NOR flash memory interface multiplexed with an SD card controller, USB, an LCD interface, and a few other interesting features.<br /><br />In keeping with the stated goals for the TuxedoBoard project, the schematic capture and layout will be done with <a href="http://www.lis.inpg.fr/realise_au_lis/kicad/">Kicad</a>. Kicad supports both Windows and Linux and is covered by the GPLv2 license. I'm currently using the 20100314 release as that's what comes with Debian 6.<br /><br />I'm in the process of creating the schematic symbols for the AM1707. I'm new to Kicad and I was not able to find an already made symbol with a permissive license, so this is taking a little time. I've made the power symbol and the EMIF_A (external NOR flash / SD card interface) symbol. 
Some pretty pictures of what I've got so far:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-E9u55OK94TI/TcPQQMCZbpI/AAAAAAAAAMw/3gz37OglG5k/s1600/AM1707_PWR_symbol.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://3.bp.blogspot.com/-E9u55OK94TI/TcPQQMCZbpI/AAAAAAAAAMw/3gz37OglG5k/s400/AM1707_PWR_symbol.png" width="174" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-07VOo7CIwHE/TcPQVLZYU6I/AAAAAAAAAM4/Z9d1b6ZRAnU/s1600/AM1707_EMIF_A_symbol.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="187" src="http://2.bp.blogspot.com/-07VOo7CIwHE/TcPQVLZYU6I/AAAAAAAAAM4/Z9d1b6ZRAnU/s400/AM1707_EMIF_A_symbol.png" width="400" /></a></div></div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Charles</div>
<div class='content'>
It'd be pretty fantastic to see that expanded outward to an easily modified (say I wanted to use a cortex-9 processor), extendable (maybe I want to include a distinct graphics chipset), and upgradable open module. <br /><br />Then again, I'm just a coder, I'll leave the circuit stuff to you guys.</div>
</div>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Charles, yes, I saw that the other day. Looks like an interesting concept. In order to hit that price point, they're going to have to build a heck of a lot of them. I haven't read extensively about that project, but it's currently unclear to me if they're using open source tools to do the design and how open they're going to be about the hardware once it's out. Regardless, I hope they succeed. The world can only get better when there's more low cost embedded development kits :)</div>
</div>
<div class='comment'>
<div class='author'>Charles</div>
<div class='content'>
Andrew, you seen this yet:<br /><br />http://www.geekosystem.com/raspberry-pi-25-dollar-pc/<br /><br />Not sure if it applies to what you're doing, but the two projects sounded similar enough that i thought i'd point it out.</div>
</div>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Yeah. I'm also going to make 3D files for the parts. Kicad supports integration of Wings 3D models and can render the board. I just think it looks cool. Probably will end up being a lot of work :)</div>
</div>
<div class='comment'>
<div class='author'>brian</div>
<div class='content'>
Just think you are going to have to make the parts in the layout tool as well. It might not be as bad as it sounds. If they have a similar part you can just work from that. You may have already know that. Good job :)</div>
</div>
</div>
Picking an Open Source Hardware License2011-04-30T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/04/picking-an-open-source-hardware-license<div class='post'>
I've been wandering around the Definition of Free Cultural Works open source hardware <a href="http://freedomdefined.org/OSHW">Statement of Principles</a> and doing a little thinking.<br /><br />I like the ideas behind open source hardware as stated in the Statement of Principles. I want the TuxedoBoard to have a license that complies with these principles. The difficult part is finding an existing license that accomplishes those principles. I'm not a lawyer so writing one myself seems like a bad idea to me.<br /><br />The whole open source hardware movement is at the heel of the hockey stick curve. Having a high quality open source hardware license, like the GNU GPL is for software, is something that's definitely needed.<br /><br />Creative Commons is nice but it is too general and doesn't require enough. The GPL seems awkward when applied to physical things or designs for physical things. The BSD and MIT licenses allow inclusion without releasing source and also seem to apply best to software. <br /><br /><a href="http://www.tapr.org/ohl.html">TAPR</a> doesn't quite go far enough and I don't think it completely complies with the OSHW Statement of Principles. For example, under TAPR, schematics or layouts can be distributed in PDF form. That's not helpful! I want a license that requires distribution of things like schematics and layouts in the original computer file format. <br /><br />So far, I've not found a high quality open source hardware license that I like. I'm confident that one will come about soon but I'm not sure who's going to write it. I'm not that concerned yet, TuxedoBoard is still just an idea, but the time to be concerned is quickly approaching. <br /><br />Maybe the best bet is a modified TAPR license. But then again, what license is the TAPR license text under? It's copyrighted but does not appear to be licensed for modification or reuse.</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
No, not yet. :(<br />I'll keep looking, though.</div>
</div>
<div class='comment'>
<div class='author'>brian</div>
<div class='content'>
Have you found anything else regarding open source licensing with schematics and Layouts. I was downloading KiCAD and was wondering the same thing I went on your blog and sure enough you had written about it.</div>
</div>
</div>
TuxedoBoard BGAs and Minimum Ball Pitch2011-04-28T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/04/tuxedoboard-bgas-and-minimum-ball-pitch<div class='post'>
After some discussion with Brian and lots of reading, I've learned quite a bit more about BGAs and ball pitch. I'm now leaning towards wanting at least 1mm ball pitch on any BGA parts used on the TuxedoBoard. 0.8mm ball pitch seems like the dividing line between being able to do a "traditional" dog bone fanout and needing to use via-in-pad. I want to avoid doing via-in-pad if at all possible. Not all board manufacturing houses can build PCBs this way, and even those that can will charge more or have lower yield than with other methods. Even <a href="http://i.screamingcircuits.com/docs/Via_In_Pad_Guidelines.pdf">Screaming Circuits</a> would prefer to not deal with via-in-pad, although it sounds like they can. I believe <a href="http://blog.screamingcircuits.com/2009/08/we-build-the-beagleboard.html">they built the BeagleBoards</a> but even they consider the technology used on a low cost BeagleBoard to be special.<br /><br />With 1mm ball pitch BGAs, two or three 4mil (or 3.5mil) <a href="http://www.pcdandf.com/cms/magazine/95/3760">traces can fit between vias</a>, depending on a whole slew of other parameters. Brian warned me that some board houses can't reliably do 3mil and smaller traces; that's good to know. For the number of pins I'm considering for the ARM SoC (in the 300 pin range), a 6 layer board should be doable with 4 routing layers and 2 power. Now, whether 4 routing layers and 2 power is possible for the pinout, that's another question. :)<br /><br />With this in mind, a minimum of 1mm ball pitch seems the best way forward for now. I want the design to be easy and inexpensive to manufacture at a large number of places. In order to achieve this, microvias and via-in-pad aren't the way to go.<br /><br />Now, to pin down which SoC will be used!<br />The Freescale i.MX25 parts are 0.8mm and smaller ball pitch. 
:( But TI does make the <a href="http://focus.ti.com/docs/prod/folders/print/am1707.html">AM1707</a> ARM9 part with 1mm ball pitch, which now looks interesting...</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Regarding layer count, it looks more like a 6 layer board would have 3 power planes (2 ground, 1 power) and 3 routing planes (top, bottom, and 1 inner).</div>
</div>
</div>
TuxedoBoard SoC Selection2011-04-27T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/04/tuxedoboard-soc-selection<div class='post'>
I don't yet know which ARM <a href="http://en.wikipedia.org/wiki/System-on-a-chip">SoC</a> (system on a chip) I'm going to use for the TuxedoBoard. Cost and features both weigh heavily on my mind when looking at what different vendors have to offer. I wrote an email spewing some of my thoughts on this topic yesterday and I'd like to share an edited version:<br /><br />Due to my desired price point of about $100 for assembled and tested TuxedoBoards and because I expect the first build's volume to be about 100 boards, I'm looking to keep the SoC cost to under $8 each.<br /><br />So far, I've not had success finding Cortex-A8 devices in the $8 range when purchased in quantities of about 100 units, regardless of vendor or features. There's quite a good variety of ARM9 parts in the $6 range that include all the features I think I'm interested in. I would prefer to use a Cortex-A8 part but if the cost is too high, ARM9 will get the job done and allow my price point.<br /><br /><a href="http://en.wikipedia.org/wiki/Ball_grid_array">BGA packages</a> seem to be the only package offered for all Cortex-A8 and many ARM9 parts. That's OK with me as I've realized that selling an assemble-it-yourself kit isn't as important as making everything open so others can utilize the design. I'd prefer 1mm BGA ball pitch to allow for easy and cheap assembly but 0.8mm is second choice and would work fine. Pitch smaller than 0.8mm is a real turnoff to me because assembly tolerances get very tight and some assembly houses can't promise good yields. As I'm not looking to make the board into a mobile phone, or other space constrained device, physical size of the package isn't as important as ease and cost of assembly. <br /><br />I'm not that excited about <a href="http://en.wikipedia.org/wiki/Package_on_package">package-on-package</a> (POP) memory, like used on the BeagleBoards. 
I get the impression that a good number of assembly houses aren't yet comfortable with POP and that the BeagleBoard team went through some issues with manufacturing yield at first. My first manufacturing run will only be around 100 boards, so yield issues would be very bad and would ruin the finance side of things.<br /><br />Freescale makes a very nice-looking ARM9 core in the <a href="http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=i.MX253">i.MX253</a>. The i.MX253 prices out in the $6 range at <a href="http://www.digikey.com/scripts/us/dksus.dll?Detail&name=MCIMX253DJM4A-ND">Digi-Key</a>. TI makes a competing ARM9 part in the <a href="http://focus.ti.com/docs/prod/folders/print/am1802.html">AM1802</a>, but I'm not clear on the pricing or features yet; I still have to look into that.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
I Dub Thee, TuxedoBoard2011-04-26T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/04/i-dub-thee-tuxedoboard<div class='post'>
In order to have a name that I can refer to my planned open source hardware embedded Linux board with, I've chosen TuxedoBoard. Brian wrote me, "how about the TUX or Tuxedo board, its fitting and fun like beagle." Yes, it is! :)<br /><br />I've changed the decor of my blog just slightly, replacing the formerly orange background with an almost black one. It's more fitting of a tuxedo. This is not <a href="http://www.amazon.com/gp/product/B000SBH50Y/ref=as_li_ss_tl?ie=UTF8&tag=bradford07-20&linkCode=as2&camp=217145&creative=399349&creativeASIN=B000SBH50Y">Dumb and Dumber</a>.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
More Open Source Hardware Ramblings2011-04-25T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/04/more-open-source-hardware-ramblings<div class='post'>
I'm that much closer to designing a completely open source embedded computer!<br /><br />I've been reading <a href="http://www.amazon.com/gp/product/1936719010/ref=as_li_ss_tl?ie=UTF8&tag=bradford07-20&linkCode=as2&camp=217145&creative=399349&creativeASIN=1936719010">Do the Work</a> by Steven Pressfield and I've been rambling on and on about designing a board to my friend Brian. The hardest part is starting, then the hardest part is shipping. I'm working on the starting part and wanted to share my thoughts.<br /><br />The concept for an open source embedded board is that everything will be open source. By everything, I mean everything! The tools used to design the hardware, both schematics and layout, will be open source tools (currently leaning towards Kicad). The files used for schematic capture and layout will be open sourced (not just PDFs but real actual files you can pick up and use to make your own board). The information on how to bring up the board from a blank, just manufactured state, will be online with an open documentation license. The information on how to build Linux and a bootloader and useful software will be online with an open documentation license. Everything will be open!<br /><br />Along with general openness of the design process, I'd like to keep any software written or used on the board open as well. So no binary blob kernel drivers, no closed source software, and no non-free IP blocks. This is hard. It means no hardware accelerated 3D graphics. It means limited (probably no) DSP blocks.<br /><br />I'd like to be able to sell this board for $100 but I'm not sure if that's a reasonable target price. I'm not looking to make big bucks doing this; it's a learning exercise for me. 
But shipping product to customers is a goal I have because it will force me to actually get things working. A more realistic price point is probably $150, but then it loses the appeal of being cheap because a BeagleBoard is only $125 now and my board won't offer much more in terms of hardware.<br /><br />In order to meet the $100 price goal, I wouldn't use a Cortex-A8 part. I'd probably use an ARM9 processor. That gets the price lower but sacrifices a lot of nice things that Cortex-A8 parts have (faster, better floating point, etc). I'm not yet sure if the trade-off is worth it, in either direction. I'm leaning towards ARM9 because I think price is really important, even if my board is the only one out there that's completely open.<br /><br />As a functionality differentiator, I'd like to have hardware compatibility with <a href="http://shieldlist.org/">Arduino shields</a>. There's already a huge amount of hardware out there that works well with Arduino systems and is aimed at people who like to play with little computer systems. Taking advantage of that ecosystem while showing people how they can communicate with the same hardware, but from within Linux, would be really cool. An ARM9 is hugely more powerful than an ATmega and is a nice halfway point between an Arduino and a full PC (size, power, price, and ability wise).<br /><br />Once I have a design that will work, I plan to fund the build of prototypes and shipping hardware using <a href="http://www.kickstarter.com/">Kickstarter</a>. <a href="http://kck.st/hxl2x9">Open Vizsla</a> inspired me that this type of funding scheme for hardware can really work.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Free IP Block Boards?2011-04-17T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/04/free-ip-block-boards<div class='post'>
At the LFCS (Linux Foundation Collaboration Summit) recently, some of the discussion was about system on chip (SoC) vendors including IP blocks (basically special circuitry to interface to the world in a specific way, like for outputting 3D graphics) in their designs that require non-free kernel drivers. By non-free, I mean that the source code is not available without some type of non-disclosure agreement. <a href="http://lwn.net/SubscriberLink/437929/256fdc3dd5a456f0/">LWN mentioned</a> this. It's basically an accepted fact of life in the embedded world right now that if you want certain kinds of hardware, you're stuck with non-free kernel drivers.<br /><br />I don't want to deal with non-free IP blocks. The whole point of using Linux on embedded devices is because stuff works, and when it doesn't, you have the source and can see why. This enables delivery of a system faster and more effectively.<br /><br />Are there any low cost embedded development boards out there that do not include any non-free hardware? I'd prefer a board where all the drivers exist in the mainline kernel, but stuff in -staging is OK, too. An ARM processor isn't a requirement, I'm open to MIPS, PowerPC, SuperH, x86, or any other type, just as long as I can use all of the hardware without needing any non-free code. Oh, and it should cost $150 or less.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
The Freescale i.MX53 QSB Looks Neat2011-04-13T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/04/the-freescale-i-mx53-qsb-looks-neat<div class='post'>
Yesterday, I received an email from a Freescale representative saying that they would be holding a free day-long "lab" session in Rochester (where I live) next month about the i.MX53 Quick Start Development Board. Topics to be covered include getting both Linux and Android up and running on the board as well as showing how to set up a development environment on a host PC. There will be a few different "labs" (mostly following a script I assume) but everything will be provided and I think they include lunch. All attendees receive a $50 off coupon towards the purchase of an i.MX53 QSB.<br /><br />I had never heard of the i.MX53 QSB before so I was intrigued.<br /><br />It turns out, the <a href="http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=IMX53QSB">i.MX53 QSB</a> looks like a pretty cool little board. It's got Freescale's implementation of an ARM Cortex-A8 core running at 1 GHz, 1 GB of DDR3 RAM, USB, Ethernet, VGA (and LVDS) video output, SD card, microSD card, audio, and an expansion port. All for $150. One cool thing you can add to the expansion port is a 800x480 touch screen LCD!<br /><br />My impression is that Freescale was inspired by the BeagleBoard-xM, although I don't know for sure. The <a href="http://www.freescale.com/files/32bit/doc/user_guide/IMX53QSBRM.pdf">written documentation</a> appears to be a bit more in-depth than the BeagleBoard-xM and comes across as more professional but there are a lot of typos. The price is the same as the BeagleBoard-xM (without a coupon) and the feature set is very similar, except a few nice extras (expansion port and double the RAM).<br /><br />I'm very happy to see that Freescale is getting into this market. The $150 price point is cheap enough for hobbyists (like me) but also cheap enough that employers won't balk at an engineer asking to buy one. 
Both the i.MX53 QSB and BeagleBoard-xM are going to help grow the ARM device ecosystem at tremendous rates.<br /><br />Will Freescale be able to create a community around the i.MX53 QSB in the same way that the BeagleBoard has?<br /><br />That's the million dollar question. Just having cool hardware or software isn't enough. Community is probably more important. Solving problems, posting how-tos, sending code up-stream, having an IRC channel and mailing list... Without that, the i.MX53 QSB won't be nearly as successful as the BeagleBoards.<br /><br />I, for one, welcome our new low cost ARM development boards!</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Git is Cool!2011-04-05T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/04/git-is-cool<div class='post'>
I'm working on adding a patch to the CLFS embedded book this morning to enable soft floating point support in libgcc for ARM when used with uClibc. I had committed the patch in my CLFS embedded git repo but with a name that wasn't really in line with other CLFS patches. I hadn't yet pushed that commit and I wanted to change the name.<br /><br />I hadn't realized that I wanted to change the name until I started adding the new patch to the DocBook. So now I have unstaged changes and I want to change the name of a patch file that had already been committed. Also, having a commit that just changes the name to what it should have been in the first place seems messy to me; there's no reason to have two commits when one will do.<br /><br />Git makes it soooo easy! Thank you Git developers!<br /><br />Move the file, commit that change (and only that change), stash the unstaged changes, interactively rebase to squash the name change onto the creation of the patch, pop the unstaged changes off the stash, and continue with my DocBook updates!<br /><br />$ git mv incorrect-name.patch correct-name.patch<br />$ git commit -m "Corrected name"<br />$ git stash<br />$ git rebase -i HEAD~2<br />$ git stash pop<br /><br />And to think I used to administer a Subversion server and tell everyone how great that was... :)</div>
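The same fold-a-rename-into-an-earlier-commit flow can also be done non-interactively with a "fixup!" commit and <code>--autosquash</code>, which is handy for scripting. The throwaway repo, file names, and commit messages below are invented for the demo; it's a sketch of the technique, not the exact session from the post:

```shell
# Demo: rename a committed file, then fold the rename into the
# original commit using a "fixup!" commit and --autosquash.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 'soft-float fix' > incorrect-name.patch
git add incorrect-name.patch
git commit -qm 'Add libgcc soft-float patch'

git mv incorrect-name.patch correct-name.patch
# The "fixup! <subject>" message marks this commit for squashing.
git commit -qm 'fixup! Add libgcc soft-float patch'

# Replay history with the fixup folded into the original commit.
# GIT_SEQUENCE_EDITOR=true accepts the generated todo list unchanged.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root
```

After the rebase, history contains a single commit that adds the file under its correct name. `git commit --fixup=<commit>` will write the "fixup!" subject line for you instead of typing it by hand.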
<h2>Comments</h2>
<div class='comments'>
</div>
What Is CLFS?2011-03-17T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/03/what-is-clfs<div class='post'>
In this week's <a href="http://beagleboard.blogspot.com/2011/03/beaglecast-2011-03-14-beagleboard-xm.html">BeagleCast</a>, my blog post about <a href="http://bradfordembedded.blogspot.com/2011/03/toolchain-check-kernel-check.html">getting a kernel running</a> on my BeagleBoard-xM with CLFS got a mention. That's cool! But there seemed to be some confusion around what CLFS really is. Let me try to explain, in my own words:<br /><br />CLFS stands for <a href="http://trac.cross-lfs.org/">Cross Linux From Scratch</a>, it's an offshoot of the LFS (<a href="http://www.linuxfromscratch.org/">Linux From Scratch</a>) project. The goal of both LFS and CLFS is to provide instruction on how to build a Linux system, step by step, from source. In both projects, first a <a href="http://en.wikipedia.org/wiki/Toolchain">toolchain</a> is built and then a bootable GNU/Linux operating system is built using that toolchain. CLFS differs from LFS in that CLFS builds a GNU/Linux system for an architecture different than the one doing the building. For example, if I have an x86 based computer on my desk that I want to use to build GNU/Linux for a PowerPC system I just bought off eBay, I'd use CLFS rather than LFS.<br /><br />The methods described by CLFS are very similar to the methods used by <a href="http://www.openembedded.org/index.php/Main_Page">OpenEmbedded</a> or <a href="http://buildroot.uclibc.org/">Buildroot</a>, but each step is described and there's very little automation. You literally manually build, step by step from upstream source, a toolchain and then a functioning GNU/Linux system. Throughout the books there is information on all the different choices you are required to make. For example: if targeting an ARM processor, when building GCC, you have to choose which ARM version to target, like "armv7-a" for Cortex-A8.<br /><br />The CLFS project has three different books: Standard, Embedded, and Sysroot. 
The Standard book is the most developed book and mostly targets people who want to build a cross-compiled system for a desktop computer. The Embedded book (the one I work on) builds a minimal GNU/Linux system using uClibc targeted at embedded devices where resources may be constrained. The Sysroot book uses a different technique to accomplish goals similar to those of the Standard book. Both the Standard and Embedded books are under active development; the Sysroot book hasn't been updated in a while.<br /><br />Where OpenEmbedded and Buildroot hide a lot of the complexity of building a cross compiled version of GNU/Linux, CLFS hides almost nothing. Think of CLFS as more of a learning experience than a quick way to build a cross system. A big advantage of CLFS over an automated system is that errors are usually easy to identify: since everything's done manually, you stay at the terminal and you'll probably watch the configure scripts and compiler output. If something doesn't look right, you will have a much better idea of where the problem is, if not what the problem is.<br /><br />If you'd like to help with the CLFS project, or if you just have questions, come join the IRC channel (#cross-lfs) on Freenode and / or sign up for the clfs-support <a href="http://trac.cross-lfs.org/wiki/lists">mailing list</a>.<br /><br />PS: On Freenode (both in #cross-lfs and #beagle), I'm user "bradfa".</div>
<h2>Comments</h2>
<div class='comments'>
</div>
File System - Check!2011-03-17T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/03/file-system-check<div class='post'>
I've gotten an Embedded <a href="http://cross-lfs.org/view/clfs-embedded/arm/">CLFS</a> system running on my BeagleBoard-xM!<br />I made a <a href="https://gist.github.com/874183">public gist of the boot-up</a>.<br /><br />I'm still using the supplied U-Boot so I'm not using the automated "bootcmd", but that's shortly coming up on my list of things to fix (along with getting a few microSD cards). I've put my kernel (that was <a href="http://bradfordembedded.blogspot.com/2011/03/toolchain-check-kernel-check.html">built before</a> and uses no modules) in the first FAT partition on the microSD card and then created a 1GB ext3 partition as the 3rd partition, which is where my root file system lives. In order to boot I use the following U-Boot commands:<br /><ul><li>setenv console ttyO2,115200n8</li><li>mmc init ${mmcdev}</li><li>setenv mmcroot /dev/mmcblk0p3 rw</li><li>run loaduimage</li><li>run mmcboot</li></ul><br />When booted, the system's only using about 8MB of RAM according to 'top', which isn't half bad. Granted, it's not doing anything useful yet, but that's a nice starting place. My system doesn't have networking yet, but that's next up on the to do list along with an SSH server.<br /><br />In order to get this far, I've made some changes that aren't yet in the Embedded CLFS book. I've opened Trac tickets that document the issues and I'll be updating the book soon (hopefully this weekend).</div>
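Once those manual commands work, the same sequence can be baked into U-Boot's environment so the board boots unattended. This is only a sketch: the variable names (<code>mmcdev</code>, <code>loaduimage</code>, <code>mmcboot</code>) are the ones from the stock xM environment used above, and <code>saveenv</code> assumes your U-Boot has a writable environment store:

```
setenv console ttyO2,115200n8
setenv mmcroot '/dev/mmcblk0p3 rw'
setenv bootcmd 'mmc init ${mmcdev}; run loaduimage; run mmcboot'
saveenv
```

With <code>bootcmd</code> set, the board should run the whole chain on its own after the autoboot countdown expires.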
<h2>Comments</h2>
<div class='comments'>
</div>
Toolchain, Check! Kernel, Check!2011-03-14T00:00:00-04:00hhttp://www.bradfordembedded.com/2011/03/toolchain-check-kernel-check<div class='post'>
I've been working on the <a href="http://trac.cross-lfs.org/">CLFS embedded book</a> for a few months now. I've been learning a lot and my goal has been to get a CLFS embedded system running on my BeagleBoard-xM. I'm now one step closer!<br /><br />I've built my own toolchain using CLFS instructions currently found in <a href="http://git.cross-lfs.org/?p=abradford/clfs-embedded.git">my git repo</a>. My toolchain consists of:<br /><ul><li>Linux Headers from 2.6.36</li><li>GMP 5.0.1</li><li>MPFR 3.0.0</li><li>MPC 0.8.2</li><li>Binutils 2.21</li><li>GCC 4.5.2</li><li>uClibc 0.9.31 (with uClibc provided patches)</li></ul><br />My kernel is from <a href="http://git.kernel.org/?p=linux/kernel/git/tmlind/linux-omap-2.6.git">tmlind's OMAP tree</a>, version 2.6.38-rc8. I started with the omap2plus_defconfig and modified it a little bit (although modification may not be needed, still have to check that).<br /><br />I'm using the provided xM U-Boot but with a few slight changes to the boot args and boot up sequence:<br /><ul><li>setenv console ttyO2,115200n8</li><li>mmc init ${mmcdev}</li><li>run loaduimage</li><li>run mmcboot</li></ul><br />I've made public gists of the <a href="https://gist.github.com/869016">boot console output</a> and my <a href="https://gist.github.com/869020">kernel config</a>, in case anyone is interested.<br /><br />Next step is to get the CLFS embedded filesystem up and running on my xM!<br /><br />EDIT: Thought my selection of ABI and triplets might be of interest as well:<br /><ul><li>target triplet: armv7a-unknown-linux-uclibceabi</li><li>ABI: aapcs-linux</li><li>little endian</li><li>hard float</li><li>arm mode only (no thumb instructions)</li><li>vfpv3 floating point hardware</li></ul>Also, happy Pi day!</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Chromium Minimum Font Size2011-03-03T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/03/chromium-minimum-font-size<div class='post'>
I've recently switched to using Chromium. I liked Firefox / Iceweasel but it has gotten a bit slower over time and Chromium is now faster for the places I visit on the web. Also, all the plugins I use on Firefox / Iceweasel are now available on Chromium (mainly Adblock Plus and Xmarks).<br /><br />But one thing I was missing, that Firefox / Iceweasel had, was an easy way to set the minimum font size through the GUI settings panel. Firefox / Iceweasel makes it easy: just pick the smallest font size you want from a drop-down menu where you pick your default fonts. Chromium (at least the version that ships with Debian 6) doesn't have a menu choice to set this and the default was causing some pages I visit to have smaller fonts than I desired. And I don't like having to zoom.<br /><br />But the fix is easy!<br />Simply open up your ~/.config/chromium/Default/Preferences file and in the "webkit" section, add these two lines (make sure Chromium is not running while you edit and save the file):<br /><br />"minimum_font_size": 14,<br />"minimum_logical_font_size": 14<br /><br />You can substitute your favorite minimum font size for the 14 I've chosen. Next time you start Chromium, fonts won't be smaller than 14! :)</div>
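For orientation, here's roughly how those two entries sit inside the file. The Preferences file is JSON containing many other keys that vary by Chromium version and profile; the surrounding braces below are illustrative, and only the two minimum-font lines are the actual addition:

```json
{
  "webkit": {
    "minimum_font_size": 14,
    "minimum_logical_font_size": 14
  }
}
```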
<h2>Comments</h2>
<div class='comments'>
</div>
Multiple Concurrent Linux Distributions in Debian Squeeze2011-02-28T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/02/multiple-concurrent-linux-distributions-in-debian-squeeze<div class='post'>
Ted Dziuba inspired me to try installing <a href="http://teddziuba.com/2011/01/multiple-concurrent-linux-distros.html">multiple concurrent Linux distributions</a> inside of an existing distribution. This is really handy for testing. Go try it (it won't take long). It's really awesome!<br /><br />Ted's instructions are rather minimal, although they are a great place to start. I also found the <a href="http://wiki.debian.org/Debootstrap">Debian Wiki - Debootstrap</a> page to be helpful.<br /><br />In my case, I'm running Debian 6 squeeze on my desktop machine. I've installed Debian 5 lenny and Ubuntu 10.04 lucid as chroot environments. If after installing a Debian release, when running apt, you get errors about not being able to authenticate packages, make sure you run "apt-get update" first. That will take care of it.<br /><br />There's no "debian-minimal" package like there is for Ubuntu but (as root in the chroot environment) a quick "apt-get update && apt-get install aptitude sudo nano" got me to a place where installing the things I want is easy.<br /><br />One thing to be careful of: your user is still your user inside the chroot environment and your home directory is still your home directory! If you delete stuff in your home directory, it really will be gone, whereas if you delete / modify stuff (inside the chroot) from /etc and other system directories, it's only within the chroot.<br /><br />If you'd like to use rinse to install RedHat based distributions that aren't included by default (no recent Fedoras are for Debian 6's version of rinse), take a look at the <a href="http://gitorious.org/rinse/rinse/trees/master">rinse Gitorious tree</a> and grab the more up-to-date /etc/rinse/rinse.conf and *.packages files. 
I'll be doing this shortly and I'll post a follow-up after installing some RedHat distros.<br /><br />See after the jump for the list of commands I used to install both lenny and lucid...<br /><br /><a name='more'></a>sudo mkdir -pv /opt/chroot/lenny<br />sudo debootstrap --variant=buildd --arch amd64 \<br /> lenny /opt/chroot/lenny/ http://mirror.rit.edu/debian/<br />sudo mkdir -p /opt/chroot/lucid<br />sudo debootstrap --variant=buildd --arch amd64 \<br /> lucid /opt/chroot/lucid http://mirror.rit.edu/ubuntu/</div>
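Entering the environment after debootstrap finishes looks roughly like the following. This is a sketch, not from the original post: the bind mounts are commonly needed so package postinst scripts and tools that poke /proc or /dev behave, and the path matches the lenny install above:

```
sudo mount -t proc proc /opt/chroot/lenny/proc
sudo mount -t sysfs sysfs /opt/chroot/lenny/sys
sudo mount --bind /dev /opt/chroot/lenny/dev
sudo chroot /opt/chroot/lenny /bin/bash
```

Remember to unmount these (in reverse order) after leaving the chroot, or the mounts will linger on the host.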
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
For Ubuntu lucid, when in the chroot, you may get some errors when running apt (or other programs). They'll look like this: "start: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused"<br /><br />To fix these issues with upstart (which are only evident in the chroot), check out this bug report: https://bugs.launchpad.net/ubuntu/karmic/+source/upstart/+bug/430224</div>
</div>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
If you use the Gitorious /etc/rinse/*.packages files to install Fedora 13, just be forewarned that you may have an issue when attempting to run yum. The issue will be the error: "Error: Cannot retrieve repository metadata (repomd.xml) for repository: fedora"<br /><br />In the /etc/yum.repos.d/fedora{,-updates}.repo files, simply comment out the mirrorlist line and put your favorite local Fedora mirror in for baseurl (with the actual path). This will make yum happy.</div>
</div>
</div>
Product Complements and FPGAs, They Don't Get It!2011-02-27T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/02/product-compliments-fpgas-they-don-t-get-it<div class='post'>
The other day, I was reading Joel Spolsky's <a href="http://www.joelonsoftware.com/articles/StrategyLetterV.html">blog post about product complements</a>. Joel states that, "Smart companies try to commoditize their products' complements." The complements of a product are the other things that people buy with that product. For example, operating systems and software complement PC sales. When you buy a PC, you'll also probably buy an operating system (even if it comes with the PC) and some software. In this case, the software vendors want to sell you software, so it's in their interest to make PCs into a commodity such that the price is as low as possible. For the most part, this has happened.<br /><br />One interesting side effect of thinking this way is you start to look at other companies and industries and notice how they're not doing this at all. Like in FPGA land. Really, there are two big players: Altera and Xilinx. Neither of them is trying to commoditize their products' complements, and for what it's worth, I can't tell which of their products is the product and which is the complement.<br /><br />With FPGAs you really need two things: the physical part and the software to realize a design. In either case, both Altera and Xilinx will charge you. You have to pay for the parts (which are really expensive if you buy them from online retailers like DigiKey but are much more reasonably priced direct or from distributors in volume) and you have to pay for the synthesis software! Some will argue that you don't have to use just Altera's or Xilinx's synthesis software; other vendors will sell you software that will work, but that software isn't cheap either. (For the companies selling just synthesis software, the physical chip is the commodity complement, which kind of works, but not well enough.)<br /><br />If you assume that Altera and Xilinx want to sell you physical chips, they should be driving the synthesis software to be a commodity and be low priced. 
If they want to sell you software, they should be working to make their chips as commodity-like as possible (easy to switch from one company's chips to the other). But they aren't doing either of these things!<br />Yes, you can get free versions of Altera's and Xilinx's tools, but those tools are limited and "real" FPGA designers will probably need the full version of the tools.<br /><br />So what's my point?<br />My point is the FPGA landscape is F'ed up. They're missing out on gaining new customers who are very price-conscious (like hobbyists or bootstrapped start-up companies) but need advanced features in the synthesis software. There's going to be a huge market out there for hobbyist-type development for FPGAs soon (if it hasn't started already) and both Altera and Xilinx are going to miss out on it.<br /><br />I predict that as soon as there's an open source / free software program that can do decent FPGA synthesis, both Altera and Xilinx are going to feel the pain. I also predict that one of those little FPGA companies is going to reap huge benefits, because they'll be the ones behind it.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Low Cost ARM Computer2011-02-23T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/02/low-cost-arm-computer<div class='post'>
I was thinking about my <a href="http://bradfordembedded.blogspot.com/2011/01/low-cost-arm-fpga-computer.html">ARM + FPGA computer</a> idea some more. There's already a lot of competition in the single board computer space and adding just another entry there isn't really worthwhile on a small scale; the prices would just be too high to attract any users. However, there's currently a rather small market for single board computer kits with free-software-like ideals.<br /><br />What if the goals of the kit were to be a free hardware project? By free hardware, I mean as close to open source / free software ideals as possible. The specifications, schematics, BOM, layout, and manufacturing files would all be created using free software and available to everyone. If you wanted to build your own version of the computer, you could either take the files, modify them, and build one yourself; buy each component from the vendor of your choice; or buy pre-built models from a vendor. Different vendors could compete with a commodity (the free hardware design kits) but also have the freedom to modify the kits to add extra functionality, reduce costs, or change things altogether. It'd be a reference design but with a free hardware spin.<br /><br />Along with publishing the design of the hardware, there'd also be a book like <a href="http://trac.cross-lfs.org/">Cross Linux From Scratch</a> made specifically for the released designs. With the combination of free hardware and free documentation, it'd be an amazing teaching tool for people wanting to learn about embedded systems.<br /><br />Currently, projects like the <a href="http://beagleboard.org/hardware/design">BeagleBoard</a> offer some of their design files online for free, but you'll need expensive tools in order to edit them. There are free software tools out there that are capable (<a href="http://www.gpleda.org/index.html">gEDA</a>) along with lower cost tools (<a href="http://www.cadsoftusa.com/">Eagle</a>). 
If you want to attract hobbyists and very small companies to the embedded Linux market, using low cost or free software tools and providing high quality documentation is the way to go.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
KDE4 Sucks2011-02-12T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/02/kde4-sucks<div class='post'>
I upgraded to Debian 6 Squeeze last weekend on my desktop. I was very excited to get some more up-to-date packages (git, gcc, kernel, and chromium) but I'm disappointed by KDE4. Yes, I realize I'm a few years behind the rest of the world coming to this conclusion but that's the curse / blessing of Debian stable.<br /><br />I had been using KDE3 in Lenny. It did everything I wanted and generally kept out of my way. KDE4 is completely different, in a very bad way. My computer isn't slow, although it is 4 years old, but KDE4's out-of-the-box configuration was way too laggy.<br /><br />It's back to Gnome for me and I'm actually liking the configuration of it that comes with Squeeze. It does what I need, was quick / easy to configure to my liking, and it gets out of my way. That's how it <b>should </b>be.</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
Just read today about the Trinity project which is maintaining and updating / improving the KDE 3.5 codebase. It's not an official part of the KDE project, but is definitely worth checking out. They have repositories set up for RHEL, Debian, Ubuntu, Fedora, and Slackware to make installation easy.<br /><br /><a href="http://www.trinitydesktop.org/" rel="nofollow">http://www.trinitydesktop.org/</a></div>
</div>
</div>
Pick an ARM ABI When Building GCC2011-02-04T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/02/pick-an-arm-abi-when-building-gcc<div class='post'>
If you follow the <a href="http://cross-lfs.org/view/clfs-embedded/arm/">CLFS embedded book for ARM</a>, you'll see that your ABI choice isn't used until compiling packages (i.e., after you've built a cross-GCC). What you will also find is that if you want to use the aapcs-linux ABI (EABI) and you choose your target triplet to be arm-unknown-linux-uclibc (like the book says), you'll have issues.<br /><br />Your issues will begin with configuring e2fsprogs using the $BUILD variable and will look like this:<br />error: FPA is unsupported in the AAPCS<br /><br />That's a very unhelpful error message. What you want it to tell you is why the heck the configure script decided to use FPA and AAPCS together if they obviously don't work.<br /><br />What happened was you chose your target triplet to be a triplet that GCC's configure thinks uses the OABI, but now you're building some software and telling GCC to use the EABI. Rather than spit out "Hey, you built GCC with the OABI and now you're trying to use the EABI!" it just complains about FPA.<br /><br />To solve this, if you plan to use the EABI, set your target triplet to be arm-unknown-linux-uclibcgnueabi. Then when you build GCC, pass it the --with-abi=aapcs-linux switch. This will produce a GCC that builds for the EABI.<br /><br />This caused me quite a few hours of banging my head on my keyboard. The <a href="http://buildroot.uclibc.org/">Buildroot</a> people know this, and it's included in their build scripts, but searching Google was not very helpful.</div>
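As a sketch of where those two choices land, the cross-GCC configure invocation ends up containing something like the following. The build directory layout is hypothetical and the many other CLFS configure switches are elided; the triplet and the --with-abi switch are the two pieces named above:

```
../gcc-4.5.2/configure --target=arm-unknown-linux-uclibcgnueabi \
    --with-abi=aapcs-linux [...other CLFS configure options...]
```

With the gnueabi triplet and --with-abi=aapcs-linux agreeing, GCC's configure picks the EABI defaults and the FPA/AAPCS conflict never comes up.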
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
This (both my post and comment) has since been fixed! :)</div>
</div>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
You'll also need to make sure you pick the EABI for uClibc. The present patches only contain an OABI uClibc configuration. "make menuconfig" will help out there until it gets updated.</div>
</div>
</div>
My Embedded Cross Linux From Scratch Git Repo2011-01-29T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/01/my-embedded-cross-linux-from-scratch-git-repo<div class='post'>
I was maintaining a Git repository that mirrored the embedded CLFS book's master and my own changes on <a href="https://github.com/bradfa/clfs-embedded">github</a> but I've now got <a href="http://git.cross-lfs.org/?p=abradford/clfs-embedded.git;a=summary">my own repo</a> on the actual CLFS server. If you'd like to follow it, feel free.<br /><br />Clone the book from my repo:<br />$ git clone git://git.cross-lfs.org/abradford/clfs-embedded.git<br /><br />Build instructions to make the book into HTML and other formats are included. For the impatient, create the HTML version:<br />$ make BASEDIR=<i>[where you want them to go]</i><br /><br />Check out the <a href="http://trac.cross-lfs.org/">main CLFS page</a> and participate on the <a href="http://trac.cross-lfs.org/wiki/lists#MailingLists">mailing lists</a> if you have issues.<br /><br />EDIT: If you'd like to simply browse my repo via a webbrowser, check out:<br /> <a href="http://git.cross-lfs.org/?p=abradford/clfs-embedded.git">http://git.cross-lfs.org/?p=abradford/clfs-embedded.git</a></div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>israr</div>
<div class='content'>
OK, fine, I got it Andrew. Now I will start right from scratch from your book; let's hope that I can do it, because in the online book I'm stuck at the GCC-final build. It is so frustrating. Anyway, thanks a lot for the reply, Andrew.</div>
</div>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
israr, the main difference between my repository and the main repository or the online edition of the book is that mine usually contains a few additional updates. Every so often, my changes will get pulled into the main book.<br /><br />You can see the difference between my repository and the main repository by looking at the git log. If you view the web version of my repo's git log, the "mainline" branch is the current mainline book. My book is at the "master" branch. Everything in between is how my repo is different.<br /><br />Link to my repo:<br /><a href="http://git.cross-lfs.org/?p=abradford/clfs-embedded.git" rel="nofollow">http://git.cross-lfs.org/?p=abradford/clfs-embedded.git</a></div>
</div>
<div class='comment'>
<div class='author'>israr</div>
<div class='content'>
Hello Andrew! Another night gone making it into HTML, but I succeeded.<br />Now one question:<br />-> Is there any significant difference between your book and CLFS's, apart from some packages being newer?</div>
</div>
<div class='comment'>
<div class='author'>israr</div>
<div class='content'>
Hello Andrew!<br />Could you please guide me on where I can get the BOOK in HTML/PDF format? I spent my whole night installing.<br />If you want to convert the XML to HTML, install the following components:<br />* libxml2 * libxslt * DocBook DTD * DocBook XSL<br />* HTMLTidy * lynx * JDK * FOP and JAI<br />Some of them are installed in Ubuntu 10.10 by [apt-get], but some are not. Thanks in advance, please help.</div>
</div>
</div>
Confusion When Cross Compiling GCC --without-headers and --with-newlib2011-01-29T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/01/confusion-when-cross-compiling-gcc-without-headers-and-with-newlib<div class='post'>
When building a GCC cross compiler, you're going to run into GCC's configure options named --without-headers and --with-newlib. If the configure script help isn't very helpful, check out this <a href="http://gcc.gnu.org/ml/gcc-help/2009-07/msg00373.html">GCC help mailing list thread</a> and read the <a href="http://gcc.gnu.org/install/configure.html">online help</a> for the configure script.</div>
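To make the distinction concrete: a "stage 1" bootstrap GCC, built before any C library exists for the target, is typically configured with both switches at once. The sketch below is illustrative only; the triplet and surrounding flags are my assumptions, not a complete recipe from the linked thread.

```shell
# Sketch of a stage-1 bootstrap GCC configure (flags are illustrative).
TARGET=arm-unknown-linux-uclibcgnueabi

# --without-headers: no target libc headers exist yet, so don't look for them.
# --with-newlib: build libgcc without assuming glibc-only facilities exist.
CMD="../gcc/configure --target=${TARGET} \
  --without-headers --with-newlib \
  --disable-shared --disable-threads --enable-languages=c"
echo "$CMD"   # printed for illustration; run inside a real build directory
```

Once this minimal compiler has built the C library's headers and startup files, a second, full GCC gets configured without these two switches.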
<h2>Comments</h2>
<div class='comments'>
</div>
Next Steps - January 20112011-01-23T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/01/next-steps-january-2011<div class='post'>
Some things I'd like to do next:<br /><br /><ol><li>Compile a spreadsheet of all the major parts on the BeagleBoard-xM with links to the part product page and datasheet.</li><li>Understand why the $BUILD variable gets set to the -mabi= option in CLFS embedded section 6.3 and what the impact of this is. Should it get set when building GCC rather than for each package? See <a href="http://trac.cross-lfs.org/ticket/620">trac ticket 620</a>.</li><li>Set up a git repo locally and follow the CLFS embedded book within the repo to enable me to better react to and understand changes. In this way, if some part of the book changes, I can go back to a tag just before the change and then be able to tell what the change has done. I'll also be able to get a better idea of what files get put in what places as parts of the CLFS embedded book don't have good info on this.</li></ol></div>
<h2>Comments</h2>
<div class='comments'>
</div>
BeagleBoard-xM Datasheets2011-01-23T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/01/beagleboard-xm-datasheets<div class='post'>
I've compiled a list of all the major BeagleBoard-xM parts and links to the part product pages (and to the part datasheets) in a <a href="https://spreadsheets.google.com/ccc?key=0AlTRzDMNNOQkdFNJMnhzX2FPekZrdl9FZng4eFhFRGc&hl=en">Google spreadsheet</a>.<br /><br />You can find the schematic, BOM, system reference manual, and layout files on <a href="http://beagleboard.org/hardware/design">beagleboard.org/hardware/design</a>.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Micron Is Annoying2011-01-21T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/01/micron-is-annoying<div class='post'>
I'm attempting to compile all of the datasheets for the parts used on the BeagleBoard-xM, but Micron is being unhelpful. Every part used on the BeagleBoard-xM has a publicly available datasheet except the package-on-package Micron memory.<br /><br />The Micron part number is MT46H128M32L2KQ-5 but Micron won't let me download the datasheet unless I create an account (so I did) and then explain my expected development time, expected product launch time, expected monthly quantities, and provide an explanation of my design (which I'm not designing since I'm just looking for info on a board I bought). So I put some values in and now my "request has been forwarded to the appropriate product group."<br />WTF does that mean? <br /><br />Micron, this is stupid! It's a memory chip. Just make the datasheet available to everyone. There's no way any of your intellectual property will get leaked through a datasheet. Just stop with this nonsense.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Low Cost ARM + FPGA Computer?2011-01-21T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/01/low-cost-arm-fpga-computer<div class='post'>
What if there was a low cost, small, power efficient ARM processor based computer with a reasonable FPGA attached to it? I'm thinking 600 MHz ARM Cortex with an Altera Cyclone4. What if this board could be powered by <a href="http://en.wikipedia.org/wiki/Power_over_Ethernet">Power over Ethernet (PoE)</a>? What if the ARM processor could reprogram the FPGA over JTAG?<br /><br />As long as the interface between the ARM and FPGA was fast enough, the types of things this computer could do would be rather vast. Some computations are much faster in "dedicated" silicon (an FPGA isn't really dedicated silicon, but for the money it's close enough) than in general purpose processors. For example, crypto processing is dramatically faster in "dedicated" silicon than in a general purpose processor.<br /><br />What if you could have 47 of these little ARM + FPGA computers plugged into a 48 port PoE switch? (Or 15 in a 16 port switch?) The switch would power them and allow them to communicate. A desktop PC could coordinate the operations of the boards and feed each one the data for computations. It'd basically be a mini cluster of FPGAs that could scale and be affordable. It'd also be easy to wire compared to normal computers as there's only one wire per ARM + FPGA computer (the Ethernet cord provides both communications and power). And if you can't afford to buy a lot of nodes, you can just buy a few and expand when you can afford it. And if newer versions come out, you can add them to your cluster.<br /><br />There's lots of software out there to coordinate clusters of computers, this is nothing new. But currently a lot of the crypto busting computing power that's available comes as PCI-e cards with very expensive FPGAs attached. 
There's also the problem of scaling a PCI-e based solution: once you run out of PCI-e slots, you need another expensive computer, and you'll still need coordination software and expensive networking if you have more than one PC.<br /><br />If the ARM + FPGA computer could cost around $100, come with great documentation and <a href="http://en.wikipedia.org/wiki/Free_software">libre licenses</a> for the software and FPGA code, plus offer professional support, I wonder if anyone would buy it...</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
An ARM processor like TI's <a href="http://focus.ti.com/docs/prod/folders/print/am3505.html" rel="nofollow">AM3505</a> 600 MHz Cortex-A8 that costs $13.75 (USD) in quantities of 1000 (or $22 in qty of 1), has an Ethernet MAC, USB, SPI, DDR2 controller, and comes in a 1mm pitch BGA package, would be awesome for this computer.<br /><br />Throw 1 GB of DDR2, an Ethernet PHY, and a quality FPGA on there and we're off to the races! :)</div>
</div>
</div>
It's Alive! (The BeagleBoard-xM That Is)2011-01-12T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/01/it-s-alive-the-beagleboard-xm-that-is<div class='post'>
My BeagleBoard-xM lives!<br /><br />Granted, using the provided Angstrom distribution, but that's a good first step (and it verifies my hardware is working properly).<br /><br />I've made <a href="https://gist.github.com/777123">a public gist</a> for those interested in the output over the serial port (the DB9 style one) during boot. If you've got a BeagleBoard-xM and are using the supplied SD card image, on first boot, just know it takes forever and a day to read the ramdisk.<br /><br />Oh, and in case anyone wants to order a power supply, real serial cable, plastic standoffs, and plastic screws (to get it off the table), here's the <a href="http://www.digikey.com/">DigiKey</a> part numbers you'll need:<br />Power supply (5V @ 2.6A): T988-P5P-ND<br />3m male to female DB9 serial cable (straight through): AE9871-ND<br />M3, 15mm long, hex, threaded plastic standoffs (need 4): 25512K-ND<br />M3, 6mm long, flat head driver, plastic screws (need 4): 29341K-ND<br /><br />In all, the accessories should run about $23 (USD) plus shipping and tax. You don't need the USB OTG cable (besides, it can't supply enough juice to use the BeagleBoard-xM USB ports as a host).<br /><br />Now to build my own version of Linux for it!<br /><br />EDIT: Also of interest, <a href="https://gist.github.com/794308">a gist</a> of the /etc/fstab as found inside the Angstrom distribution that comes with the BeagleBoard-xM.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
The Tale Of Two ARMs - A Christmas Story2011-01-04T00:00:00-05:00hhttp://www.bradfordembedded.com/2011/01/the-tale-of-two-arms-a-christmas-story<div class='post'>
I got two ARM devices for Christmas! (OK, well, I have to share one of them with my wife...)<br /><br />My brother is awesome and got me and my wife an iPad. It's something I probably wouldn't buy for myself but since having it I think it's awesome. The reading electronic books part so far is my favorite although general web surfing, YouTubing, and Angry Birds aren't far behind. I'm currently reading <a href="http://producingoss.com/">Producing Open Source Software</a>, am planning to read <a href="http://www.network-theory.co.uk/docs/gccintro/">An Introduction to GCC</a>, and then I'll be on to <a href="http://www.amazon.com/gp/product/0596529686?ie=UTF8&tag=bradford07-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0596529686">Building Embedded Linux Systems</a>.<br /><br />My wife got me a <a href="http://beagleboard.org/">Beagle Board xM</a>! Something I've been wanting to get for a few months now. I'm very interested to see what the Beagle Board can do and how good the information is regarding how to do things with it. From what I've seen so far, there's a real lack of high quality and well written instruction on how to get Linux up and running beyond a simple precompiled distribution install. No one seems to have a walk through of building cross compilers, building the Linux kernel, and creating a file system. Maybe I'm not yet looking in the right parts of the Internet to find these things, but I think this could be a really cool thing to write about, in the way the Cross Linux From Scratch project does, but with the Beagle Board in mind so that it can be tailored to the hardware and be more of a hands on project that people can approach. (One of CLFS's shortfalls is that the embedded book is too general, there are so many varieties of each type of processor that it's hard to provide info for everyone who will read it.)<br /><br />What's really funny about these two gifts is how different they are yet how much inside is the same. 
It all comes down to the software and packaging. Obviously the Beagle Board doesn't care much about either since it's really just a low priced development kit, but both run a 1GHz ARM core processor, have graphics accelerators, network connectivity, flash storage, and some form of USB.<br /><br />I had an awesome Christmas. Now I need a serial cable and a 5V power brick! Digikey, here I come!</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>jefro</div>
<div class='content'>
Hi Andrew,<br /><br />Best places to look for static BeagleBoard information are http://beagleboard.org, of course; http://elinux.org/BeagleBoard ; and http://code.google.com/p/beagleboard/ . However, the best thing to do is to participate in the community, particularly the mailing list and IRC channel, both linked from http://beagleboard.org. It isn't excessively organized, but that's where the latest, best, and most personal help will come from.<br /><br />That being said - I have been working on a BeagleBoard book that is currently stalled. I'd be pleased to help you any way I can, for the simple cost of seeing what you need to learn. You can find me at jefro@jefro.net.<br /><br />Good luck & have fun! And please keep writing about your experiences, particularly what you plan to do with the board.</div>
</div>
</div>
Being Clear About The Work I Want To Do (Part 2)2010-12-21T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/12/being-clear-about-the-work-i-want-to-do-part-2<div class='post'>
A month ago, I <a href="http://bradfordembedded.blogspot.com/2010/11/being-clear-about-work-i-want-to-do.html">wrote a post</a> saying I want to be clear about the work I want to do and that I'd follow up every month or so. It's about that time.<br /><br />I'm doing something interesting. I'm learning. It's open source. There's, so far, no bureaucracy.<br />I'm not yet teaching and I'm still a long way off from making any money. <br /><br />I've started working on the <a href="http://trac.cross-lfs.org/">CLFS</a> (Cross Linux From Scratch) project. I've opened two tickets for changes that should be made to the embedded book and submitted patches. They're minor things, no one seems to have noticed yet, but I feel very proud of myself.<br /><br />I'm making progress and it feels really good.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
America. Yeah!2010-12-21T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/12/america-yeah<div class='post'>
I've been having conversations with various different people over the past few months about the state of America: our education system, economy, corporations, tax law, and politics. One trend I'm seeing from the people I have conversations with (and both on TV and online articles) is that people are very pessimistic.<br /><br />I'm not pessimistic. I'm very optimistic.<br /><br />Maybe we don't have the highest ranking education system. Maybe we're outsourcing a lot of jobs to India and China. Maybe our political system is fractured and uncooperative. Maybe all this is true.<br /><br />But you know what? Everyone outside North America, even if only slightly, is envious of us. Maybe they're not envious of <b>all</b> our things, but we have something that's better, worth being envious of.<br /><br />We have the best sports leagues (NHL, NFL, MLB, NBA, and PGA). We have THE financial market (Wall St.). We have the best higher education system (everyone knows Harvard, Princeton, MIT, Yale, heck even my alma mater RPI). We have the biggest, most high tech companies (Google, Facebook, IBM, Apple, and Intel).<br /><br />Almost all of these things that we have, that are the best, are spreading world wide. It's a global economy and in order to grow, it's required. But they all started here and for the most part are based here, in the USA.<br /><br />This is the reason so many people still flock to the US. We're still the land of opportunity, of the free, and where dreams can come true. It's why we have an illegal immigrant "problem." So many people want so badly to come here that they're willing to break the law, hide, and do jobs for very little pay with horrid work conditions.<br /><br />Stop being pessimistic. It'll only hold you back from greatness, especially in America.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Gawker Got Hacked... I Dislike Online Accounts2010-12-14T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/12/gawker-got-hacked-i-dislike-online-accounts<div class='post'>
<a href="http://lifehacker.com/5712785/">Gawker got hacked</a> over the weekend and had all their users' data stolen, including encrypted passwords. Awesome.<br /><br />Gawker at least emailed all the people who had accounts letting them know that this happened, which is nice of them. I didn't even realize I had a Gawker account, but I do. I've changed my password there as well as my Google and Amazon passwords. I don't have any idea what password I used for Gawker but figure I should be cautious since I did use my real Gmail address for the Gawker account. All my banking stuff uses usernames and passwords I don't use anywhere else so I'm not concerned about those.<br /><br />I'm thoroughly annoyed by this. Generally I don't like getting accounts at tons of websites, mostly because I don't want to remember all the logins. On my work laptop I use <a href="https://web.archive.org/web/20101210053633/http://passwordsafe.sourceforge.net/">Password Safe</a> so I don't have to remember any passwords or usernames, but I shouldn't have to.<br /><br /><a href="http://www.quora.com/What-s-wrong-with-OpenID">OpenID isn't the answer</a>. I don't have the ability to easily generate throw away email addresses (right now). Using <a href="http://gmailblog.blogspot.com/2008/03/2-hidden-ways-to-get-more-from-your.html">Gmail's "+" option</a> doesn't work on 90% of the sites I've tried. So far, just not having accounts is the only answer. But not having accounts is actually fairly difficult these days.<br /><br />I'm thoroughly annoyed and I don't know how to fix this.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
My Inspiration2010-12-11T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/12/my-inspiration<div class='post'>
I like playing with computers. Professionally, that's all I really want to do.<br /><br />I've been playing with computers since 5th grade. My family bought an Apple Mac Performa 577 and I couldn't get enough of it. I bought huge books all about how the software worked and downloaded all sorts of things from AOL and then the Internet. I just had to play and learn.<br /><br />In high school I attended a vocational program for computing. Most people thought it was odd that I'd attend vocational school, as I had good grades and took honors courses, and that just wasn't something people thought went together. I loved it. I'm so glad I went to vocational school. Everybody should go. Learning the basics of a trade is invaluable further on in life. And yes, basic IT tasks are basically a trade these days; that's what we learned (A+ certification, installing Windows, putting together computers, basic networking, etc).<br /><br />I used to be a die-hard Apple fan. I bought the first G4 Yikes! tower from our local Apple reseller in high school. I had read about the announcement of this model and just had to have one. I spent my entire savings from working that summer to get it. I got the 400 MHz one, before they downgraded the lowest model to 350 MHz. I felt special. 66 MHz PCI graphics! Yeah! :)<br />Apple gave me a free copy of OS9 because I answered a lot of questions on their online support forums. This was huge for me. I didn't have much money but I really wanted OS9. Getting a free copy built a ton of loyalty for me.<br />I then bought the OSX beta. It was slow as balls, but that was OK because I was on the cutting edge.<br /><br />In college I played with a lot of Linux but still had my G4. I went to opening day at the Crossgates Mall Apple Store in Albany, NY. I got a free t-shirt and that built even more Apple loyalty. 
I couldn't afford buying updates to OSX, so I installed Yellow Dog Linux on the G4 and it became my NFS server for my music collection.<br /><br />I then went on to build computers and install Linux on them. One winter when I came home for the holidays I literally had 8 computers packed in mom's minivan. It was awesome. I built Gentoo Linux from stage 1 on a 200 MHz Pentium system. It took days. It was so much fun. I ran Return to Castle Wolfenstein demo servers off my laptop and built my first ever computer from scratch just so I could play the demo on Linux. It was an amazing learning experience. I learned that the motherboard I got had a newer chipset whose IDE controller wasn't supported by the kernel Red Hat 7.2 shipped, so it got stuck having the processor clock everything in and out in PIO mode. The hard disk performance was atrocious. So I learned to build kernels.<br /><br />Fast forward about a decade. Now I work at a big Fortune 500 company writing .NET software, playing with RFIDs and smart cards, and soon I'll be diving into some Verilog. I'm basically an engineer who does DRM for physical things, but I can't really go into the details. It's interesting, but I want to get back to playing with computers and especially Linux. Up to the beginning of 2010, I was designing and building an embedded XML-RPC server in C++ for a PowerPC board and the desktop Java app to control automated board testing. That was a cool job for the technical things, but the business side of things seemed to be turning for the worse and I jumped to the division I'm in now.<br /><br />These days, I want to play with embedded Linux. It's cool. It runs a whole ton of stuff. It's inexpensive and there seems to be a real dearth of quality people out there who know what they're doing with it and are willing to share that knowledge for a reasonable price. I want to become one of the people who knows what they're doing and then teach other people those same skills. 
Along with my continued interest in many things related to computing and Linux, this is my inspiration for writing a book about embedded Linux and for wanting to help with the Cross Linux From Scratch project.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Absolute FreeBSD and Building A Server With FreeBSD 72010-12-04T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/12/absolute-freebsd-and-building-a-server-with-freebsd-7<div class='post'>
Two books on my bookshelf that I read over a year ago are <a href="http://www.amazon.com/Absolute-FreeBSD-Complete-Guide-2nd/dp/1593271514?ie=UTF8&tag=bradford07-20&link_code=btl&camp=213689&creative=392969" target="_blank">Absolute FreeBSD: The Complete Guide to FreeBSD, 2nd Edition</a><img alt="" border="0" height="1" src="http://www.assoc-amazon.com/e/ir?t=bradford07-20&l=btl&camp=213689&creative=392969&o=1&a=1593271514" style="border: medium none ! important; margin: 0px ! important; padding: 0px ! important;" width="1" /> by Michael Lucas and <a href="http://www.amazon.com/Building-Server-FreeBSD-Bryan-Hong/dp/159327145X?ie=UTF8&tag=bradford07-20&link_code=btl&camp=213689&creative=392969" target="_blank">Building a Server with FreeBSD 7</a><img alt="" border="0" height="1" src="http://www.assoc-amazon.com/e/ir?t=bradford07-20&l=btl&camp=213689&creative=392969&o=1&a=159327145X" style="border: medium none ! important; margin: 0px ! important; padding: 0px ! important;" width="1" /> by Bryan Hong. I bought them at about the same time and read them together. I wanted to use FreeBSD on my desktop machine, and I was setting up a virtual private server (VPS) on <a href="http://www.rootbsd.net/">RootBSD</a> to serve web pages and mail and to act as a network time protocol server. These two books, although both published by <a href="http://nostarch.com/">No Starch Press</a> and about the same operating system, are very different in both content and style.<br /><br />Absolute FreeBSD is a thick tome covering everything about FreeBSD that an aspiring computer savvy person would want to know. Basically, you could read Absolute FreeBSD, never read the official <a href="http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/">FreeBSD Handbook</a>, and learn very similar information all while building a computer system that can serve web pages, DNS, mail, and many users. Michael Lucas is a great technical writer. 
He keeps things light when describing situations but then provides just enough technical detail about how things are supposed to work, interspersed with step-by-step instruction, to assist you with setting up your own server. It's one of the best-written books about computing that I've ever read. I thoroughly enjoyed reading it and putting the information to practical use. It's an awesome book and I learned not only how to set up a FreeBSD server, but also why certain things work the way they do (especially DNS, which is rather difficult to explain). This book will be a lasting reference on my bookshelf that I will refer to in the future, even when doing non-FreeBSD things (like DNS), to help refresh my memory.<br /><br />Contrast this with Building a Server with FreeBSD 7, which is purely a step-by-step (literally) manual on installing and configuring often-used software packages on a FreeBSD 7 system. There's very little, if any, explanation of why you should set up configuration files in the specified ways, why certain commands are issued the way they are, or how to do anything beyond the basics of installing and getting said software running. I'm sad that I spent money buying this book; I could have gotten just as much out of it by borrowing it from a friend or library. I only used it once and now some of it is already outdated, because the software packages described are out of date or because FreeBSD has moved on (already being at version 8.1).<br /><br />The way these two books contrast each other is astounding. Most of the No Starch Press books I've read are written in a style much closer to Lucas's.<br /><br />One thing I definitely got out of these two books is that I want to be a writer like Michael Lucas.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
QEMU Booting CLFS PPC Issues2010-12-01T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/12/qemu-booting-clfs-ppc-issues<div class='post'>
I'm building version <a href="http://cross-lfs.org/files/BOOK/1.1.0/view/ppc/index.html">1.1.0 of the Cross Linux From Scratch (CLFS) PowerPC</a> system. I don't actually have any PowerPC hardware so I'm using <a href="http://wiki.qemu.org/Main_Page">QEMU</a>. I have built QEMU 0.13.0 from source because Debian Lenny only comes with version 0.9.1, which is a bit old.<br /><br />When booting my CLFS system in QEMU, this is my command line:<br /><blockquote>andrew@bigbox:~$ qemu-system-ppc -nographic \<br />-hda cross-lfs-ppc/disk.img -kernel \<br />cross-lfs-ppc/clfs-fs/boot/clfskernel-2.6.24.7 -M g3beige</blockquote>The switches used are:<br />-nographic -> QEMU should output to my terminal rather than VNC<br />-hda -> This is the file to use as disk /dev/hda<br />-kernel -> The kernel to use<br />-M g3beige -> QEMU should emulate a beige G3 PowerMac<br /><br />When I was building my CLFS system, I didn't follow the directions for Yaboot because I was under the impression that I could simply hand QEMU the kernel and root file system and it would be happy. When creating my disk image, I didn't create any partitions; everything's simply in hda. One of these two spots is probably the root of my problem. I'm going to try building Yaboot first, and if that doesn't fix it, I'll work on creating the partitions correctly within my disk image file.<br /><br />When booting, QEMU goes through its normal BIOS output, but after finding a display and building the device tree, it hangs with:<br /><blockquote>Calling quiesce ...<br />returning from prom_init</blockquote>After a few minutes, it will reboot and end up at the same place. The verbosity is not much help for debugging and Google doesn't seem to be my friend with this issue.<br /><br />I'll either comment on this post with my resolution or write another post detailing the fix.</div>
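One quick thing worth trying before rebuilding anything (this is an assumption on my part, not a confirmed fix): with no partitions in the image, the kernel may simply not know which device holds the root filesystem, and QEMU's -append switch can hand it an explicit root= on the kernel command line. Sketched below; the echo keeps it illustrative rather than executed here.

```shell
# Sketch: same invocation as above plus an explicit root device.
# root=/dev/hda matches an unpartitioned image; adjust if partitions exist.
KERNEL=cross-lfs-ppc/clfs-fs/boot/clfskernel-2.6.24.7
DISK=cross-lfs-ppc/disk.img
CMD="qemu-system-ppc -nographic -M g3beige -hda ${DISK} \
  -kernel ${KERNEL} -append root=/dev/hda"

echo "$CMD"   # run by hand; printed here for illustration only
```

If the hang persists with root= supplied, that would point back at the missing Yaboot/partition setup instead.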
<h2>Comments</h2>
<div class='comments'>
</div>
Being Clear About The Work I Want To Do2010-11-29T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/11/being-clear-about-the-work-i-want-to-do<div class='post'>
A Better Way of Work blog has a <a href="http://abetterwayofwork.com/?p=121070734">posting</a> saying you should make it clear what work you want to do. I'm going to do that and I'd like to be able to write additional posts under the "work" tag to update my status on this.<br /><br />I want to do work that:<br /><ul><li>Is interesting</li><li>Allows me to learn and to teach </li><li>Involves open source software or hardware</li><li>Pays well enough that I feel secure</li><li>Doesn't bog me down with corporate bureaucracy </li></ul><br />I'm willing to give up playing hockey. I've played for the past 22 years but I'd rather focus on the work I want to do. That will net me extra time in the evenings to spend with my family or to do my work.<br /><br />I'm willing to give up sleeping in. I'm trying to create the habit of waking up before 6am so I can work before going to my job. So far I've been successful at doing this 2 or 3 times per week. I'd like to do this more often. Yes, I get tired earlier, but I'm much more focused and productive early in the morning.<br /><br />My biggest fear is that I won't be able to do the work I want to do and still make enough money to be secure. I don't need to be rich but I do need to be able to provide for my family.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
LED Lighting to Replace Fluorescents, The Future!2010-11-20T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/11/led-lighting-to-replace-fluorescents-the-future<div class='post'>
I broke a fluorescent U shaped bulb yesterday evening in my kitchen. I was attempting to set it on the counter so I could remove the bulb that was in the fixture and it tapped the counter ever-so-slightly, shattering into a million pieces. I attempted to follow the <a href="http://www.epa.gov/cfl/cflcleanup.html">EPA guidelines for cleaning up</a> the mess, mercury and all, but it's not easy. We have hardwood-like floors in the kitchen, so after opening a bunch of windows and going into another room for a bit, I was able to scoop up most of the glass and powders that were near where the break occurred. But then there's glass everywhere, and I mean <b>everywhere</b>! Little tiny shards of it. It was a pain to clean up. After cleaning up, I proceeded to freak out about mercury. How much is in the air? What's too much? Am I breathing it? What's the impact on me, my wife, and our cats? I proceeded to clean the entire kitchen: floor, counters, and stove top, multiple times. Windows and doors were open with fans running full blast for about 3 hours. That should get it all out, right?<br /><br />Come today, I'm still paranoid. Who can I call about this on a Saturday? Well, the <a href="http://poison.org/">National Poison Center</a> is open 24/7, so I called them. A very professional woman answered all my questions, told me the way I cleaned up was fine, and reassured me that even cleaning up just one bulb completely the wrong way on wood flooring is nothing to get concerned over if I've had windows and doors open. It made me feel a whole lot better.<br />I still cleaned the kitchen again after hanging up. Just to be safe :)<br /><br />Why am I so concerned over this?<br />It's because there's no sure way to know what the heck is going on with mercury. 
I'm no expert but I do know about <a href="http://en.wikipedia.org/wiki/Restriction_of_Hazardous_Substances_Directive">RoHS</a>, and mercury is definitely one of the restricted substances (ironically with an exemption for fluorescent bulbs). It can't be good for you. But you can't see it (when it's vapor), smell it, taste it, or easily test what concentration it exists in with a DIY kit. Thus, I was paranoid about it.<br /><br />I don't want this to happen ever again. But how can I ensure that it doesn't?<br />Getting rid of all my fluorescent bulbs seems like a decent answer. But incandescents use quite a lot more electricity and don't really come in tube sizes, so they're not really an option. What about LED lights?<br /><br />LED lights aren't really easy to buy right now. Home Depot lists quite a few different types of bulbs (84 in the LED lighting section) but many are only available online and some of those listed are holiday lights. None of those listed are tube types that can replace fluorescent bulbs. All are expensive (think 10x the price of incandescents or CFLs) and generally have lower light output compared to fluorescent or incandescent bulbs. I searched around online and found some places selling tube type LED lights but every single one I found requires you to rewire the fixture to bypass the ballast and starter. Some even require an external power unit (I assume to convert the AC to DC) that has to be wired in. <br /><br />I can rewire my fixtures to support different types of bulbs (I'm an EE and I know my way around a wire nut) but I don't <b>want</b> to do this. I'm pretty sure nobody <b>wants</b> to do this; they just want to buy a light bulb and put it in. That's how light bulbs are expected to work.<br /><br />I think this is a market ripe for innovation. 
With upcoming US regulations requiring higher light output per watt, combined with people's dislike for CFLs (and fluorescent bulbs in general [warm up time, flickering, buzz, cold weather performance, lack of dimming, etc.]), the LED marketplace is going to experience rapid growth. Right now the growth seems to be in normal screw-in type bulbs, with some less elegant systems for tube bulbs.<br /><br />I'd expect the LED tube bulb market to be the largest market, yet there doesn't seem to be a (literal) drop-in tube LED bulb out there at any price. Schools, corporate facilities, and hospitals use tons of tube lights and are often concerned about mercury releases, energy consumption, and "green-ness." These same institutions also don't want to have to pay an electrician to rewire every single light fixture. Tube LED lights that are a drop-in replacement for tube fluorescent bulbs would sell like hotcakes! Even if there were a price premium, early adopters would pay extra to get the benefits. This would drive volume and technology development, ultimately bringing prices down.<br /><br />Maybe I should design a tube LED lighting system...</div>
<h2>Comments</h2>
<div class='comments'>
<div class='comment'>
<div class='author'>brian</div>
<div class='content'>
Yes, it's mostly due to cost. Look at plasma and LCD TVs: they use fluorescent backlights and they are cheap in comparison to the cost of an LED TV. In my opinion someday LEDs will be cheap enough to replace other light sources. They also last so much longer and use less power.</div>
</div>
<div class='comment'>
<div class='author'>Andrew</div>
<div class='content'>
I found one company that is making drop in tube LED lights: <a href="http://www.everled.com/" rel="nofollow">EverLED</a> in Vermont. They ask $129 for one 4' tube. That's a bit pricey but I did some research and it would be hard with today's white LEDs to make a profit and compete with tube fluorescent bulb light output for under $100 (retail).<br /><br />I still think this is an area that's ripe for innovation and I'd expect some creative thinking could get the prices lower even with today's LEDs.</div>
</div>
</div>
Computer Configuration User Interfaces Suck2010-11-17T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/11/computer-configuration-user-interfaces-suck<div class='post'>
When buying a computer, especially desktop or workstation class computers, why is it so annoying to configure the computer you want?<br /><br />For example, go to Dell.com's "Large Enterprise" store and find the lowest cost configuration that gets you a workstation computer with at least the following specs:<br /><ul><li>2 or more processor cores with VT extensions (of any speed)</li><li>4GB of RAM (with or without ECC and of any speed)</li><li>Either a 160GB 10k RPM or 128GB SSD hard disk drive</li><li>DVD+/-RW burner</li><li>Windows 7 32-bit Professional w/ XP mode</li></ul>It's so impossible it's not even funny. I attempted to wade through this exact situation at my Fortune 500 the other day and I was not a happy camper.<br />What's the difference between an Optiplex 980, 960, and 780? How do those compare to the Precision T1500 and T3500? Which one will let you get at least the above specs for the least money? Why the #%@$ can't I get a 256GB solid state drive in any computer I want?<br /><br />I have no friggin idea.<br /><br />A great idea would be a web 2.0 style drag-and-drop interface for configuring a PC. Along one side it would list all the different categories of things you can choose, like processors, RAM, disk drives, graphics, etc. You'd have a work area in the middle and you can drag any of the categories onto the work area. Once you drag a category onto the work area you can narrow your selection of available choices by putting constraints on the category, like I only want >2 cores. As you refine your search by adding constraints, options that aren't compatible with your constraints would simply not be available (like if you choose a constraint of ECC memory being required, you wouldn't be able to choose a Core2Duo processor anymore). 
As you refine your search by putting constraints on categories and adding additional categories to your work area, another column would list the number of configurations that fit your requirements, along with the highest and lowest prices. Most people would simply pick the lowest priced computer that met their requirements.<br /><br />This would be great for pretty much anyone who has specific requirements for their PC. It wouldn't be that hard to develop (at least compared to other configuration interfaces) and would probably get a lot of use by customers. I'd think someone like Dell could really reduce customer headaches by offering a system like this. It'd even help their internal sales people, especially when quoting larger orders for corporate clients.<br /><br />If no one has something like this next year, maybe I'll teach myself Ruby on Rails and some JavaScript and do it myself.</div>
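<p>The constraint-narrowing idea described above can be sketched in a few lines. Here is a minimal illustration in Python; the configurations, part names, and prices are made up for the example, not real Dell data:</p>

```python
# Sketch of constraint-based configuration filtering.
# All configuration data below is hypothetical, for illustration only.

configs = [
    {"model": "A", "cores": 2, "ecc": False, "disk": "160GB 10k", "price": 899},
    {"model": "B", "cores": 4, "ecc": True,  "disk": "128GB SSD", "price": 1299},
    {"model": "C", "cores": 2, "ecc": True,  "disk": "128GB SSD", "price": 1099},
    {"model": "D", "cores": 1, "ecc": False, "disk": "80GB 7.2k", "price": 499},
]

def narrow(configs, **constraints):
    """Keep only configurations satisfying every constraint.

    A constraint is either a required value (ecc=True) or a
    predicate (cores=lambda n: n >= 2)."""
    def ok(cfg):
        for key, want in constraints.items():
            have = cfg[key]
            if callable(want):
                if not want(have):
                    return False
            elif have != want:
                return False
        return True
    return [c for c in configs if ok(c)]

# "I only want >=2 cores, and ECC is required" -- incompatible
# options (models A and D) simply drop out of the result.
matches = narrow(configs, cores=lambda n: n >= 2, ecc=True)
cheapest = min(matches, key=lambda c: c["price"])
```

<p>Each added constraint only shrinks the candidate set, so the "number of matching configurations" and "lowest price" columns fall straight out of <code>len(matches)</code> and <code>min()</code>.</p>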
<h2>Comments</h2>
<div class='comments'>
</div>
Open Source (at my Fortune 500)2010-11-12T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/11/open-source-at-my-fortune-500<div class='post'>
At my Fortune 500 company the other day, there was a presentation given by some of the lawyers about open source software and how it can be used within the company, both for internal uses and in products. It was interesting and I'm glad I went, but it was held right before lunch and I was very hungry, so I only stayed for the first half.<br /><br />In the first half of the meeting the lawyers reviewed the requirements of open source licensing, such as the fact that with most licenses you must provide the same freedoms you received, and that open source licenses work just like closed source licenses: they rely on the same laws to be effective. This is interesting because one word that got a lot of attention is "distribution". Another topic that came up frequently is "viral" licenses that can "infect" other software (those are words from the presentation, not my words).<br /><br />What counts as distribution? Does passing a copy of software from one employee to another count as distribution? With pay-for software it often does, as you're expected to purchase a license for each person or machine that the software executes on. But sometimes with open source software it can be assumed that distribution means passing the software to another entity, be it another corporate entity or a customer. When I download a copy of Debian GNU/Linux from debian.org, that's clearly distribution because I'm not a part of Debian, I'm just a user of Debian. If I install Debian on my computer and my wife's computer, does that count as distribution? Do I then have to comply with all of Debian's licensing terms regarding distribution? It gets tricky: did I just "distribute" a copy to my wife?<br /><br />The "viral" part is also interesting. My company doesn't want to have to release large amounts of our currently closed source software to the public (or even just our customers) because we currently make a bunch of money selling said software. 
Should a "viral" license "infect" our closed source software (that is, should we ship a product that incorporates "virally" licensed code), we'd be obligated to give away the source to our closed source software, or at least part of it. It's a legitimate issue for closed source software shops that make money the Microsoft way, by charging for each copy sold. It also makes sense because some of our software does really neat stuff that not many competitors can do. It's what sets us apart and we don't want to give our competitors any advantage if we can help it.<br /><br />I understand that distribution is a strange term that may be defined differently by different people, and that makes it tricky. And I understand that "viral" licenses could force us to disclose source code that's currently secret. But what saddens me is that we're not changing our business model or internal practices (very much) to take advantage of open source. Not even in just a few of our products.<br /><br />Jeff Atwood <a href="http://www.codinghorror.com/blog/2010/09/because-everyone-needs-a-router.html">blogged recently</a> about how routers are becoming a commodity hardware platform because of things like DD-WRT. My company makes hardware and provides services for both our software and hardware (along with various other things that we do). It would be interesting to see my company (and / or competing companies) release source code to the public and focus on other avenues of making money, mainly service, support, and hardware. We'd have to keep ownership of the software in the sense that the Mozilla Foundation retains ownership of Firefox but allows the public to contribute.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
What's Going On in CLFS Chapter 5? [/tools and /cross-tools]2010-11-09T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/11/what-s-going-on-in-clfs-chapter-5-tools-and-cross-tools<div class='post'>
In building the CLFS PowerPC cross compiler, the instructions say to create both the /tools and /cross-tools directories (these are actually symlinks to directories of your choosing, but it's the easiest way to reference what I'm talking about). I didn't fully understand why this is the case, so I did some Googling and re-read parts of the CLFS book.<br /><br />It turns out it's briefly described across various sections, including: <a href="http://cross-lfs.org/files/BOOK/1.1.0/view/ppc/final-preps/creatingtoolsdir.html">4.2 (Creating the /tools Directory)</a>, <a href="http://cross-lfs.org/files/BOOK/1.1.0/view/ppc/final-preps/creatingcrossdir.html">4.3 (Creating the /cross-tools Directory)</a>, <a href="http://cross-lfs.org/files/BOOK/1.1.0/view/ppc/cross-tools/linux-headers.html">5.4 (Linux-Headers)</a>, and <a href="http://cross-lfs.org/files/BOOK/1.1.0/view/ppc/temp-system/introduction.html">6.1 (Introduction to chapter 6)</a>.<br /><br />The /cross-tools directory is where the actual cross compiler and its assorted "friends" will live. The /tools directory is where we're building a temporary system that can actually build a real system. The entire goal of chapter 5 is to build a GCC cross compiler that executes on your desktop PC and builds executables for some other computer system (in my case, my desktop's an x86_64 box running Debian Lenny and I want to create executables for PowerPC Linux).<br /><br />A fun problem is that in order to build a cross compiler, we need a compiler (and a bunch of other stuff). There may be a shorter method of achieving this, but I'll outline the CLFS way of doing things:<br /><ul><li>Starting off with section <a href="http://cross-lfs.org/files/BOOK/1.1.0/view/ppc/cross-tools/linux-headers.html">5.4 (Linux Headers)</a>, just the header files for a Linux kernel are installed into /tools/include. The headers will be needed later to build Glibc. 
</li><li>In section <a href="http://cross-lfs.org/files/BOOK/1.1.0/view/ppc/cross-tools/file.html">5.5 (File)</a>, we build and install File into /cross-tools (File lets us figure out what type any given file is). This version of File is a native application on our desktop PC and is used by other steps.</li><li>Section <a href="http://cross-lfs.org/files/BOOK/1.1.0/view/ppc/cross-tools/binutils.html">5.6 (Cross Binutils)</a> has us build a set of binutils; again, these are native to our desktop PC. No cross compiling yet. Binutils assists in compiling and linking programs. We'll use this set of binutils to build our cross compiler, GCC. Binutils gets installed into /cross-tools.</li><li>Now we get to some compiler compiling, section <a href="http://cross-lfs.org/files/BOOK/1.1.0/view/ppc/cross-tools/gcc-static.html">5.7 (Cross GCC - Static)</a>. Here we build a very simple, statically linked version of GCC (it includes all the libraries it needs within itself rather than referencing copies installed elsewhere) that can only compile C programs. GCC is actually built without any C library (Glibc) as we don't yet have one. GCC goes into /cross-tools.</li><li>Section <a href="http://cross-lfs.org/files/BOOK/1.1.0/view/ppc/cross-tools/glibc.html">5.8 (Glibc)</a> finally gets us to the C library. When configuring Glibc, we must tell the configuration script how to find all the different items we've already installed, as they're in non-standard locations. Glibc gets built using our statically linked GCC (which has just enough ability to do so) and our kernel headers, and is installed into the /tools directory. The kernel headers are needed because some of the C library's implementation involves making system calls, so the C library needs to know how to make those system calls.</li><li>Now we can finally create an actual cross compiler. 
Section <a href="http://cross-lfs.org/files/BOOK/1.1.0/view/ppc/cross-tools/gcc-final.html">5.9 (Cross GCC - Final)</a> finally has us build a real C and C++ cross compiler. This step uses the previously built statically linked GCC and Glibc to create a real, dynamically linked version of GCC. Now we have a cross compiler!</li></ul><br />I'm curious to see how much of what was used to build our cross compiler is still needed in order to complete the CLFS book. Maybe this is a job for git (another thing I'd like to learn).</div>
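<p>The ordering above is the crux of the bootstrap, and it's easier to see when condensed into one place. The sketch below is only an outline of the chapter 5 sequence; the real versions, paths, and configure flags come from the CLFS book itself, and the values shown here are simplified illustrations, not copy-paste-ready commands:</p>

```shell
# Illustrative outline of the CLFS chapter 5 bootstrap order.
# Paths and flags are simplified; follow the book for the real values.
export CLFS=/mnt/clfs
export CLFS_TARGET=powerpc-unknown-linux-gnu

# 5.4: install only the kernel header files into /tools
make ARCH=powerpc headers_install INSTALL_HDR_PATH=/tools

# 5.6: binutils that run natively but *target* PowerPC, into /cross-tools
../binutils/configure --prefix=/cross-tools --target=$CLFS_TARGET \
    --with-sysroot=$CLFS --disable-nls

# 5.7: minimal static C-only GCC (no Glibc exists yet to link against)
../gcc/configure --prefix=/cross-tools --target=$CLFS_TARGET \
    --disable-shared --disable-threads --enable-languages=c

# 5.8: Glibc, cross-compiled with the static GCC, into /tools
CC="$CLFS_TARGET-gcc" ../glibc/configure --prefix=/tools \
    --host=$CLFS_TARGET --with-headers=/tools/include

# 5.9: the final, dynamically linked C/C++ cross compiler
../gcc/configure --prefix=/cross-tools --target=$CLFS_TARGET \
    --enable-shared --enable-languages=c,c++
```

<p>Each stage only needs what the previous stages produced, which is why the headers come first and the "real" GCC comes last.</p>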
<h2>Comments</h2>
<div class='comments'>
</div>
Cross-Compiled Linux From Scratch2010-11-08T00:00:00-05:00hhttp://www.bradfordembedded.com/2010/11/cross-compiled-linux-from-scratch<div class='post'>
I'm following the <a href="http://cross-lfs.org/files/BOOK/1.1.0/CLFS-ppc.html">Cross-Compiled Linux From Scratch (CLFS) version 1.1.0 book for PowerPC</a>. I'm using this to make sure I understand what's going on with cross compilation and setting up a basic Linux system for a different architecture.<br /><br />I've previously followed the CLFS book for creating x86 and x86_64 based cross compilers for PowerPC Linux at my day job, but I never really understood why I did each step in the process. I also do not have much experience with a lot of the tools used, such as ar, as, sed, and qemu (as I don't actually own any PowerPC hardware).<br /><br />My goal is to first understand the entire CLFS process and be able to either commit helpful changes back to the project or to write additional documentation about how and why the CLFS project book has each step in it. I consider myself to be a decent Linux user but there are so many widely different things that can be done with Linux that it's difficult to be excellent at all of them.<br /><br />About 8 years ago I was a Gentoo zealot and through my adventures in installing and maintaining Gentoo boxes I learned a lot about Linux systems, but I wouldn't consider myself an expert [I've since reformed and I'm no longer a zealot, or a Gentoo user ;)]. I'm hoping to move closer to expert status by diving into the CLFS project and I want to be able to explain the entire process. I think it's very important to have an understanding of how things go together to create a Linux "distribution" (in quotes because CLFS isn't really a distribution as much as a recipe).</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Short and Sweet, But Not Always2010-11-06T00:00:00-04:00hhttp://www.bradfordembedded.com/2010/11/short-and-sweet-but-not-always<div class='post'>
Jason @ 37signals posted a <a href="http://37signals.com/svn/posts/2647-the-class-id-like-to-teach">blog entry about how he'd like to teach a class</a> where the goal is to learn how to edit complex ideas into various sized chunks that are still meaningful.<br /><br />I think that's awesome! I would love to improve my ability in this area. I tend to ramble on about things when writing and often get off my original train of thought. I end up with long "papers" about things I was excited about when I began writing. Other people do this too, and I get turned off by long essays - even on things I'm interested in - if my mood isn't ready to read something long and detailed.<br /><br />Having the choice to get different sized versions of the same material is a sweet idea for marketing a product that conveys an idea. Offer different price points that coincide with different sized versions of the idea and the customer decides which version / price / size they want based on their own factors. This could even be applied such that the customer can "buy" the lower priced but shorter version as a trial (maybe for free?) and then opt to get a discount on a longer version that has more detail.<br /><br />Or this could revolutionize the "news" industry. Currently the fad is Twitter or articles, and often they overlap. What's missing is the sizes between 140 characters and full blown articles. Maybe another part that's missing is the true long form version where detailed analysis and further investigation takes place.<br /><br />If I ran a newspaper and I wanted to survive in today's world, I'd make an iPad / iPhone / Android app that delivers news in this way. 
Then have it automatically pick articles you'd be interested in (a la Google News) based on browsing habits and present a 2-3 sentence or Twitter'ized version aggregated on a home screen but also pick the size you'd be interested in when you go to view the article with the option to see or "buy" other sizes of the same article. Google News meets Twitter meets traditional news reporting meets LexisNexis.<br /><br />It's MY SIZED news. That's a good idea.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Revolution OS2010-11-02T00:00:00-04:00hhttp://www.bradfordembedded.com/2010/11/revolution-os<div class='post'>
I just finished watching <a href="http://www.imdb.com/title/tt0308808/">Revolution OS</a> on Netflix. It was actually pretty good. I'd be curious to see an updated version made today. Things have changed but they're still going in the same direction they were when the movie was made.<br /><br />Even though I've been using Linux and BSDs for about 11 years, I didn't really know the faces or voices of the people who've made it all happen. It was cool to see and hear Richard Stallman and Linus. I've known who Bill Gates is; he was often on TV back when he ran Microsoft. And Steve Ballmer throws chairs, so I know who he is too (plus he's really intense looking). It's funny that I don't care for Microsoft that much, yet I know the guys who made it happen. With GNU/Linux (to please Richard, who makes the point over and over in the movie [probably in real life too]) I had no real idea who the people behind it were, other than the names.<br /><br />I'm glad I watched Revolution OS. If you've got an hour and 20 minutes to kill, it's worth a viewing if you're interested in free / open source software and Linux.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
Rework, Drive, and Linchpin2010-11-01T00:00:00-04:00hhttp://www.bradfordembedded.com/2010/11/rework-drive-and-linchpin<div class='post'>
I have just finished (in this order) reading <a href="http://www.amazon.com/gp/product/0307463745?ie=UTF8&tag=bradford07-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=0307463745">Rework</a> by Hansson and Fried, <a href="http://www.amazon.com/gp/product/1594488843?ie=UTF8&tag=bradford07-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=1594488843">Drive</a> by Dan Pink, and <a href="http://www.amazon.com/gp/product/1591843162?ie=UTF8&tag=bradford07-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=1591843162">Linchpin</a> by Seth Godin.<br /><br />I really enjoyed Rework. It is part of the reason I started this blog and want to get back into Linux. I'm not yet sure how I can make decent money off Linux with the skills I have, but I'm excited to try. And Rework, combined with reading a lot of things written by VCs, has given me a good feel for the tech startup landscape. Mostly that there's no one right way to start a business, but without trying, it won't work.<br /><br />Drive was also very good, but a little more wordy than Rework. Also, before I read Drive, I watched some of Dan Pink's presentations on YouTube. Basically what he says in the book is what he says in the videos online. That was somewhat of a letdown for me, but the book is still worth reading as he is able to go into more detail with his examples.<br /><br />Linchpin kind of let me down. It's very wordy for the message. I liked the message, but it just takes too long for Godin to get it across. A book with half the number of pages would have gotten the message across just as well, if not better.<br /><br />I'd recommend both Rework and Drive, especially if you're at a job where you feel something is missing but you're not sure what. If you're talented and capable but not reaching your full potential at work, these two books will speak to you. Linchpin will speak to you too, but it's annoying and less inspirational.</div>
<h2>Comments</h2>
<div class='comments'>
</div>
I'm Writing a Book2010-10-26T00:00:00-04:00hhttp://www.bradfordembedded.com/2010/10/i-m-writing-a-book<div class='post'>
I'm writing a book about embedded Linux but I'm not going to compete with traditional technical books. O'Reilly isn't my competition. (I personally really like many O'Reilly books and how they're often written by open source contributors.)<br /><br />My competition is the often outdated and unfriendly, hard to <u><b>use</b></u> technical resources such as the web, mailing lists, wikis, forums, and yes, books.<br /><br />I want to teach people how to develop embedded Linux systems. I don't just want to provide a step by step here's <u><b>how</b></u> you put together an embedded Linux system, although I'm going to do that. I want to provide the how, but right behind that I'm going to provide the <b><u>why</u></b>.<br /><br />Why does the manual tell me to do this step and then that step? What's going on under the covers? What if I (the reader) want to do something similar to this but changing things X, Y, and Z?<br /><br />I'm going to answer those questions. You (the reader) are going to help me.<br /><br />The book's going to be inexpensive (not free). There's going to be online collaboration (think Google Groups or a forum). The book's going to take on the mantra of release early, release often. Trees are going to die, but the book will be available in electronic formats too.<br /><br />I think it's going to be amazing! I hope you will, too.</div>
<h2>Comments</h2>
<div class='comments'>
</div>