Humble (Apple) Pie…

Backstory

You may have read my previous post about not liking the new Apple MacBook Pro with TouchBar… it was fairly “screw them, they aren’t pro any more” and a bit of “I can get more for less elsewhere, vote with your wallet people!” … well… yeah… turns out I was wrong. Put the resistance on hold.

Part of my move away from MacOS (I don’t like the small m Apple, please stop it) was going to involve having a dedicated Linux machine with a native install, as I did a lot of development using Ubuntu and had become really comfortable using the OS and thought I could use it as a daily driver. I still believe I could do that, and I could live with the small issues I had if the hardware worked a treat. But it didn’t …

Now, I have a big 27″ iMac with a 5K display, so I don't need another desktop (it's a gorgeous machine), but I wanted portable power. I also didn't want a 15″ beast; I wanted as capable a machine as possible in a modern 13″ enclosure (which is closer to the 11″ MacBook Air of old). The emphasis is on the LAP in laptop for me here. That means the trackpad has to work really well… I had so many issues with the trackpad on the Linux machine that I ended up sitting at my desk and using a mouse with it. I had hacked lots of files and tweaked lots of settings, and got it as usable as I could (why anyone would enable top left/right of the pad for copy/paste by default baffles me), but frequently it just shit its pants and became annoying, missing clicks, losing the cursor and pissing me off. Fine… I've got Windows 10 on here, I'll use that… Nope.

[Image: accurate depiction of me using the Dell for half an hour]

Windows Update swooped in and broke my graphics drivers, meaning the display was totally ruined; I had to roll back the drivers and fight with it to ignore that update. Which pisses me off, because I like to keep everything as up to date as possible (if you have 24 app updates pending on your phone… I will try to update them when you aren't looking).

It was at this stage that I thought… this was £1,500 worth of laptop… I shouldn't have to fight this hard with it! That was my hard-earned cash. I was talking to a friend and the conversation veered into a rant on my part; by the end of it, without him having suggested anything, I had decided to sell the thing and order something else… something Appley.

Decision to change

I took a hit and sold it on eBay for £1,000, which was a £300 loss (I saved £200 off retail when I bought it), and ordered about £1,800 worth of 13″ tbMBP: 16GB RAM, 3.1 GHz dual-core i5 (6th gen) and 256GB of SSD storage. Nowhere near as powerful as the Dell offering, but wow, what a breath of fresh air to use after fighting for months.

[Image: a very satisfied seal, for when you move your finger on the trackpad and the cursor follows]

I don't even feel bad about it. I really gave the Dell a chance, in both Windows and Linux; I fought hard with the configs, and lost a fair bit of cash selling it on. But when push comes to shove, spending a bit more on a machine that I look forward to using is better than wasting £1,300 on something that I basically avoided using.

Conclusion

I am almost glad I fought with the Dell; if I had just got the Mac first, I might not have realised how much better it was (for me) than the alternative, and might have thought I had spent too much on another Apple device. But now I really appreciate my new machine.

It's a gorgeous machine to work with. I LOVE the new keyboard; I don't really care about the Touch Bar, and if I'm honest I would rather have the fn keys, as it doesn't add any value, but I can live with it. The Dell's main port was Thunderbolt over USB-C, so I already had the cables I needed for the Mac's ports. Lesson learned: think twice before moving out of your comfort zone with technology you are investing a lot in, and don't let internet opinions sway you (including this one, I guess). Try things out, think about how they will fit your workflow, and go with what makes you happy and what you will actually use.

[Image: the new MacBook Pro. Clearly I'm not a photographer, but you got to admit, it's pretty]


Posting to Amazon SQS with .NET Core

Background

I work for a Microsoft Managed Partner during the day, so I spend a LOT of time working with their cloud, Azure; to try and stop myself from becoming boxed into that ecosystem, I try to branch out when I am doing stuff outside work. Recently I wrote a small .NET Core application that I run on a regular basis on a Linode Linux server in the cloud. It's great: it only costs a few bucks a month, and it's easy to monitor and control.

Recently I thought that instead of logging into the website or SSH-ing into the server itself, it would be really nice if I could just look at a website/app to see

  1. Is it running?
  2. How far through its job is it?
  3. How long did the last job take?
  4. Are there any problems?

Really simple stuff. I didn't want to bother with a lot of overhead, and I am no expert on nginx and couldn't be bothered adding a web server to my little server, as that brings issues with firewalls and security that I didn't have time for.

So I decided to go for a simple “reactive” design and just dump out little messages onto a queue and have something grab those messages and update a status based on the last message it processed. Great, I can separate the “Status Updater” from the main application, avoiding a lot of security issues, and I can play around with various technologies for the “Status Client” without having to make any more changes to the main application itself. Just have it post messages, and it doesn’t need to know what is happening with them.

AWS SQS (Simple Queue Service)

I decided to try out Amazon's queue offering, SQS. My biggest issue with AWS is that it is fairly unintuitive to get started with if you just want to jump in for one little thing… I had a lot of trouble getting a basic Elastic Beanstalk site up and running and pointing at a custom domain (I still haven't got it pointing to the apex domain address).

Basically, when you create an application using the AWS SDK for .NET, they want you to add a section to an app config with a bunch of details that it can just read from… well, I figured it would be easier to skip past this, provide the credentials manually and get something working; I can always move to better practice when I fold it into my application. WRONG. It was a bit of a bitch, as the AWS documentation sends you all over the place for authorising things, sending things and different clients (which don't all follow the same design patterns), and most of the tutorials seem to be in Java/Python. All of these things are fine if you are sitting down to do best-practice stuff and know AWS well. But I just wanted to hack together a hello-world console app to see how things worked.

I will go on record saying that you probably SHOULD do things their way; there are other issues with hacking credentials in at compile time… but that's what I wanted to do.

Creating a Queue and IAM user

This post is really focussed on the code side of things; I followed these steps to create an IAM user for authentication with SQS and then create a queue to which that user has full access.

SQS Getting Setup

Creating .NET Core Project

You can download and install .NET Core from the official site here. Simply select your operating system of choice, and follow the instructions.

Create a folder for your project using your UI or a mkdir command, then, from a command prompt/terminal, init a new project using the command

> $ dotnet new

This will create a basic console application shell for you, including a project.json and Program.cs file. These are your basic building blocks for the sample application.

I use Visual Studio Code for my .NET Core development (as there is great C# support in the form of extensions, which should be offered to you if you open the folder you created and initialised earlier). It's a great cross-platform text editor aimed at development; it has built-in Git support and is great for debugging. You can get it here.

In your editor of choice, open up project.json and add the AWS SQS library as a dependency. It should look like this:

[Screenshot: project.json with the AWS SQS dependency added]

You can see that I've added version 3.* to mine, meaning that when you restore the packages in the project, it will get the highest available version with a major version of 3 (e.g. 3.5).
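For reference, a minimal project.json along those lines looks roughly like this; the AWS SQS library is the AWSSDK.SQS package on NuGet, and the exact framework/platform versions will depend on the tooling you installed, so treat it as a sketch:

{
  "version": "1.0.0-*",
  "buildOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "AWSSDK.SQS": "3.*"
  },
  "frameworks": {
    "netcoreapp1.0": {
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.0"
        }
      },
      "imports": "dnxcore50"
    }
  }
}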

Visual Studio Code will prompt you to restore dependencies when you edit this file, or you can download the required packages using restore at the command line

> $ dotnet restore

Now your project is all set up and ready to be used.

Posting and reading from the queue

This is a very simple application that posts a message to a queue and receives one back, just by passing the credentials in the code. I have noticed that AWS seems to handle duplicates nicely (probably via the MD5 hash of the content), so if you send the same message multiple times, it only seems to show up once.

You can get this working for yourself by cloning the repo on GitHub here, or simply by copying from the sample below.
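In essence, the sample boils down to something like the following; treat it as a sketch, with the access keys, region and queue URL as placeholders you will need to swap for your own:

using System;
using System.Threading.Tasks;
using Amazon;
using Amazon.Runtime;
using Amazon.SQS;
using Amazon.SQS.Model;

namespace SqsHelloWorld
{
    public class Program
    {
        public static void Main(string[] args)
        {
            RunAsync().GetAwaiter().GetResult();
        }

        private static async Task RunAsync()
        {
            // Hard-coding credentials is only OK for a throwaway test like this;
            // use a credentials file or an IAM role for anything real.
            var credentials = new BasicAWSCredentials("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY");
            var client = new AmazonSQSClient(credentials, RegionEndpoint.EUWest1);

            // The URL of the queue you created in the SQS console.
            var queueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-status-queue";

            // Post a message onto the queue.
            await client.SendMessageAsync(new SendMessageRequest
            {
                QueueUrl = queueUrl,
                MessageBody = "Hello world"
            });

            // Pull a message back off the queue.
            var response = await client.ReceiveMessageAsync(new ReceiveMessageRequest
            {
                QueueUrl = queueUrl,
                MaxNumberOfMessages = 1
            });

            foreach (var message in response.Messages)
            {
                Console.WriteLine(message.Body);

                // Receiving doesn't remove a message, so delete it once we've handled it.
                await client.DeleteMessageAsync(new DeleteMessageRequest
                {
                    QueueUrl = queueUrl,
                    ReceiptHandle = message.ReceiptHandle
                });
            }
        }
    }
}

After a restore, dotnet run should print "Hello world" back at you.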

Checking the status of your queue

If you log in to your AWS console and navigate to SQS, you will be able to select your queue and check the number of messages in it (don't be alarmed if you used the sample, saw "Hello world" output, but your queue reads 0: after we have output the message, the sample code removes it from the queue). If you add more messages, or comment out the removal portion of the code, you will see messages sitting in your queue, shown like this:

[Screenshot: the SQS console showing messages available in the queue]

And it's as simple as that! I would recommend following the AWS best practices and using a credentials file to properly secure your login information, but for a quick test this will get you up and running and posting to the queue from a cross-platform language in no time.
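For what it's worth, the credentials file AWS recommends lives at ~/.aws/credentials (the same .aws folder under your user profile on Windows) and is just a small INI-style file along these lines:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

With that in place you can construct the AmazonSQSClient without passing any keys in code at all.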

Installing Ubuntu Linux 16.04 beside Windows 10 on Dell XPS 13″ (It’s easy when you know how)

Sorry for the wall of text – Jump to the technical bit

Preamble

tl;dr – I didn’t like the new MacBook Pros, bought a Dell and want to use Linux on it.

After a fairly unimpressive keynote at the October 2016 Apple "Hello Again" event, I chose to vote with my wallet and move away from Apple for the first time in about 8 years. Every phone, TV box, desktop, laptop, tablet (and one watch) that I have bought in those 8 years has been Apple. The increased price, the lack of (currently) useful ports, that gimmicky bar… and, most importantly, the underwhelming spec pushed me away. I bought a Dell XPS 13″ (Kaby Lake i7-7500U) with 16GB RAM, 512GB SSD and a 4K touch screen… it was much higher spec than any affordable MacBook Pro and came in at a lovely £1,350 from PC World.

One of the things that I have wanted to do for a while was to run Linux as my main operating system, to familiarise myself with it, and simply as a challenge. I use Linux (in both CentOS and Ubuntu flavours) at work in virtual machines for running Hadoop sandboxes and Docker hosts, as well as tinkering with Raspbian in my spare time on a few Raspberry Pis at home.

How not to do it

I dove right into things, found a tutorial on installing Linux beside Windows, took it as gospel, then ended up up shit creek without a paddle: a wiped hard drive, two bootable memory keys that couldn't see my hard drive, and no operating system. After hours of fumbling around in the dark and remaking memory keys I managed to install Windows (albeit in legacy boot mode), but at least I didn't own a Dell brick. This wasn't ideal, though, and I wanted UEFI boot mode enabled (I am certain that it's faster, and it's the more up-to-date way to boot anyway). After a bit of reading I discovered that in order to install in UEFI mode, you must make your USB bootable in UEFI mode. After getting that working, I had to change a hard drive setting which meant another install. So if you are going to undertake this task, do your reading, make sure you know what's what and prepare well.

Installing both operating systems

Steps

  1. Download installation media iso files
  2. Prepare UEFI only bootable memory keys
  3. Setup BIOS – AHCI/SATA Mode (Not RAID)
  4. Install Windows 10
  5. Disable fast boot and hibernate in Windows
  6. Shrink drive partition
  7. Boot into Ubuntu Live CD – Leave shrunk partition area blank
  8. Install Ubuntu

Getting Started

1: Download Installation media iso files

First things first: prepare some installation media for the two operating systems. I downloaded ISOs for Windows 10 (Home edition, as that's what the laptop license is for) and Ubuntu 16.04 Desktop, and prepared the media on my Windows machine (although there are tools for Windows, Mac and Linux).

Windows media can be found on your manufacturer's website, MSDN, or Microsoft's site. Ubuntu can be downloaded from here: http://releases.ubuntu.com/16.04/

2: Prepare UEFI only bootable memory keys

Once you have the files downloaded, use a tool like Rufus to burn each of them to a separate memory key (in case you need to start again). I wanted to make sure that I wasn't in legacy boot mode, so I made sure to select UEFI-only for the boot type and not the mixed Legacy/UEFI boot mode. This way, if I accidentally left legacy mode on, the key wouldn't show up.

I really underestimated how important having the BIOS set up properly was to this whole thing, and went in fairly haphazardly, changing whatever it took to get the laptop to boot.

3: Setup BIOS – AHCI/SATA Mode (Not RAID)

Make sure that your BIOS settings are correct. If, like me, the hard drive is set to RAID and not AHCI/SATA and you have to change it, you will need to reinstall Windows, so make sure to do this before you start. I disabled legacy boot mode altogether as I didn't want to make a mistake.

4: Install Windows 10

This bit is relatively simple: put in your Windows 10 USB and tap F12 while the computer starts up, and you should get a boot menu; select the USB memory key (if it isn't showing up, you probably have it set for a legacy install… go back and remake the key to only support UEFI). I deleted all partitions on my hard drive, because I was starting fresh, so Windows sorted out its own partitioning for me; this is the way I would recommend, so you are starting with a clean slate. Click through the install process, and make sure to install Windows from scratch.

Once it is up and running you can either apply a load of updates or just proceed with the process. I would recommend just moving on: if something goes wrong later you might have to reinstall Windows, and all that update time will be wasted.

5: Disable fast boot and hibernate

This is pretty simple. Just open up an administrator command prompt and run the following:

powercfg /h off  

That disables hibernate, and thus fast boot. This matters because Windows locks the hard drive and stores the contents of its RAM in the hibernation file when it hibernates, so it can boot quicker by just copying that data back into memory rather than loading everything from disk again. It can cause issues with the disk being locked, and if you alter a file on that drive from your other operating system, Windows can think the drive is corrupt and barf.

6: Shrink Hard drive partition

This is nice and easy. Open up Disk Management in Windows (hit the Windows key, type 'disk manager' and it should pop up; sometimes it has a name like 'manage drives and partitions'). In this tool, right-click on your Windows partition, choose to shrink it, select the size to shrink by and hit go… your Windows partition will be reduced to whatever size you select and the rest of the drive will be left unallocated.

7: Boot into Ubuntu Live CD – Leave shrunk partition area blank

Shut down your computer and insert the Ubuntu USB (remove the Windows one too), then boot it up again, tapping F12 until you see the boot menu, and boot off the USB.

Once you are in the Linux live CD… don't be tempted to format the unused space to ext4 like I was; the installer then sees the hard drive as full and only wants to wipe it all and install Linux. If you leave the area you freed up by shrinking the Windows partition unallocated, Ubuntu will ask if you want to install alongside Windows, which is what you are after.

8: Install Ubuntu

Double-click the Install Ubuntu app on the desktop; this will take you through the installer. When asked, click 'Install Ubuntu alongside Windows'. I would recommend installing updates and third-party software during the installation (which involves putting the laptop on the network, either wired or by entering the Wi-Fi details), and then let it run.

Once the install finishes and you reboot your laptop, you will be greeted with a maroon screen: this is the bootloader, which lets you select which operating system you want to run when the laptop starts up. Mine defaults to Ubuntu, with Windows two entries further down the list. I didn't want to change that, but you may be able to.

Issues

So this isn't perfect… whether it's a problem with Ubuntu itself or just my laptop's hardware compatibility with it, I have one major issue: I can't suspend the laptop and then wake it up, either manually or by closing the lid, when in Ubuntu. It seems to wake up (the keyboard lights up), but the screen remains blank. I have seen some reports of this being a generic issue with 16.04, but I can't be sure.

The trackpad drivers also don't seem great on Ubuntu: sometimes the cursor skips all over the place, and sometimes it just starts randomly pasting text for no reason… but in general it works well enough that I am happy to use it day to day.

Also, as this laptop has a 4K screen, I set the scaling in Ubuntu to 1.5 to make things a bit easier to see.

Useful links

These links are the ones that I used to help me get the install working:
http://askubuntu.com/questions/696413/ubuntu-installer-cant-find-any-disk-on-dell-xps-13-9350/696646
Shrink and install
Keep partition blank
http://www.everydaylinuxuser.com/2015/11/how-to-install-ubuntu-linux-alongside.html

Using WebStorm 2016.2.3 with Node.js

I recently ran into a little problem setting up a new Node.js project on a Linux (Ubuntu 16 with Cinnamon) VM.

Trying to create a new application, I was asked to pick a 'Node interpreter'… no problem, I thought, and ran a which command for nodejs.

$ which nodejs
/usr/bin/nodejs

Awesome… that’s us done right… nope.

Oh, ok, surely that’s /usr/bin/npm right?

Nope, and don’t call me Shirley.

I will come clean here: I am not a Node expert, and this is a bit new to me. I think at some point there has been a change, and the old 'node' paths and commands have changed to 'nodejs' and things have moved, thus all my Google searches have come up wanting. Either that, or I got apt-get happy and installed some weird version of Node. When I tried to run 'node -v' I got told it wasn't installed, and that I could install it by using apt-get and looking for something called 'nodejs-legacy', so I figured I had the latest version.

I found a Stack Overflow post that suggested running 'npm root -g'

$ npm root -g
/usr/local/lib/node_modules

Great, right? … only that directory doesn't exist on my machine. However, the error said that it couldn't find 'npm-cli.js' under that directory… A quick file search later and I had found it.

/usr/share/npm/bin/npm-cli.js

Bazinga! It worked… I put in the '/usr/share/npm' setting, and it seems to be happy with that. I am not sure what has happened, whether I've installed something wrong, this is just where stuff goes, or I've installed it with some weird config. Maybe someone smarter, or more knowledgeable, than me will come across this and let me know in the comments.
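From what I can gather since, on Debian/Ubuntu the interpreter is shipped as nodejs because the plain node name clashed with another package, and the nodejs-legacy package simply puts the old name back so that tools expecting a node binary can find it:

sudo apt-get install nodejs-legacy

or, if you'd rather not pull in another package, just add the symlink yourself:

sudo ln -s /usr/bin/nodejs /usr/bin/node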


Importance of space

Space… the final frontier.

Well… not quite. But it's really important to me. I value space, both personal and physical. I have lived in a bunch of different houses and apartments over the last decade, and I never really felt I had enough space. Laptops sat in the living room, on the kitchen table, or wherever was the least noisy place in the house. Raspberry Pis were scattered in drawers and gathered dust at the back of a desk, PCs were left whirring in bedrooms… we all know the frustration.

When my girlfriend and I bought our first place together, one of the most important things for me was a place to put the technology, with enough space for me to tinker with various projects.

We got the house from a couple with two young girls; as we don't have any kids (yet), we didn't need a huge number of rooms: one for guests (Georgia's pet project) and one for me to put laptops, desktops, the Xbox, Raspberry Pis, important documents… everything.

It started off as a pink, fairly grimy room, and with a few days of work it's become my favorite room in the house. My 'office'.

[Image: pre and post officification]

Technology has become such a fundamental part of all our lives, yet it's still frequently confined to a secondary status in a lot of our homes. If we ever need an extra bedroom, I am now comfortable enough to say "OK, time to move house" as opposed to giving up the office and having bits of tech scattered about the place. My iMac runs 24/7 as both my main PC and our home media server. The Xbox and 32″ TV give me a place where I can watch a movie or play Xbox with my friends without taking over the main TV in the living room, or escape when programmes about baking, dancing or wedding dresses are on. I also don't feel like I'm being relegated to some damp, cold corner of the house we currently aren't using; I'm in the office.

The desks were made with the IKEA room builder, which is a fantastic piece of software:

[Screenshot: the plan from the IKEA room builder]

It also prints off a list of everything you need, prices it all up, and orders the list so that you can pick up all the bits in the store without going back on yourself. All in all, the office in my pic came to £220, plus about £50 on a good set of paint. A tiny investment for a great creative space.

I now have all the space I could want, all the house's tech is tucked away out of sight, I have somewhere to escape to, and my (much) better half has somewhere to work when she is studying.

You spend thousands of pounds on all your tech; invest a bit of space in it too. A dedicated office allows you to feel much more creative and isolates you from the distractions found in every other room (dogs barking, washing machines running, TV blaring away, etc.). It also means I can set up my work laptop and a nice monitor and work away without having to shove everything off to the side and feel like I'm typing on my knee; the freedom to work from home, and to actually feel like I can properly work, makes life much easier.

I’m not saying kick the child out into the porch, but if you have some spare space, or could make some, consider creating your own tech room/office where you are free from distractions and can really get creative.

Turn your iPad into a second monitor

I’m on the go a fair bit, but always like to find time to work on small projects, do some Pluralsight courses and generally tinker on my laptop when I’m about (or just in the living room with a glass of wine and a movie).

However, I am spoiled by spending most of the day with three 24″ monitors at work. I actually find it really hard to have just one screen for working on anything other than a console app… and even then, googling stuff feels like a bit of a chore with all the minimising of the IDE (first world problems, I know!).

I've gotten around this by using Duet Display by the folks at Duet. It's a great little app that turns an iPad into a second monitor: great for keeping email, Spotify or a web browser on. The screen is really responsive and looks great on my iPad Air (which matches the size of my MacBook Air to within an inch, so it feels really natural to use).

The app is about £13 but it’s well worth it for the convenience of a second display that I was probably carrying with me anyway. Chuck in a pair of headphones and you have a mobile office that fits in the smallest laptop bags!


I've worked for hours on it without really thinking about it. It's more responsive than a cheap USB display adapter and saves carting wires and monitors about the place.

Installation is simple: buy and install the app on your iPad from the App Store, download the desktop client for either Windows or Mac from the Duet Display website for free, install it (a restart might be required), then plug in the iPad, open the app on your tablet and it will show up just like any normal monitor. Use your display settings to align it (you usually have to lower it a bit so windows slide between the two smoothly), as you would any other monitor.

It’s developed by a bunch of ex-Apple engineers, so that would explain why it’s so easy to use. It has the bonus of charging your iPad at the same time too.

Anyway, if you are used to two monitors, give it a bash, and if not, give it a go anyway; you might like it!

My Take on the Connected Pi (Part 1)

Inspiration

I have got about three Raspberry Pis knocking about my house, all in various states of incomplete projects. I love to play with them, and get things working, but I never seem to finish the projects as they always seem to involve a bit of extra tech/resources that I don’t have.

But recently I came across a blog entry by Frederick Vandenbosch where he used a Pi Zero to create a small, internet-connected display with two buttons and a range of functions. I decided that I would give it a try, and committed to building it and extending it. I found his blog entry lacked a few of the steps that can halt a project like this for a basic user, so I have tried to provide a few bits of information that Frederick doesn't mention.

So firstly, thanks to Frederick for the inspiration, you can check out his blog here for lots of cool Pi Projects.

Getting all the parts

For the case, Frederick's project involved a 3D-printed one, and while I would love a 3D printer, I don't have one yet, so I decided to improvise. I chose to take some Perspex, cut it into an acceptable shape, and finish it off with a mix of hot glue and Sugru.

I am using a Raspberry Pi 3 as a development machine, but will be porting the final product over to a Raspberry Pi Zero before adding it into the case.

I have had to do some soldering for the display and the Pi Zero itself; nothing major, and easily doable even for a beginner with a few YouTube tutorials.

I have some parts from an Arduino starter kit (like a breadboard and buttons) that made life a bit easier.

Sugru
Raspberry Pi 3
Raspberry Pi Zero
Soldering Kit
Arduino Display
Arduino Kit

Setting up the Raspberry Pi

Check out this blog post for setting up your Raspberry Pi.

Soldering the display

I used the kit I mentioned above to solder the Adafruit display to the pins that come with it; this means that I can use standard male/female wiring or a breadboard to control the device without soldering more and more bits together. I didn't take any pictures while soldering, as that would have required more hands than evolution blessed me with. There is a good tutorial here on how to solder pins to a PCB. The guy is using an Arduino, but it's a similar size and pin setup.

If everything goes OK, you should end up with something like this. (Remember to put the short ends of the pins on the display side, or you will need to connect wires above the display, which will make it pretty hard to fit in a case.)

[Image: the soldered display. Clearly I'm not an expert, but this is good enough for me!]

We will be wiring up the display and communicating with it using a protocol called I2C; this is a setting that you need to enable on the Pi itself. The best method is to use

sudo raspi-config

The process is described here in great detail. Take note of the two apt-get commands that you need to run to install the libraries required to communicate over I2C; I've jotted down below what I believe they are, in case the link moves.
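If I remember rightly, the two commands in question are these (double-check against the linked guide in case the package names have changed):

sudo apt-get install -y python-smbus
sudo apt-get install -y i2c-tools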

It is also important to note that if you are using the Adafruit SSD1306 display like I am, there are two jumpers on the back of the display that you need to solder over to enable I2C on the device (I was pulling my hair out before realising that I had to do this). It should look like this when complete.

[Image: the back of the display, with the SJ1 and SJ2 jumpers connected]

That is your display ready to be used!

Connecting the display to the Pi

The wiring pattern is the I2C pattern mentioned before. This can be a little slower to respond than something like SPI, but it uses fewer pins, so it is well suited to our fairly compact project.

The wiring is exactly as Frederick mentions in his blog and he provides a great diagram for hooking it up.

[Wiring diagram: the original image is from Frederick's blog and can be found here]

If you have a breadboard like I have, you can use it to connect the display to the Pi without any further soldering. Remember, breadboard rows are connected horizontally, so if you plug your display into rows 1-8 in column A, you can use column B (or any other column) on those same rows to connect to the appropriate pins (as I have done below).

[Image: the display connected to the Pi via the breadboard]

Preparing the software

Now you need to install some libraries on the Pi so that you can interact with the display. Frederick doesn't mention this in his blog, but if you follow the steps here you should be up and running. You need to update the apt package lists, install some Python and Linux build tools, install pip (a Python-specific package manager), install a Python GPIO library and finally a Python imaging library to allow you to construct images to output to the screen. In short:

sudo apt-get update
sudo apt-get install build-essential python-dev python-pip
sudo pip install RPi.GPIO
sudo apt-get install python-imaging python-smbus

Once that is complete, you need to get the Adafruit libraries for interacting with the display. They are all available on the Adafruit GitHub page. That is done as follows

sudo apt-get install git
git clone https://github.com/adafruit/Adafruit_Python_SSD1306.git
cd Adafruit_Python_SSD1306
sudo python setup.py install

This downloads the Adafruit python libraries for talking to the display, and installs/builds them so that they are easy to use.

Modifying the Python files to work with the Pi

Now, this is a step that took me a while to figure out, and I didn't find it documented anywhere. Remote onto your Pi with SSH or Microsoft Remote Desktop (using xrdp), or simply connect it to a monitor; once the Pi is wired up correctly, you need to run the following command:

sudo i2cdetect -y 1

You should get a grid output telling you which address your display is connected on; it looks something like this:

[Screenshot: i2cdetect output grid, with 3d shown]

In the image above, you can see that 3d is shown. That means my display is connected on address 0x3D; this is important in a minute.

On the Pi, navigate to the Adafruit git folder you downloaded. Mine was located in /home/pi/Adafruit_Python_SSD1306. In here, navigate to the "Adafruit_SSD1306" folder and you will see a file called "SSD1306.py".

One of the first lines in there you will see is

SSD1306_I2C_ADDRESS = 0x3C

and you need to change that to

SSD1306_I2C_ADDRESS = 0x3D

(Note the 3D that we spotted earlier)

That alone isn't quite enough. Remember we ran an "install" command a while ago? Well, that was installing a copy of this code, and it's the installed copy that we are running when we reference the library. So we need to re-run the command

sudo python setup.py install

in the root of the Adafruit git folder, i.e. from inside the folder we downloaded with the git command earlier.
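At this point it is worth a quick smoke test before moving on to Frederick's full script. A few lines of Python along these lines should put some text on the screen; it's a rough sketch, so swap the class for SSD1306_128_32 if you have the smaller panel, and use whatever address i2cdetect reported for you:

import time

import Adafruit_SSD1306
from PIL import Image, ImageDraw, ImageFont

# 128x64 OLED over I2C at the address we found with i2cdetect.
# rst is just a GPIO pin number; it does no harm if the breakout's RST isn't wired.
disp = Adafruit_SSD1306.SSD1306_128_64(rst=24, i2c_address=0x3D)
disp.begin()
disp.clear()
disp.display()

# Draw into an in-memory 1-bit image, then push it to the display.
image = Image.new('1', (disp.width, disp.height))
draw = ImageDraw.Draw(image)
font = ImageFont.load_default()
draw.text((0, 0), 'Hello, little screen!', font=font, fill=255)

disp.image(image)
disp.display()
time.sleep(5)

If nothing appears, re-check the address and that the SJ1 and SJ2 jumpers are actually bridged.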

Fonts

Now, Frederick uses a font called "Minecraftia.ttf" in his blog post, and in the comments he mentions where it can be downloaded from. Download the .zip file from http://www.dafont.com/minecraftia.font (on your Pi, using its web browser) and extract it (right-click on the file and choose to extract it here).

I tried a few approaches to installing this font and found that the best way was to copy it to a folder beside the code I am running and reference the file directly.

Time to code

Now, Frederick's prototype has some great code for a little device like this, so I just took his Python script exactly and followed his instructions on how to load it at boot (so it runs as soon as the Pi starts up)… I am not going to try to improve it right now; my only change was to reference the font file in the folder below the script's one, so every line like this:

font = ImageFont.truetype('Minecraftia.ttf', 35)

now looks like this

font = ImageFont.truetype('fonts/Minecraftia.ttf', 35)

Your best bet is to just take his script and save it onto your Pi, and make that change if you are having issues with the fonts. Here is a link to the raw script.

Placement of this script is important; it needs to be able to see the Adafruit libraries and the font we mentioned. I copied the folders and files so my directory structure looks like this:

/home/pi/project/
    -- Adafruit_Python_SSD1306 (the git folder)
    -- info_display.py (Frederick's program)
    -- launcher.sh (Frederick's launcher script)
    -- fonts (folder)
        -- Minecraftia-Regular.ttf

You can also see Frederick's blog for writing a launcher bash script and using a cron job to run it when the Pi boots (you don't have to know what those things are; Frederick explains exactly how to use them to get the script to run when the Pi boots up).

Done!

And that’s it! I copied Frederick’s diagram and added a few buttons to my breadboard to switch between modes, and it worked perfectly. You can also use the IDLE IDE on the Pi itself to run the code by opening the info_display.py file and selecting “Run Module” from the menu, or just reboot the Pi and watch it work. If you run into any errors, run the module through IDLE and it should report any problems.

Hope it all works out well, and thanks very much to Frederick for the blog; this is basically all his work with a few pointers from me!

I’ll post updates soon on constructing a case and migrating the project to a Pi Zero, then onto modifying it to run off batteries and adding new features!

Setting up a Raspberry Pi

Getting Started

First things first: setting up the Raspberry Pi from scratch. There are a load of tutorials and resources on the official site for doing this. You could always download NOOBS (their 'starter' version of the operating system, which is a simple copy and paste onto a memory card, followed by some on-screen instructions when the Pi boots), but my process is as follows:

Download Raspbian

The easiest place to do this is raspberrypi.org, on their Raspbian downloads page.

Install Raspbian to the Micro SD card for the Raspberry Pi

The first stage is formatting the SD card for use. I am using a Mac, so I use SDFormatter; I believe they have a Windows version of the application too. It's pretty simple to use: plug in your SD card (either directly or via an adaptor), make a note of the name of your card in the app, notably the disk number before the main card name (in the screenshot below mine is "disk6"), then click "Format" (see below).

[Screenshot: SDFormatter with the card showing as disk6]

After this you need to install your Raspbian image onto the card. On a Mac I use the built-in command line tool "dd" to do this.

If you aren’t comfortable with command line tools like this, I would recommend using NOOBS.

There is a great tutorial here for installing Raspbian onto an SD card. I used the command

sudo dd bs=1m if=/Volumes/Media/Software/2016-05-27-raspbian-jessie.img of=/dev/rdisk6

My .img file was on an external drive called "Media", and as you can see the last portion says "rdisk6"; as I mentioned above, that is the disk number from when I formatted my card.

If "dd" is working correctly, nothing will appear to happen… I know that sounds counter-intuitive, but there is no console feedback from "dd", and it can take a fair bit of time to write the whole image to the card. I usually leave it for about 10-15 minutes and then come back. If there are no errors, chances are it is still working away. Don't be tempted to hit return again; just leave it for a while, and when it eventually completes you will get some feedback and you'll be returned to the prompt.

That's it! Plug the SD card into the little slot on your Pi, power it on, and after the boot sequence it should log you in! The default user is "pi" and the password is "raspberry"; if you are going to leave the Pi exposed and online outside a firewall, you might want to change this.

Once my Pi is up and running I launch the terminal (the command line for Linux-based systems) and run

sudo apt-get update

which updates my Pi’s records of what packages are available online, then

sudo apt-get upgrade

to actually upgrade any packages installed on my Pi (there are a bunch by default, not to mention the actual Linux kernel itself) to the latest version.

The “sudo” command at the start of these is the Linux version of “Run as Administrator”; it runs the command with elevated privileges as the updates are going to modify system files, and you don’t want non-admin users to be doing that.

You can access other Pi options by running

sudo raspi-config

From the terminal, which will save you editing config files manually.

At this stage, you are good to go with a fully working Pi. Enjoy!
