Tuesday
Jul292014

"Stay away from Buffalo Hard Drives"

That's what they told me at The Oliver Group, a local data recovery company where I took my Buffalo DriveStation Axis Velocity external 3TB hard drive. "They have their own software that sits on a controller board between the drive and the OS. We've seen it cook the hard drives to the point of physical damage."

I tested the theory by taking the Seagate 3TB drive out of the enclosure and connecting it to my PC with a USB3 dock I use for many other drives. It turns out Windows sees it as an uninitialized drive, and only in Disk Management. Pop it back in the Buffalo enclosure and everything is there.

It all started when I was recording a .NET Rocks! show with Ted Pattison. The recording stopped abruptly and I was unable to access the files I had just been recording to. Other files started giving me trouble as well. I could read the directory data just fine. There was no typical skipping or churning noise coming from the drive indicating bad sectors. There was only a problem when I went to copy or otherwise access the files. The OS hung like it had no idea how to deal with the issue. That's because it didn't.

The Oliver Group was able to restore all the data onto a new hard drive except for about 50MB worth of files, some of which hadn't been touched in over a year. $500 later, as the technician was handing me my drives he said, "A word of warning: stay away from these Buffalo external drives."

And so I shall.

Monday
Jun232014

Prepare your Mac for use with Xamarin.iOS.

Before you can get started with Xamarin.iOS, whether or not you plan to use Xamarin Forms, you must prepare your Mac for network use. This document will show you how to configure the network, install VNC for remote access, and connect using either a crossover cable or a network hub.

The Connection

I recommend making a hard Ethernet connection to your Mac using a Static IP address. You can either use a network hub or a crossover cable to make the physical connection.

If you use a hub or switch, connect both your Windows machine and the Mac to the hub/switch using standard network patch cables. If you want to avoid the extra hardware, just connect the Windows machine directly to the Mac using a crossover cable, a network cable in which the Transmit and Receive wires are crossed. You can get one online at MonoPrice for less than $2.00.

Select an IP Address for your Mac

If you're already using the network port on your Windows machine, you'll have to find an available IP address. If you're behind a NAT router your Windows IP address will be something like 192.168.X.X or 10.1.X.X. That means you need to know what IP addresses are available and which are reserved for DHCP. You can find this by connecting to your router as an admin.

First, we need to find the address of the router. This is listed as your Default Gateway in your network settings. You can find it by opening a command prompt and typing "ipconfig /all". My default gateway is 192.168.1.1. Connect to this address with your browser ("http://192.168.1.1"), log in as the admin, and find your DHCP settings. There will be a range of addresses reserved for DHCP. In my case, the range is 192.168.1.2 to 192.168.1.100. That means we can safely use 192.168.1.101 and higher for static IP addresses.

My Windows machine actually has a static IP address of 192.168.1.101, so I will assign my Mac Mini to use 192.168.1.102.

If you're not using the Ethernet port on your Windows machine, you can give it any IP address you like, as long as it doesn't conflict with the address of any other adapter on your machine. To be safe, make up something non-standard: give yourself 100.9.33.1 and give the Mac 100.9.33.2. The rest of this document assumes the first scenario: 192.168.1.101 for the Windows machine and 192.168.1.102 for the Mac.
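If you want to double-check your choice, the "is this address outside the DHCP range?" logic is easy to sketch in C#. This is just an illustration; the helper and its name are mine, not part of any tool:

```csharp
using System;
using System.Net;

class StaticIpCheck
{
    // Returns true if candidate is in the same /24 subnet as the DHCP pool
    // but falls outside the pool itself, so it's safe to use as a static address.
    public static bool IsSafeStaticIp(string candidate, string dhcpStart, string dhcpEnd)
    {
        uint ToUInt(string ip)
        {
            var b = IPAddress.Parse(ip).GetAddressBytes();
            return ((uint)b[0] << 24) | ((uint)b[1] << 16) | ((uint)b[2] << 8) | b[3];
        }

        uint c = ToUInt(candidate);
        uint lo = ToUInt(dhcpStart);
        uint hi = ToUInt(dhcpEnd);

        // same /24 network as the DHCP pool, but not inside it
        bool sameSubnet = (c & 0xFFFFFF00) == (lo & 0xFFFFFF00);
        return sameSubnet && (c < lo || c > hi);
    }

    static void Main()
    {
        // My router hands out 192.168.1.2 - 192.168.1.100 via DHCP,
        // so .101 and .102 are fair game.
        Console.WriteLine(IsSafeStaticIp("192.168.1.102", "192.168.1.2", "192.168.1.100")); // True
        Console.WriteLine(IsSafeStaticIp("192.168.1.50", "192.168.1.2", "192.168.1.100"));  // False
    }
}
```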

Windows Network Settings

IP Address: 192.168.1.101

Subnet Mask: 255.255.255.0

Default Gateway: 192.168.1.1

Refer to this document if you don't know how to change your IP address on Windows.

Mac Network Settings

Open up System Preferences on the Mac and navigate to the Network screen. Select Ethernet Connected and enter the following settings:

IP Address: 192.168.1.102

Subnet Mask: 255.255.255.0

Default Gateway: 192.168.1.1


Add DNS Server

Click the Advanced button and navigate to the DNS Tab. Click the + button in the bottom left, and enter 192.168.1.1 (or whatever your Default Gateway address is). Click the Apply button. You are networked!
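Once both machines are configured, it's worth verifying the link before moving on. You can do it from a command prompt with ping, or, this being a .NET shop, with a few throwaway lines of C# (the address is the example one assumed above):

```csharp
using System;
using System.Net.NetworkInformation;

class PingMac
{
    static void Main()
    {
        // 192.168.1.102 is the static address we gave the Mac above.
        using (var ping = new Ping())
        {
            PingReply reply = ping.Send("192.168.1.102", 2000);
            Console.WriteLine(reply.Status == IPStatus.Success
                ? "Mac is reachable"
                : "No reply: " + reply.Status);
        }
    }
}
```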

Install VNC

VNC is a remote-desktop app with support for every platform. You can get it for free from http://realvnc.com. Go there with Safari on your Mac and download VNC. Make sure you download the complete version, not just the viewer. Follow the installation instructions.

Run VNC Server

Run the VNC Server app on the Mac and configure a password from the Options menu.

Once you enter a password you're good to go.

Install VNC on your Windows machine, run the VNC Viewer, enter the Mac's IP address and the VNC password, and you should be able to connect.

Now you are ready to rock and roll in the world of Xamarin.


Monday
Jun022014

My findings after day 1 with Xamarin Forms

Xamarin Forms

I couldn't resist any longer. I dove in. Full on. Mac Mini, Galaxy S5, iPad Air. Got it all set up in my hotel room awaiting the commencement of the awesomeness that is the Norwegian Developer's Conference. Here's what I found after my first day.

First of all, Xamarin Forms is the latest abstraction from Xamarin that lets you build a solution in Visual Studio with projects for iOS, Android, and Windows Phone all sharing a single UI layer which you can build in XAML and C#. Kind of.

XAML and not XAML

The XAML you know and love is in there somewhere, but it's not all there. Xamarin has its own controls with their own properties and bindings, and for obvious reasons. Still, it's XAML, and XAML is awesome.

No Intellisense

Unless I'm missing something, the XAML editor has no IntelliSense, which brings you back to the dark ages a bit. Please, someone tell me I'm wrong and that there's a tweak somewhere that I didn't find.

Shared Projects vs Portable Class Libraries

These are the two project types via which you can use Xamarin Forms. I chose to go down the Shared Projects rabbit hole. The main UI project doesn't compile an assembly. Each platform-specific project calls for the UI, which gets compiled to native code on that platform. Sweet, but there are some drawbacks.

No ValueConverters in Shared Projects?

I could not find a workaround to this one. Since ValueConverters are made accessible as local resources, and local resources depend on assembly bindings, and Shared Projects do not produce assemblies, voila: not supported. I looked for a hack for hours. No luck. The ugly workaround is to create bindable formatted string properties. A better workaround is to move your ValueConverters to a PCL, or use a Portable project.
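For what it's worth, the "bindable formatted string property" workaround looks something like this. This is a hypothetical viewmodel of my own, not Xamarin code; the idea is that the formatting a ValueConverter would have done happens in a read-only property instead:

```csharp
using System;
using System.ComponentModel;

// Instead of binding Price through an IValueConverter,
// bind to PriceDisplay, which does the formatting itself.
public class ProductViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    decimal price;
    public decimal Price
    {
        get { return price; }
        set
        {
            price = value;
            // Raise change notifications for both properties so
            // anything bound to PriceDisplay updates too.
            OnPropertyChanged("Price");
            OnPropertyChanged("PriceDisplay");
        }
    }

    // The "converter" logic lives here now.
    public string PriceDisplay
    {
        get { return Price.ToString("C"); }
    }

    void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));
    }
}
```

In the XAML you'd then bind `Text="{Binding PriceDisplay}"` and skip the converter entirely.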

iOS doesn't like empty string fields when binding

For iOS, the Text property of a Label must be initialized to a non-empty string before bindings will work. I found this out by binding the Text property of a Label to a string property of an object. While it worked on Android and Windows Phone, on iOS I had to initialize the bound property to some non-empty string value or the binding wouldn't work.
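The workaround I ended up with, roughly (a hypothetical viewmodel; the names are mine): seed the bound property with a single space instead of an empty string.

```csharp
public class StatusViewModel
{
    public string StatusText { get; set; }

    public StatusViewModel()
    {
        // Seed with a single space, not "": on iOS a Label bound
        // to an initially-empty string never picked up updates.
        StatusText = " ";
    }
}
```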

The Good News

Despite all that, I was able to make an app that displayed location data in real time on all three platforms in one day, setup to done. That never would have happened if I had to learn Objective-C and Java.

Good job Xamarin.


Saturday
Jan112014

Simplifying Kinect for Windows 1.x Skeleton Drawing

KinectTools is an abstraction over the code that handles the Kinect v1 sensors, Skeleton data, and drawing the skeleton. It exposes Brush and Pen properties for drawing so you have control over the look of your skeleton. It can also place a PNG file over the head as you move around, providing hours of jocularity.

If you've done any work with the Kinect for Windows 1.x SDK you've probably already created an abstraction such as this. But if you haven't, here's a nice one for you.

What's cool about this is that it uses the term Body, which is what SDK 2.0 calls a Skeleton. I've also written this abstraction for SDK 2.0 (currently in pre-release), so using this will get you ready for the future. The next version of the GesturePak Recorder and sample code uses this abstraction as well.

Here's a very simple WPF app that uses the KinectTools SimpleBodyEngine class to draw the skeleton in real time, put an image of my head on top of it, and turn the background of the window red if you move your right hand out in front of your body 1/3 of a meter.

XAML:
<Window x:Class="SimpleBodyEngineTest.MainWindow"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="600" Width="800">
    <StackPanel>
        <StackPanel Orientation="Horizontal">
            <TextBlock Text="Sensor: "/>
            <TextBlock Text="{Binding SensorStatusName}" />
        </StackPanel>
        <Image x:Name="BodyImage" Source="{Binding ImageSource}" Stretch="None" />
    </StackPanel>
</Window>
Code:
using System;
using System.Windows;
using System.Windows.Media;
using Microsoft.Kinect;
using KinectTools;

namespace SimpleBodyEngineTest
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        // This object makes handling Kinect Skeleton objects easy!
        SimpleBodyEngine BodyEngine = new SimpleBodyEngine();

        public MainWindow()
        {
            InitializeComponent();
            // event handlers
            this.Closing += MainWindow_Closing;
            BodyEngine.BodyTracked += BodyEngine_BodyTracked;

            // put Carl's head on top of the Skeleton
            BodyEngine.HeadImageUri = new Uri("carl.png", UriKind.Relative);
            
            // bind the XAML to the SimpleBodyEngine
            this.DataContext = BodyEngine;
        }

        /// <summary>
        /// This event fires when a *tracked* skeleton frame is available
        /// </summary>
        /// <param name="sender"></param>
        /// <param name="e"></param>
        void BodyEngine_BodyTracked(object sender, BodyTrackedEventArgs e)
        {
            // Get the Z position of the hand and spine
            var hand = e.Body.Joints[JointType.HandRight].Position.Z;
            var spine = e.Body.Joints[JointType.Spine].Position.Z;

            // if the hand is at least 1/3 meter in front of the spine
            // (smaller Z means closer to the sensor)...
            if (spine - hand > .3)
                // turn the background red
                Background = Brushes.Red;
            else
                Background = Brushes.White;
        }

        void MainWindow_Closing(object sender, System.ComponentModel.CancelEventArgs e)
        {
            BodyEngine.Dispose();
        }
       
    }
}
Forget about the hundreds of lines of code to draw the Skeleton. If you just want to handle the data, read this blog post I wrote on the basics of Skeleton tracking. This code is so simple. Put up an image and bind it to a property. Create a new SimpleBodyEngine object and make it the DataContext. Done.

Download the code here and enjoy.

Carl
Friday
Jan102014

Simplifying Speech Recognition with .NET

Over the many years I've been using .NET's Speech.Recognition features, I've learned a few things. Now I've encapsulated all that goodness into one class: SpeechTools.SpeechListener.

A big problem with Speech Recognition is false positives. You only want the computer to interpret your speech when you are speaking to it. How does it know that you're talking to your friend rather than issuing a command?

The answer comes from our friend Gene Roddenberry, creator of Star Trek. Any time Kirk wanted to talk to the computer he'd first say the word "Computer." That little prompt is enough to wake up the computer and have it listen for the next phrase. To let Kirk know that it was listening, the computer would make a bleepy noise. We can do the same.

Another thing we can do is determine whether the word or phrase was spoken by itself or as part of a larger phrase or sentence. You only want the computer to respond to a command when the command is spoken by itself. If you speak the command or phrase as part of a longer sentence, it should be ignored.

Above all, the code for speech recognition should be much easier than it is. If I just want to recognize a set of phrases or commands, it shouldn't require hours of learning about grammars and builders and all that jazz.

SpeechListener simplifies all of that. Take a look at this demo app window:

SpeechTools Demo

The app is ready to test without modification. Just press the "Start Listening" button.

By default, our wake up command is the word "Computer," but it can be anything you like. Say the wake up command by itself. SpeechTools plays the Star Trek Computer wake up wave file (provided) and fires a WakeUp event for your convenience.

At this point it is listening for any of the given phrases. Say "Is it lunch time yet?" and a SpeechRecognized event will fire, passing the actual SpeechRecognizedEventArgs object from the API. To recognize another phrase, repeat the whole process, starting with the wake-up command.

Now, check out the code. First the XAML:
    <Window x:Class="SpeechToolsDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Speech Tools Demo" Height="350" Width="525">
        <StackPanel>
            <StackPanel x:Name="SettingsPanel" HorizontalAlignment="Left">
                <CheckBox x:Name="ListenForWakeUpCommandCheckBox"
                    Margin="10,10,0,0"
                    FontSize="14"
                    IsChecked="True">Listen for Wake-up Command</CheckBox>

                <StackPanel Margin="10,10,0,0" Orientation="Horizontal"
                    IsEnabled="{Binding ElementName=ListenForWakeUpCommandCheckBox, 
                    Path=IsChecked}" >
                    <TextBlock FontSize="14"
                        Text="Wake-up Command:" />
                    <TextBox x:Name="WakeUpCommandTextBox"
                        Margin="10,0,0,0"
                        Width="200"
                        FontSize="14"
                        Text="Computer" />
                </StackPanel>

                <TextBlock Margin="10,20,0,0"
                    FontSize="14"
                    Text="Enter words or phrases to recognize, one per each line:" />
                <TextBox x:Name="PhrasesTextBox"
                    Margin="10,10,0,0"
                    FontSize="14"
                    Width="450"
                    Height="130"
                    VerticalScrollBarVisibility="Visible"
                    HorizontalScrollBarVisibility="Visible"
                    TextWrapping="NoWrap"
                    SpellCheck.IsEnabled="True"
                    AcceptsReturn="True" />
            </StackPanel>

            <Button x:Name="ListenButton"
                HorizontalAlignment="Left"
                Margin="10,10,0,0"
                FontSize="14"
                Width="100"
                Content=" Start Listening " />
            <TextBlock x:Name="HeardTextBlock"
                Margin="10,10,0,0"
                FontSize="16" />
        </StackPanel>
    </Window>
Fairly straight ahead here. Now for the wonderful part. The Code:
        using System;
        using System.Windows;

        namespace SpeechToolsDemo
        {
            /// <summary>
            /// Interaction logic for MainWindow.xaml
            /// </summary>
            public partial class MainWindow : Window
            {
                SpeechTools.SpeechListener listener = null;
                // set properties: Build Action = None, 
                // Copy to Output Directory = Copy Always
                string WakeUpWavFile = "computer.wav";

                public MainWindow()
                {
                    InitializeComponent();
                    listener = new SpeechTools.SpeechListener();
                    listener.SpeechRecognized += listener_SpeechRecognized;
                    listener.WakeUp += listener_WakeUp;
                    // seed the Phrases. You can change them, of course!
                    this.PhrasesTextBox.Text = "This is cool\n" + 
                        "Is it lunch time yet?\n" +
                        "Let's Party";
                    this.ListenButton.Click += ListenButton_Click;
                }

                void listener_WakeUp(object sender, 
                    System.Speech.Recognition.SpeechRecognizedEventArgs e)
                {
                    // This event fires when you speak the wake-up command
                }

                void listener_SpeechRecognized(object sender, 
                    System.Speech.Recognition.SpeechRecognizedEventArgs e)
                {
                    // Fires when a phrase is recognized
                    HeardTextBlock.Text = DateTime.Now.ToLongTimeString() + ": " + e.Result.Text;
                }

                void ListenButton_Click(object sender, RoutedEventArgs e)
                {
                    if (ListenButton.Content.ToString() == " Start Listening ")
                    {
                        // use a wake up command for added accuracy
                        if (ListenForWakeUpCommandCheckBox.IsChecked == true)
                            listener.WakeUpOnKeyPhrase(WakeUpCommandTextBox.Text, 
                                true, WakeUpWavFile);
                        // set the phrases to listen for and start listening
                        listener.Phrases = PhrasesTextBox.Text;
                        listener.StartListening();
                        // UI stuff
                        SettingsPanel.IsEnabled = false;
                        ListenButton.Content = " Stop Listening ";
                    }
                    else
                    {
                        listener.StopListening();
                        // UI stuff
                        SettingsPanel.IsEnabled = true;
                        ListenButton.Content = " Start Listening ";
                    }
                }
            }
        }

You don't have to use a wake-up command, of course, but if you want to, just call WakeUpOnKeyPhrase, passing the phrase. If you want SpeechListener to play a WAV file when it "wakes up" - a nice little extra touch - pass true for the second argument (PlayWaveFileOnWakeUp) and pass the WAV file name as the third parameter. If you don't want to play a WAV file, just pass false and an empty string.

The Phrases property takes a CRLF delimited string of phrases and internally creates a grammar from it. Just set Phrases to the words and phrases you want it to recognize.
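Internally, turning that CRLF-delimited string into something a grammar can be built from is just a split. This is my guess at the shape of it, not SpeechListener's actual source:

```csharp
using System;

class PhraseParsing
{
    // Turn the CRLF-delimited Phrases string into an array
    // suitable for building a Choices object.
    public static string[] ParsePhrases(string phrases)
    {
        return phrases.Split(
            new[] { "\r\n", "\n" },
            StringSplitOptions.RemoveEmptyEntries);
    }

    static void Main()
    {
        var parsed = ParsePhrases("This is cool\nIs it lunch time yet?\nLet's Party");
        Console.WriteLine(parsed.Length); // 3
    }
}
```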

Finally, call StartListening().

If you called WakeUpOnKeyPhrase prior to StartListening, nothing happens until you utter the wake-up command, at which point the WakeUp event fires. Now SpeechListener is waiting for you to speak one of the phrases. It will either fire the SpeechRecognized event or nothing at all, after which you'll have to speak the wake-up command again to repeat the process. This pattern continues until you call StopListening().

If you are not using a wake-up command, the SpeechRecognized event will continue to fire until you call StopListening().

If you want more fine-grained access to the properties and events, just access the public RecognitionEngine field, which exposes the SpeechRecognitionEngine object used internally. You can party on all the events if you like.

How it works - Sentence Detection

Before we can determine if a word or phrase is part of a sentence, we have to create a Grammar that allows for wild cards (undefined speech) on either side of the phrase. Here's the code that I use. I create two extra GrammarBuilder objects containing wild cards. One goes before the choices, and another one goes after. This is important, and you'll see why in a minute.
public Grammar CreateGrammar(string[] phrases)
{
    Grammar g;

    // first, put the phrases in a choices object
    var choices = new Choices(phrases);

    // create a grammar builder to prepend our choices
    var beforeBuilder = new GrammarBuilder();
    // append a wildcard (unknown speech)
    beforeBuilder.AppendWildcard();
    // create a semantic key from the builder
    var beforeKey = new SemanticResultKey("beforeKey", beforeBuilder);

    // do the same three steps to create a "wild card" to follow our choices
    var afterBuilder = new GrammarBuilder();
    afterBuilder.AppendWildcard();
    var afterKey = new SemanticResultKey("afterKey", afterBuilder);

    // create the main grammar builder
    var builder = new GrammarBuilder();
    builder.Culture = RecognitionEngine.RecognizerInfo.Culture;
    builder.Append(beforeBuilder);
    builder.Append(choices);
    builder.Append(afterBuilder);

    // create a new grammar from the final builder
    return new Grammar(builder);
}
The function IsPartOfSentence determines if a RecognitionResult is part of a sentence by checking the Words collection. The word "..." denotes a wild card (undefined or unknown speech). So, if the word "..." is in the Words collection, we can safely ignore it because it was spoken in the context of a bigger phrase.
public bool IsPartOfSentence(RecognitionResult result)
{
    foreach (var word in result.Words)
    {
        if (word.Text == "...")
            return true;
    }
    return false;
}
The rest of the code is fairly straight ahead, except for one thing that drives me nuts about Speech Recognition. Typically, if you want to interact using speech, you say something, the PC responds (typically with the Speech.Synthesis.SpeechSynthesizer), and then you want it to start listening again for more commands or phrases, which may be different from the last ones depending on what you want to do.

But here's the thing. When you recognize speech asynchronously you handle an event. If you want to change up what you're doing, you have to get out of this thread to let the calling code complete. Fact of life, but still a PITA. So, to get around this I implement an old favorite pattern of starting a 1 millisecond timer just one time. It's quick and easy and it works without ceremony. To get access to the SpeechRecognizedEventArgs parameter, I just stuff it in the timer object's tag.
System.Windows.Threading.DispatcherTimer GetOutOfThisMethodTimer;

public SpeechListener()
{
    // other initialization code here
    GetOutOfThisMethodTimer = new System.Windows.Threading.DispatcherTimer();
    GetOutOfThisMethodTimer.Interval = TimeSpan.FromMilliseconds(1);
    GetOutOfThisMethodTimer.Tick += GetOutOfThisMethodTimer_Tick;
}

void SpeechRecognitionEngine_SpeechRecognized(object sender, 
   System.Speech.Recognition.SpeechRecognizedEventArgs e)
{
    GetOutOfThisMethodTimer.Tag = e;
    GetOutOfThisMethodTimer.Start();
}

void GetOutOfThisMethodTimer_Tick(object sender, EventArgs e)
{
    GetOutOfThisMethodTimer.Stop();
    var obj = GetOutOfThisMethodTimer.Tag;
    GetOutOfThisMethodTimer.Tag = null;

    if (obj == null)
    {
        StartListening();
        return;
    }

    var args = (SpeechRecognizedEventArgs)obj;
    // handle the recognized speech here, now that the engine's
    // event handler has been allowed to return
}

Download the code here and enjoy!

- Carl