TestFlight » iOS app testing, tracking, etc.

Wednesday, March 14th, 2012

I’m working on a small iPad project (making a LITE version of an existing app) and the previous developer made me aware of TestFlight.

It’s pretty amazing. If you make iOS apps, you definitely need to sign up.

Once you have an account…

  1. You invite people via email to register their devices.
  2. Once they do that, go to the TestFlight website and export their device IDs. You’ll get a text file.
  3. Log into the iOS provisioning portal and select Devices.
  4. Upload their device IDs
  5. Go to the Provisioning Section
  6. Edit your Provisioning Profile
  7. Put a checkmark next to the devices that you want to add to your profile…these are the testers on whose devices you want your app to run
  8. Submit…wait a bit.
  9. Download the Profile
  10. Drag the file onto the Xcode icon
  11. Now you can create Ad Hoc distributions for those devices
  12. Follow this tutorial to create an Archive file of your app
  13. Log in to your TestFlight account and upload this build
  14. Confirm the users
  15. Your users will get an email. They click on the link in the email from their iOS devices…and the app will install!!!

AIR app – user idle detection

Wednesday, March 14th, 2012

Found out that it is super easy to deal with timeouts in AIR…i.e. when you create kiosk applications, you generally want the activity to time out and go to some sort of attract screen (or reset the activity) when no one has touched the screen, moved the mouse, or pressed a key for a while.

AIR’s NativeApplication class to the rescue:


import flash.desktop.NativeApplication;
import flash.events.Event;

/* idleThreshold is in seconds */
NativeApplication.nativeApplication.idleThreshold = 60;
NativeApplication.nativeApplication.addEventListener(Event.USER_IDLE, handleUserIdle);
NativeApplication.nativeApplication.addEventListener(Event.USER_PRESENT, handleUserPresent);

private function handleUserIdle (e:Event) : void
{
  trace("Idling...");
  //reset activity
  //show attract screen
}

private function handleUserPresent (e:Event) : void
{
  trace("There's a user!");
  //hide attract screen
}

AS3 – messaging between objects, a decoupled approach

Sunday, February 12th, 2012

I’m working on an interactive installation and building a couple of AIR apps. A really useful object that I’ve worked with in the past is a Broadcaster. Think of it as the telephone operator of old…you know, the kind at a switchboard who actually had to connect a wire between the two callers. The Broadcaster is great for keeping objects decoupled from each other. Typically, if you want to send messages between objects, each object has to somehow know of the other object’s existence. You do this by keeping a reference to it.

// This is inside the sender class.
// Save a reference to the receiver
// so we can tell it to do something later.
var receiver:Object = {the_receiver_object};
receiver.reset_activity("fade_out");

With the Broadcaster class, you just tell the Broadcaster that you’re interested in a certain message. Any object in your application can fire that message. Once that is fired, you get an event and it’s up to your class to handle it.

// in the receiver class
// when we receive the "RESET" message,
// run the reset_activity() function.
Broadcaster.subscribe( "RESET", this.reset_activity );

function reset_activity ( e:RichEvent ) : void {
  // do something with the message, e.g. read the parameters that were passed
}

// in the sender class
// fire off the "RESET" message
// and also pass along some parameters
Broadcaster.fire( "RESET" , {transition:"fade_out"} );

Notice that neither class has any knowledge of the other class. They are really just ‘connected’ by the message that they are firing and listening to.

The beauty of this is that any class can fire a message and any class can listen(subscribe) to that message. The Broadcaster does the routing between sender and receiver just like the old telephone switchboard days!
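
To make the routing concrete, here’s a minimal sketch of how a Broadcaster like this could be implemented. This is just an illustration of the idea, not the actual minair code…for simplicity it hands each subscriber a plain object where minair uses its RichEvent class.

package
{
  // Hypothetical minimal Broadcaster: a static registry that maps
  // message names to lists of subscriber callbacks.
  public class Broadcaster
  {
    // message name -> Array of handler Functions
    private static var subscribers:Object = {};

    public static function subscribe ( message:String, handler:Function ) : void
    {
      if ( subscribers[message] == null ) subscribers[message] = [];
      subscribers[message].push( handler );
    }

    public static function fire ( message:String, params:Object = null ) : void
    {
      var handlers:Array = subscribers[message];
      if ( handlers == null ) return;
      for each ( var handler:Function in handlers )
      {
        // hand each subscriber the message name and any parameters
        handler( { type: message, data: params } );
      }
    }
  }
}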

Here’s the awesome minair library that has the Broadcaster class. If you just want to use this messaging architecture, you’ll need to grab the Broadcaster, RichEvent, and Log classes. Good luck!

Configuring git to ignore certain files

Friday, February 10th, 2012

I’ve been working with git on and off for about a month now. I’m on a Mac, and git always picks up changes to the .DS_Store files within directories. Also, when working with Xcode, it picks up changes to the user workspace files, etc.

There are a few ways to tell git to ignore these files and not keep them under version control…mainly:

  1. putting a .gitignore file in a directory and listing filenames and/or filename patterns to ignore (see the sample below) – this can be checked in just like any other file. The effect is that anyone who clones your repo will also ignore these files.
  2. entering the filenames and/or filename patterns in the .git/info/exclude file in your local repo – this only affects your local repo
  3. running the command git config --global core.excludesfile ~/.gitignore_global (.gitignore_global is a file where you specify the ignore patterns) to globally ignore certain files (see the link above for more details) – I haven’t tried this yet, but it seems great so that you don’t have to re-specify which files to ignore every time you work with a git repo
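
For example, a .gitignore_global (or a per-project .gitignore) covering the Mac and Xcode files mentioned above might look something like this…the patterns here are just typical examples, not an exhaustive list:

# Finder metadata
.DS_Store

# Xcode per-user workspace state
xcuserdata/
*.xcuserstate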


Tool for Marine Biologist

Sunday, January 15th, 2012

My brother-in-law is a marine biologist and he often does ocean surveys as a contractor. Recently, he did a survey to measure the amount of eel grass in a certain area. The way that he does this is via scuba and a GPS unit. He essentially gathers data in a methodical way (I’ll spare you the details) and then uses the data to generate a black & white pixel map. Black areas indicate that eel grass is present. He then usually brings this map into a really expensive piece of software to calculate the total area of eel grass.

I recently happened to be tooling around in Processing and decided to make a very quick and dirty tool to do this calculation. You run the program, pick the image, set the pixel measurement for your area unit, and the software tells you how many area units your image contains. The interface is not pretty…just functional.
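
The core of the calculation is simple…count the black pixels and divide by the number of pixels that make up one area unit. The actual tool is a Processing sketch, but here’s the same idea sketched in AS3 (the function and parameter names are mine, just for illustration):

import flash.display.BitmapData;

// Count the black (eel grass) pixels in the map and convert them to area units.
// pixelsPerUnit is the number of pixels that make up one area unit,
// as set by the user.
function countAreaUnits ( map:BitmapData, pixelsPerUnit:Number ) : Number
{
  var blackPixels:int = 0;
  for ( var y:int = 0; y < map.height; y++ )
  {
    for ( var x:int = 0; x < map.width; x++ )
    {
      // getPixel() returns the RGB value without alpha; 0x000000 is black
      if ( map.getPixel( x, y ) == 0x000000 ) blackPixels++;
    }
  }
  return blackPixels / pixelsPerUnit;
}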

PrimeSense 3D sensor

Thursday, January 5th, 2012

PrimeSense has partnered with ASUS and entered the market with their own 3D sensor, the Xtion Pro Live. (There’s also the Xtion Pro model, which doesn’t have an RGB camera…so if you’re just interested in getting 3D data but not actual image data, that’s the version for you.)

I can’t get one of these yet…just got the Kinect to play with. But here are the comparisons.

[Side-by-side comparison images: Kinect vs. Xtion Pro Live, with links to each device’s full specs]

Moving Time Machine backups to a bigger drive

Thursday, January 5th, 2012

I have a drive with 3 partitions (The Big One, Rachel Time Machine, James Time Machine). I had quite an ordeal trying to move each TM backup to a new/bigger drive. Initially I tried doing a straight copy from the Finder…i.e. just dragging the Backups.backupdb directory to the new drive. It worked for Rachel Time Machine (took probably 3 or 4 hours). I also found out that a straight file copy is not efficient because it doesn’t preserve the hard links (logical pointers to files) but instead duplicates the files that the links are pointing to (see patrix’s comments in the link above). After copying the files over, I pointed Rachel’s laptop to her new Time Machine drive and I can see all of the older backups. Great!

Migrating James Time Machine to a new drive was not so smooth. Doing a straight file copy took forever (something like 5 to 6 hours)…I basically let it run overnight. In the morning I woke up to find that it didn’t work. Here’s the ordeal described in a Stack Exchange post. As you can see from the post, trying to copy using OS X’s Disk Utility didn’t really work the way I wanted it to. It basically cloned my drive so that my NEW drive was just as small as my original (I think it basically created a small partition…but the weird thing is that if you look at the disk in Disk Utility, there is only one partition and it should be 1TB, the size of the new drive…hmmm). Carbon Copy Cloner didn’t work either. Finally, SuperDuper! did the trick!!!

I have to thank this post from Neil’s Apple Blog for mitigating some of the frustration along the way…it was comforting to see that someone else had the same issue and that a solution was possible!

Kinect…back on it

Sunday, December 11th, 2011

Got a Kinect from Best Buy today for $99. Holiday prices I guess…good deal eh? :)

Starting to play with the Kinect again. I did this at work earlier this year when I was still on the East Coast. Back then we were testing out its capabilities to determine whether it would be a good platform to use for a science museum activity. We got the OpenNI and NITE (from PrimeSense…the company that actually developed the hardware in the Kinect sensor) samples working just to see how well skeleton tracking works.

Fast forward to the present…

So, I’ve just downloaded Cinder and their Kinect CinderBlock (their term for a library) and ran the samples. Pretty neat. It essentially lets you get the sensor information from the Kinect and also control some of its hardware, such as the motor that adjusts the Kinect’s tilt. Here’s their example of a visual point cloud composed from the Kinect’s depth information.

PrimeSense and, recently, Microsoft have released SDKs to help you work with the Kinect. These SDKs also come with libraries that help you interpret and make sense of the Kinect sensor information. I.e., they take the raw color and depth info from the cameras and, if a person is standing in front of it, will give you information about the body…such as the location of the head, hands, and joints. Without them, the info from the cameras is essentially just color information and depth information.

Here’s an article comparing the features of Microsoft’s SDK and PrimeSense’s SDK. The one thing that I see as a major benefit for Microsoft is that there’s no calibration pose needed. From an interaction designer’s perspective, this is huge because users don’t expect a calibration step. It would be best if the technology just works!

Pomodoro Technique

Monday, December 5th, 2011

I recently read about the Pomodoro Technique in a post on the Stack Exchange forum on personal productivity.

At its most basic, it’s a way to help you have focused intervals of work with minimal distractions. It also helps with goal setting, task management, time management, and time estimation.

The gist of it is this (iterate over the items in your todo list):

  1. Set a timer for 25 minutes (this timer and time unit is called the ‘Pomodoro’)…work on the task without interruptions
  2. Mark an X next to your todo (this helps you keep track of how many Pomodoros you’re using for this task)
  3. If you’ve finished the task, cross it out from your todo list.
  4. Take a 5 minute break. During the break, do not do anything related to the task. You can check your email, etc.
  5. If this is your 4th Pomodoro, take a longer break…30 minutes.

There’s a lot more to this technique, but at its heart is this cycle of work intervals and breaks.

It’s been helping me so far. I really like the short bursts of focus. I have lots of things to juggle now, so this is a great technique to help me organize my time.

Stanford AI Class

Thursday, November 17th, 2011

I’m taking the Stanford AI Class, which started about 6 weeks ago. It’s been eye-opening and a welcome challenge. It’s essentially an online version of the AI class taught at Stanford, and this is the first time they are offering it online, open to all…over 85,000 students are enrolled. I’m a bit behind on the lectures because of the arrival of baby Maya. So, I should stop blogging and get back to class!