Superpatterns
Pat Patterson on the Cloud, Identity and Single Malt Scotch

17 March 2016

Uploading data to the Salesforce Wave Analytics Cloud

As you might know from my last post, I moved from Salesforce to StreamSets a couple of weeks ago. It didn't take long before I was signing up for a fresh Developer Edition org, though! I'm creating a StreamSets destination to allow me to write data to Wave Analytics datasets, and it's fair to say that the documentation is... sparse. Working from the Wave Analytics External Data API Developer Guide and Wave Analytics External Data Format Reference (why are these separate docs???), and my understanding of how Salesforce works, I was able to put together a working sample Java app that creates a dataset from CSV data.

Here's the code - I explain a few idiosyncrasies below, and reveal the easiest way to get this working with Wave.

package wsc;

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

import com.sforce.soap.partner.Connector;
import com.sforce.soap.partner.Error;
import com.sforce.soap.partner.PartnerConnection;
import com.sforce.soap.partner.QueryResult;
import com.sforce.soap.partner.SaveResult;
import com.sforce.soap.partner.sobject.SObject;
import com.sforce.ws.ConnectionException;
import com.sforce.ws.ConnectorConfig;

public class Main {

	// Describes the data we'll be uploading
	static String metadataJson = 
			"{\n" +
			"    \"fileFormat\": {\n" +
			"        \"charsetName\": \"UTF-8\",\n" +
			"        \"fieldsDelimitedBy\": \",\",\n" +
			"        \"fieldsEnclosedBy\": \"\\\"\",\n" +
			"        \"numberOfLinesToIgnore\": 1\n" +
			"    },\n" +
			"    \"objects\": [\n" +
			"        {\n" +
			"            \"connector\": \"AcmeCSVConnector\",\n" +
			"            \"description\": \"\",\n" +
			"            \"fields\": [\n" +
			"                {\n" +
			"                    \"description\": \"\",\n" +
			"                    \"fullyQualifiedName\": \"SalesData.Name\",\n" +
			"                    \"isMultiValue\": false,\n" +
			"                    \"isSystemField\": false,\n" +
			"                    \"isUniqueId\": false,\n" +
			"                    \"label\": \"Account Name\",\n" +
			"                    \"name\": \"Name\",\n" +
			"                    \"type\": \"Text\"\n" +
			"                },\n" +
			"                {\n" +
			"                    \"defaultValue\": \"0\",\n" +
			"                    \"description\": \"\",\n" +
			"                    \"format\": \"$#,##0.00\",\n" +
			"                    \"fullyQualifiedName\": \"SalesData.Amount\",\n" +
			"                    \"isSystemField\": false,\n" +
			"                    \"isUniqueId\": false,\n" +
			"                    \"label\": \"Opportunity Amount\",\n" +
			"                    \"name\": \"Amount\",\n" +
			"                    \"precision\": 10,\n" +
			"                    \"scale\": 2,\n" +
			"                    \"type\": \"Numeric\"\n" +
			"                },\n" +
			"                {\n" +
			"                    \"description\": \"\",\n" +
			"                    \"fiscalMonthOffset\": 0,\n" +
			"                    \"format\": \"MM/dd/yyyy\",\n" +
			"                    \"fullyQualifiedName\": \"SalesData.CloseDate\",\n" +
			"                    \"isSystemField\": false,\n" +
			"                    \"isUniqueId\": false,\n" +
			"                    \"label\": \"Opportunity Close Date\",\n" +
			"                    \"name\": \"CloseDate\",\n" +
			"                    \"type\": \"Date\"\n" +
			"                }\n" +
			"            ],\n" +
			"            \"fullyQualifiedName\": \"SalesData\",\n" +
			"            \"label\": \"Sales Data\",\n" +
			"            \"name\": \"SalesData\"\n" +
			"        }\n" +
			"    ]\n" +
			"}";

	// This is the data we'll be uploading
	static String data = 
			"Name,Amount,CloseDate\n" +
			"opportunityA,100.99,6/30/2014\n" +
			"opportunityB,99.01,1/31/2012\n";

	// This will be the name of the data set in Wave
	// Must be unique across the organization
	static String datasetName = "tester";

	// Change these as appropriate
	static final String USERNAME = "user@example.com";
	static final String PASSWORD = "p455w0rd";

	// Status values indicating that the job is done
	static final List<String> DONE = Arrays.asList(
			"Completed", 
			"CompletedWithWarnings",
			"Failed",
			"NotProcessed"
			);

	public static void main(String[] args) {
		PartnerConnection connection;
		
		ConnectorConfig config = new ConnectorConfig();
		
		config.setUsername(USERNAME);
		config.setPassword(PASSWORD);

		try {

			connection = Connector.newConnection(config);

			System.out.println("Successfully authenticated as "+config.getUsername());

			// Wave time!
			
			// First, we create an InsightsExternalData job
			SObject sobj = new SObject();
			sobj.setType("InsightsExternalData");
			sobj.setField("Format","Csv");
			sobj.setField("EdgemartAlias", datasetName);
			sobj.setField("MetadataJson", metadataJson.getBytes(StandardCharsets.UTF_8));
			sobj.setField("Operation","Overwrite");
			sobj.setField("Action","None");

			String parentID = null;
			SaveResult[] results = connection.create(new SObject[] { sobj });
			for(SaveResult sv:results) {
				if(sv.isSuccess()) {
					parentID = sv.getId();
					System.out.println("Success creating InsightsExternalData: "+parentID);
				} else {
					for (Error e : sv.getErrors()) {
						System.out.println("Error: " + e.getMessage());
					}
					System.exit(1);
				}
			}

			// Now upload some actual data. You can do this as many times as necessary,
			// subject to the Wave External Data API Limits
			sobj = new SObject();
			sobj.setType("InsightsExternalDataPart");
			sobj.setField("DataFile", data.getBytes(StandardCharsets.UTF_8));
			sobj.setField("InsightsExternalDataId", parentID);
			sobj.setField("PartNumber", 1);
			
			results = connection.create(new SObject[] { sobj });
			for(SaveResult sv:results) {
				if(sv.isSuccess()) {
					String rowId = sv.getId();
					System.out.println("Success creating InsightsExternalDataPart: "+rowId);
				} else {
					for (Error e : sv.getErrors()) {
						System.out.println("Error: " + e.getMessage());
					}					
					System.exit(1);
				}
			}

			// Instruct Wave to start processing the data
			sobj = new SObject();
			sobj.setType("InsightsExternalData");
			sobj.setField("Action","Process");
			sobj.setId(parentID);
			results = connection.update(new SObject[] { sobj });
			for(SaveResult sv:results) {
				if(sv.isSuccess()) {
					String rowId = sv.getId();
					System.out.println("Success updating InsightsExternalData: "+rowId);
				} else {
					for (Error e : sv.getErrors()) {
						System.out.println("Error: " + e.getMessage());
					}		    	
					System.exit(1);
				}
			}

			// Periodically check whether the job is done
			boolean done = false;
			int sleepTime = 1000;
			while (!done) {
				try {
					Thread.sleep(sleepTime);
					sleepTime *= 2;
				} catch(InterruptedException ex) {
					Thread.currentThread().interrupt();
				}
				QueryResult queryResults = connection.query(
						"SELECT Status FROM InsightsExternalData WHERE Id = '" + parentID + "'"
						);
				if (queryResults.getSize() > 0) {
					for (SObject s: queryResults.getRecords()) {
						String status = (String)s.getField("Status");
						System.out.println(s.getField("Status"));
						if (DONE.contains(status)) {
							done = true;
							String statusMessage = (String)s.getField("StatusMessage");
							if (statusMessage != null) {
								System.out.println(statusMessage);								
							}
						}
					}
				} else {
					System.out.println("Can't find InsightsExternalData with Id " + parentID);
				}
			}
		} catch (ConnectionException e1) {
			e1.printStackTrace();
		}
	}
}
  • I'm using the WSC with the SOAP Partner API, just because I'm working in Java, and that was what was used in the bits of sample code included in the docs.
  • metadataJson is the metadata that describes the CSV you're uploading. It's optional, but recommended.
  • CSV is the only format currently supported, though the docs reserve a binary format for Salesforce use.
  • The dataset name, datasetName, must be unique across your org.
  • Change USERNAME and PASSWORD to your login credentials.
  • When setting MetadataJson (and, later, DataFile), the API wants base64-encoded data, so you'd likely try encoding the data yourself and passing the resulting string, resulting in an error message. Instead, you have to pass the raw bytes of the unencoded string and let the WSC library sort it out.
  • You can repeat the InsightsExternalDataPart block in a loop as many times as necessary, subject to the Wave External Data API limits - see the sketch below.
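For bigger files, here's a minimal sketch of that loop - my own, untested, with a hypothetical uploadParts() helper. The part size is arbitrary here, so check the External Data API limits for the actual maximum per part; parts are reassembled in PartNumber order, so you can split the CSV bytes at any boundary:

	// Hypothetical helper: upload csvBytes as a sequence of numbered
	// InsightsExternalDataPart records under the given parent job
	static void uploadParts(PartnerConnection connection, String parentID,
			byte[] csvBytes, int partSize) throws ConnectionException {
		int partNumber = 1;
		for (int offset = 0; offset < csvBytes.length; offset += partSize) {
			int end = Math.min(offset + partSize, csvBytes.length);
			SObject part = new SObject();
			part.setType("InsightsExternalDataPart");
			part.setField("DataFile", Arrays.copyOfRange(csvBytes, offset, end));
			part.setField("InsightsExternalDataId", parentID);
			part.setField("PartNumber", partNumber++);
			SaveResult[] results = connection.create(new SObject[] { part });
			for (SaveResult sv : results) {
				if (!sv.isSuccess()) {
					for (Error e : sv.getErrors()) {
						System.out.println("Error: " + e.getMessage());
					}
					System.exit(1);
				}
			}
		}
	}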

You will need the WSC jar, and the SOAP Partner API jar - follow Jeff Douglas' excellent article Introduction to the Force.com Web Services Connector for details on setting this up - use the 'uber' JAR as it contains all the required dependencies. The sample above used Jeff's Partner API sample as a starting point - thanks, Jeff!

The fastest way to get started with Wave is, of course, Salesforce Trailhead. Follow the Wave Analytics Basics module and you'll end up with a Wave-enabled Developer Edition all ready to go.

Once you have your Wave DE org, and the sample app, you should be able to run it and see something like:

Successfully authenticated as wave@patorg.com
Success creating InsightsExternalData: 06V360000008RIlEAM
Success creating InsightsExternalDataPart: 06W36000000PDXFEA4
Success updating InsightsExternalData: 06V360000008RIlEAM
InProgress
InProgress
Completed

If you go look in the Wave Analytics app, you should see the 'tester' dataset:

[Screenshot: the Wave Analytics app showing the 'tester' dataset]

Click on 'tester' and you'll see the 'big blue line':

[Screenshot: the 'tester' dataset's big blue line]

Now you can drill into the data (all 2 rows of it!) by account name, close date etc.

You could, of course, extend the above code to accept a CSV filename and dataset name on the command line, and create all sorts of interesting extensions. Follow the StreamSets blog to learn where I plan to go with this!
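If you fancy trying that, here's a minimal, hypothetical sketch of the command-line version of main(); note that you'd also want to read or generate the metadata, since the hard-coded metadataJson above describes the SalesData columns:

	// Sketch: read the dataset name and CSV file path from the command line
	public static void main(String[] args) throws Exception {
		if (args.length != 2) {
			System.err.println("Usage: java wsc.Main <datasetName> <csvFile>");
			System.exit(1);
		}
		datasetName = args[0];
		byte[] csvBytes = java.nio.file.Files.readAllBytes(
				java.nio.file.Paths.get(args[1]));
		// ...then create the InsightsExternalData job, upload csvBytes as
		// one or more InsightsExternalDataPart records, and set Action to
		// "Process", exactly as in the code above...
	}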

4 March 2016

Thank You For The Music

I joined the developer evangelism team at Salesforce in October 2010, nearly five and a half years ago. It's been a fantastic run, but it's time for me to move on, and today will be my last day with Salesforce.

Over the past few years I've worked with Apex, Visualforce, the Force.com APIs, Heroku, Salesforce Identity and, most recently, the Internet of Things, but, more than any of the technologies, it's the people that have made Salesforce special for me. I've worked with the best developer marketing team in the industry, and the most awesome community of admins and developers.

So, what next? Starting on Monday I'll be 'Community Champion' at StreamSets, a San Francisco-based startup focused on open source big data ingest. I'll be blogging at their Continuous Ingest Blog, speaking at conferences (including GlueCon, coming up in May), tweeting, and learning all about this 'big data' thing I keep hearing about.

Thank you, Salesforce, for my #dreamjob, and all the fun times over the years. It's been a blast!

25 March 2014

Visualforce on Chromecast, as a Service!

After writing my last blog entry, on how to display any Visualforce Page on Google Chromecast, it occurred to me that I could run the app on Heroku. So, if you have a Google Chromecast, and a Salesforce login with API access enabled, you can try it out right now.

Go to https://vf-chromecast.herokuapp.com/; you'll see this page:

[Screenshot: the Visualforce on Chromecast start page]

Follow the instructions, log in, authorize the app to access your data, and you'll be able to select a Visualforce Page to 'cast' to your TV.

[Screenshot: selecting a Visualforce Page]

One new feature here - if you select a Visualforce Page that uses a standard controller, and is thus expecting a record ID as a parameter, you'll get the opportunity to select a record. For simplicity, I'm just showing the first 10 records returned by the database.

[Screenshot: selecting a record]

Choose a record, hit send, and you'll see the page displayed by the Chromecast, in this case, it's a Mini Hack we ran a couple of Dreamforces ago:

[Photo: the Visualforce page displayed on the TV]

As always, the code is on GitHub.

Having done Raspberry Pi, Minecraft, and now Chromecast, I'm looking for new ideas for interesting Salesforce integrations. Leave a comment if you think of one!

21 March 2014

Display ANY Visualforce Page on Google Chromecast

Last time, I described how I ran a simple 'Hello World' application, served from a Force.com Site, on the Google Chromecast, a $35 digital media player. In this blog entry, I'll show you how to show any Visualforce page, not just a public page on a Force.com Site, on the Chromecast.


A quick recap... (Skip this paragraph if you've already read the previous entry). Chromecast is actually a tiny wifi-enabled Linux computer, running the Chrome browser, connected to a TV or monitor via HDMI. A 'receiver' app, written in HTML5, runs on the device, which has no input capability (mouse/keyboard), while a 'sender' app runs on a 'second screen' such as a laptop, smartphone, or tablet, the two apps communicating across the local wifi network via a message bus. The sender app typically allows the user to navigate content and control the media stream shown on the Chromecast (the 'first screen'). The CastHelloText-chrome sample allows the user to type a message in the sender app on the second screen, and displays it on the first screen via the receiver app.

Given a working sample, the next question was, how to access data from the receiver app? The core problem is that the Chromecast can only load a public web page - it can't login to Force.com. The sender app runs on a desktop browser, smartphone or tablet, however, so perhaps it would be possible to login there, and send a session ID to the receiver app via the message bus? I worked through a few alternatives before I hit on the optimal solution:

Load the Visualforce page via Frontdoor.jsp

Frontdoor.jsp, which has existed for some time but has only been formally documented and supported since the Winter '14 Salesforce release, "gives users access to Salesforce from a custom Web interface, such as a remote access Force.com site or other API integration, using their existing session ID and the server URL".

To authenticate users with frontdoor.jsp, you pass the server URL and session ID to frontdoor.jsp in this format:

https://instance.salesforce.com/secur/frontdoor.jsp?sid=session_ID&retURL=optional_relative_url_to_open

Sounds perfect! The only problem is that the session ID you pass to frontdoor.jsp must come from one of:

  • The access_token from an OAuth authentication (obtained with 'web' or 'full' scope)
  • The LoginResult returned from a SOAP API login() call (see the sketch below)
  • The Apex UserInfo.getSessionId()
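To illustrate the second option, here's a quick, unofficial WSC sketch (in the same vein as the Wave code above) that logs in via the SOAP Partner API and builds a frontdoor URL; the /apex/MyPage path is a placeholder:

	// Log in with the SOAP Partner API...
	ConnectorConfig config = new ConnectorConfig();
	config.setUsername("user@example.com");
	config.setPassword("p455w0rd");
	PartnerConnection connection = Connector.newConnection(config);

	// ...then build the frontdoor URL. After login, the config holds the
	// session ID and the SOAP endpoint; the instance URL is the endpoint
	// up to /services/
	String endpoint = config.getServiceEndpoint();
	String instanceUrl = endpoint.substring(0, endpoint.indexOf("/services/"));
	String frontdoorUrl = instanceUrl
			+ "/secur/frontdoor.jsp?sid=" + config.getSessionId()
			+ "&retURL=/apex/MyPage";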

The session ID from a Visualforce page or controller isn't going to cut it here. So, I reached for Kevin O'Hara's excellent nforce and built a quick Node.js sender app that has the user authorize API access via OAuth (including web scope!), runs a query for the list of Visualforce Pages in the org and presents them as a drop-down list. You can choose a Visualforce Page, hit 'Send', and the sender app constructs the frontdoor URL with the OAuth access token and relative URL for the page and sends it to the receiver via the message bus.


Note that, while you can indeed send any Visualforce page to the Chromecast for display, remember that the Chromecast doesn't have any capacity for user input, so tables and charts work best.

I tried a couple of approaches for the receiver app; first I simply redirected to the frontdoor URL, but then I realized that it would be more useful to load the frontdoor URL into a full-page iframe. That way, the receiver app could stay running in the 'top' document, ready to receive a different URL, and periodically reloading the iframe so that the session doesn't time out. Here it is in action:

All of the code is in my CastDemo project on GitHub. Feel free to fork it, extend it, and let me know in the comments how it works out.

When it came down to the code, this was a very straightforward integration; the vast majority of the work was thinking around the problem of how to have a device with no input capability authenticate and load a Visualforce page. Now that Frontdoor.jsp is documented and supported, it's an essential tool for the advanced Force.com developer.

POSTSCRIPT: Almost as soon as I hit 'publish' on this post, I realized I could push the app to Heroku, and allow anyone with a Chromecast and API access to Salesforce to see their Visualforce Pages on TV. Read the next installment here.

7 March 2014

Getting Started with Chromecast on Visualforce

About a month ago, Google released the Google Cast SDK, allowing developers to create apps that run on the Chromecast, a $35 digital media player. The primary use case of Chromecast is to stream media - movies, TV shows, music and the like - via wifi to your HDMI TV/monitor, but, looking at the SDK docs, it became apparent that the Chromecast is actually a miniature ('system-on-chip') computer running Chrome OS (a Linux variant) and the Chrome browser. If it's running a browser, I wondered, could it load Visualforce pages from Salesforce and display, for example, a chart based on live data? If so, this would allow any HDMI-capable TV or monitor to be used as a dashboard at very low cost. When I was given a Chromecast by a colleague (thanks, Sandeep!) in return for alpha testing his app, I decided to find out!

This first blog post explains how I ran a simple 'Hello World' sample on the Chromecast, loading the app from Visualforce. Next time, I'll show you how I pulled data from Salesforce via the REST API and showed it as a chart.

[Photo: the Chromecast device]

Chromecast setup was pretty straightforward - a matter of connecting the device to an HDMI input on my TV and a USB power source, downloading and running the Chromecast app, and following the prompts to complete setup. The Chromecast app locates the device on the local network using the DIAL protocol. Note that, since the app is communicating directly with the device, it won't work on wifi networks that enforce AP/Client Isolation (many offices and hotels).

After installing the Cast Extension for Chrome and verifying that the Chromecast could display content from YouTube, it was time to put the device into development mode! This actually proved to be pretty tricky - you need to enter the Chromecast's serial number into the Google Cast SDK Developer Console. Sounds straightforward, but the serial number is laser etched into the Chromecast's black plastic case in very small type indeed. I entered it incorrectly the first time round, and had to take a photo of the serial number and zoom in to see that the last character was an S and not an 8!

[Photo: the Chromecast serial number]

Another gotcha I encountered is that it's necessary to go into the Chromecast settings (in the Chromecast app) and enable Send this Chromecast's serial number when checking for updates. This information is on a separate page from the device registration instructions, so it's easy to miss.

Now that my Chromecast was showing up in the developer console, it was time to get an app running. Since the Chromecast has no input devices (keyboard, mouse, etc.), a 'receiver app' running in an HTML5 page on the device is controlled by a 'sender app' running on a 'second screen' such as a laptop, smartphone or tablet. The two apps are connected over the local network by a message bus exposed by the Google Cast SDK.

[Diagram: sender and receiver apps communicating via the message bus]

Looking through the samples, CastHelloText-chrome looked like the simplest example of a custom receiver. In the sample, the sender app, running on an HTML5 page in Chrome, allows you to enter a message ('Hello World' is traditional!) and sends it on the bus. The receiver app displays the message, and reflects it back to the sender, to demonstrate the bidirectional nature of the bus.

It was straightforward to convert the vanilla HTML pages to Visualforce - the first change was to wrap the entire page in an <apex:page> tag and remove the DOCTYPE, since Visualforce will supply this when it renders the page.

<apex:page docType="html-5.0" applyHtmlTag="false" applyBodyTag="false"
           showHeader="false" sidebar="false" standardStylesheets="false"
           cache="false">
<!-- <!DOCTYPE html> -->
<html>
...rest of the page...
</html>
</apex:page>

Visualforce doesn't like HTML attributes with no value, so, in chromehellotext, I changed

<input id="input" type="text" size="30" onwebkitspeechchange="transcribe(this.value)" x-webkit-speech/>

to

<input id="input" type="text" size="30" onwebkitspeechchange="transcribe(this.value)" x-webkit-speech="true"/>

Adding the Visualforce pages to a Force.com Site made them public on the web. This is important - the Chromecast can only load public web pages - it has no way of authenticating to a server. You'll find out in the next blog post how I was able to access the Force.com REST API to securely retrieve content.

Once I had a pair of public pages, I registered my sample app, entering the public URLs for my Visualforce pages, and pasted the resulting app ID into the chromehellotext page. Loading that page gave me a text control into which I could type a message. Hitting return to submit the message pops up the Cast device selector.

[Screenshot: the sender page with its text input]

I selected my device from the list, and - 'BAM!' - my message popped up on the TV screen - success!

[Photo: the message displayed on the TV]

One very nice feature of the Chromecast is that it allows remote debugging in Chrome. You can find the device's IP address in the Chromecast app, say 192.168.1.123, and simply go to port 9222 at that address, in my example, http://192.168.1.123:9222/.

[Screenshot: Chrome remote debugging connected to the Chromecast]

You get the usual Chrome developer tools, right down to the ability to set breakpoints and inspect variables in JavaScript - marvelous!

[Screenshot: a breakpoint hit in the Chrome developer tools]

I've published the sample app, so you can try it out yourself. If you have a Chromecast, go to my sender app page; you should be able to connect to your device and send a message.

At this point, I had to do some thinking. The Chromecast, as I mentioned before, loads a page from a public web server. How could I show data on the page, preferably without making the data itself publicly available? Read on to the next post!

Portions of this page are reproduced from work created and shared by Google and used according to terms described in the Creative Commons 3.0 Attribution License.

8 June 2012

Raspberry Pi fix for HDMI to DVI cable issue

My Raspberry Pi arrived this week. After creating a boot image on an SD card I had lying around (using the excellent RasPiWrite utility), I initially booted it up on my TV, using the composite video output - all working!

[Photo: Raspberry Pi in text mode]

After a little exploration from the command line, startx brought up the GUI.

[Photo: Raspberry Pi running X]

As well as the composite video output, the Raspberry Pi supports HDMI. My monitor (a Viewsonic VX2235WM-3) has VGA and DVI inputs, so I ordered the AmazonBasics HDMI to DVI Cable. Connecting up to my monitor, I was disappointed to see no video signal whatsoever - the monitor wasn't seeing the Raspberry Pi at all.

Googling around, I discovered that you can set various configuration options that are read before the Raspberry Pi even boots. With a little experimentation, I found that setting

hdmi_force_hotplug=1

in config.txt solves the problem - I see video output from the moment I power up the Raspberry Pi! This makes sense - the description of hdmi_force_hotplug is "Use HDMI mode even if no HDMI monitor is detected" - I'm guessing the cable is not signalling the presence of a monitor to the Raspberry Pi, so it decides that it doesn't need to send HDMI output.

Watch this space for more Raspberry Pi fun!

15 November 2011

Running Your Own Node.js Version on Heroku

UPDATE (3/3/12) - there's a much easier way of doing this now - see 'Specifying a version of Node.js / npm' in the Heroku Dev Center. The mechanism described below still works, but you should only go to all this trouble if you want something really custom.

Here's a completely unofficial, unsupported recipe for running your own Node.js version on Heroku. These instructions are based on those at the Heroku Node.js Buildpack repository, with some extra steps that I found were necessary to make the process work. Note that buildpack support at Heroku is still evolving and the process will likely change over time. Please leave a comment if you try the instructions here and they don't work - I'll do my best to keep them up to date.

Before you start, update the heroku gem, so it recognizes the --buildpack option:

gem update heroku

(Thanks to 'tester' for leaving a comment reminding me that using an out of date heroku gem can result in the error message ! Name must start with a letter and can only contain lowercase letters, numbers, and dashes.)

Note: If you just want to try out a completely unofficial, unsupported Node.js 0.6.1 on Heroku, just create your app with my buildpack repository:

$ heroku create --stack cedar --buildpack http://github.com/metadaddy-sfdc/heroku-buildpack-nodejs.git

Otherwise, read on to learn how to create your very own buildpack...

First, you'll need to fork https://github.com/heroku/heroku-buildpack-nodejs. Now, before you follow the instructions in the README to create a custom Node.js buildpack, you'll have to create a build server (running on Heroku, of course!) with vulcan and make it available to the buildpack scripts. You'll have to choose a name for your build server that's not already in use by another Heroku app. If vulcan create responds with 'Name is already taken', just pick another name.

$ gem install vulcan
$ vulcan create YOUR-BUILD-SERVER-NAME

Now you can create your buildpack. You'll need to set up environment variables for working with S3:

$ export AWS_ID=YOUR-AWS-ID AWS_SECRET=YOUR-AWS-SECRET S3_BUCKET=AN-S3-BUCKET-NAME

Create an S3 bucket to hold your buildpack. I used the S3 console, but, if you have the command line tools installed, you can use them instead.

Next you'll need to package Node.js and NPM for use on Heroku. I used the current latest, greatest version of Node.js, 0.6.1, and NPM, 1.0.105:

$ support/package_node 0.6.1
$ support/package_npm 1.0.105

Open bin/compile in your editor, and update the following lines:

NODE_VERSION="0.6.1"
NPM_VERSION="1.0.105"
S3_BUCKET=AN-S3-BUCKET-NAME

Now commit your changes and push the file back to GitHub:

$ git commit -am "Update Node.js to 0.6.1, NPM to 1.0.105"
$ git push

You can now create a Heroku app using your custom buildpack. You'll also need to specify the Cedar stack:

$ heroku create --stack cedar --buildpack http://github.com/YOUR-GITHUB-ID/heroku-buildpack-nodejs.git

When you push your app to Heroku, you should see the custom buildpack in action:

$ cd ../node-example/
$ git push heroku master
Counting objects: 11, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (8/8), done.
Writing objects: 100% (11/11), 4.02 KiB, done.
Total 11 (delta 1), reused 0 (delta 0)

-----> Heroku receiving push
-----> Fetching custom build pack... done
-----> Node.js app detected
-----> Fetching Node.js binaries
-----> Vendoring node 0.6.1
-----> Installing dependencies with npm 1.0.105

Dependencies installed
-----> Discovering process types
Procfile declares types -> web
-----> Compiled slug size is 3.3MB
-----> Launching... done, v6
http://strong-galaxy-8791.herokuapp.com deployed to Heroku

To git@heroku.com:strong-galaxy-8791.git
cd3c0e2..33fdd7a  master -> master
$ curl http://strong-galaxy-8791.herokuapp.com
Hello from Node.js v0.6.1

w00t!

Note: Due to an incompatibility between the default BSD tar on my Mac and GNU tar on Heroku, I saw many warnings while pushing my Node.js app to Heroku, of the form

tar: Ignoring unknown extended header keyword `SCHILY.dev'
tar: Ignoring unknown extended header keyword `SCHILY.ino'
tar: Ignoring unknown extended header keyword `SCHILY.nlink'

These are annoying, but benign - the push completes successfully. If you're on a Mac and you want to get rid of them, add the line

alias tar=gnutar

just after the opening #!/bin/sh in both package scripts.

21 October 2011

Quick Update on Planet Identity

Planet Identity (PId) mostly runs itself, thanks to Sam Ruby's excellent Planet Venus; usually, the only maintenance required is to add new subscriptions as folks submit interesting feeds. Very rarely I remove a feed from PId, usually because it's dead, but occasionally because the feed content doesn't quite 'fit' PId. Over the past few days a couple of people mentioned that Dave Kearns' IdM Journal, while a fine selection of links to relevant content, seems out of place amongst the 'primary source' articles at Planet Identity. I agreed, and, Dave having no objection, I've removed IdM Journal from PId. If you want to continue receiving IdM Journal, just point your feed reader at http://feeds2.feedburner.com/idmjournal/LhRB.

Do feel free to leave any suggestions for PId in the comments here, and have a good weekend, identity folk!

14 June 2011

Node.js Chat Demo on Heroku

STOP! If you're just getting started with Node.js and/or Heroku, then go read James Ward's excellent Getting Started with Node.js on The Cloud, then come back here...

Heroku's announcement of the public beta of their new 'Celadon Cedar' stack, including Node.js support, inspired me to try out Ryan Dahl's Node Chat demo server on Heroku. Getting it up and running was very straightforward - I went to GitHub, forked Ryan's node_chat project to my own account and grabbed the source:

ppatterson-ltm:tmp ppatterson$ git clone git://github.com/metadaddy-sfdc/node_chat.git
Cloning into node_chat...
remote: Counting objects: 183, done.
remote: Compressing objects: 100% (72/72), done.
remote: Total 183 (delta 117), reused 168 (delta 110)
Receiving objects: 100% (183/183), 50.07 KiB, done.
Resolving deltas: 100% (117/117), done.

Now I could create my Heroku app...

ppatterson-ltm:tmp ppatterson$ cd node_chat/
ppatterson-ltm:node_chat ppatterson$ heroku create --stack cedar node-chat
Creating node-chat2... done, stack is cedar
http://node-chat2.herokuapp.com/ | git@heroku.com:node-chat.git
Git remote heroku added

...and add the couple of files that Heroku needs to run a Node.js app (see the excellent Heroku docs for more info):

ppatterson-ltm:node_chat ppatterson$ echo "web: node server.js" > Procfile
ppatterson-ltm:node_chat ppatterson$ echo "{ \"name\": \"node-chat\", \"version\": \"0.0.1\" }" > package.json
ppatterson-ltm:node_chat ppatterson$ git add .
ppatterson-ltm:node_chat ppatterson$ git commit -m "Heroku-required files" Procfile package.json
[master a7617af] Heroku-required files
2 files changed, 2 insertions(+), 0 deletions(-)
create mode 100644 Procfile
create mode 100644 package.json

Now everything is ready to deploy:

ppatterson-ltm:node_chat ppatterson$ git push heroku master
Counting objects: 187, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (67/67), done.
Writing objects: 100% (187/187), 50.40 KiB, done.
Total 187 (delta 118), reused 182 (delta 117)

-----> Heroku receiving push
-----> Node.js app detected
-----> Vendoring node 0.4.7
-----> Installing dependencies with npm 1.0.8

Dependencies installed
-----> Discovering process types
Procfile declares types -> web
-----> Compiled slug size is 2.9MB
-----> Launching... done, v4
http://node-chat.herokuapp.com deployed to Heroku

To git@heroku.com:node-chat2.git
* [new branch]      master -> master
ppatterson-ltm:node_chat ppatterson$ heroku ps
Process       State               Command
------------  ------------------  ------------------------------
web.1         starting for 3s     node server.js
ppatterson-ltm:node_chat ppatterson$ heroku ps
Process       State               Command
------------  ------------------  ------------------------------
web.1         up for 6s           node server.js
ppatterson-ltm:node_chat ppatterson$ heroku open
Opening http://node-chat.herokuapp.com/

And, just like that, my chat server is up and running and I see it in my browser. It all works nicely - I can hit the URL from a couple of browsers and see all the chat messages going back and forth. Only one problem, though - I'm seeing errors when the chat server is idle.

A look at the logs reveals that connections are timing out.

2011-06-14T19:10:44+00:00 app[web.1]: <Pat2> Hi there
2011-06-14T19:10:44+00:00 heroku[router]: GET node-chat2.herokuapp.com/send dyno=web.1 queue=0 wait=0ms service=6ms bytes=16
2011-06-14T19:10:44+00:00 heroku[router]: GET node-chat2.herokuapp.com/recv dyno=web.1 queue=0 wait=0ms service=3520ms bytes=102
2011-06-14T19:10:53+00:00 app[web.1]: <Pat> Now I can talk to myself - woo hoo!
2011-06-14T19:10:53+00:00 heroku[router]: GET node-chat2.herokuapp.com/send dyno=web.1 queue=0 wait=0ms service=2ms bytes=16
2011-06-14T19:10:53+00:00 heroku[router]: GET node-chat2.herokuapp.com/recv dyno=web.1 queue=0 wait=0ms service=9185ms bytes=128
2011-06-14T19:10:53+00:00 heroku[router]: GET node-chat2.herokuapp.com/recv dyno=web.1 queue=0 wait=0ms service=9203ms bytes=128
2011-06-14T19:11:24+00:00 heroku[router]: Error H12 (Request timeout) -> GET node-chat2.herokuapp.com/recv dyno=web.1 queue= wait= service=30000ms bytes=
2011-06-14T19:11:24+00:00 heroku[router]: Error H12 (Request timeout) -> GET node-chat2.herokuapp.com/recv dyno=web.1 queue= wait= service=30000ms bytes=

So what's up? The answer is in the Heroku docs for the new HTTP 1.1 stack:

The herokuapp.com routing stack will terminate connections after 60 seconds on inactivity. If your app sends any data during this window, you will have a new 60 second window. This allows long-polling and other streaming data response.

From the logs, it looks like the connection is being dropped after only 30 seconds, but, no matter, the principle is the same - I need to periodically send some data to keep the connections open. The solution I settled on was having each client set a 20 second timer after it starts its long poll; on the timer firing, the client sends a 'ping' message (effectively an empty message) to the server, which, in turn, forwards the ping to all attached clients, causing them to cancel their ping timers and iterate around the long polling loop. Normal chat traffic also causes the timer to be cancelled, so the pings are only sent during periods of inactivity. You can see the diffs here. Now my chat server stays up for hours without an error.

If you grab my fork from GitHub, you'll see I also added message persistence, using Brian Carlson's node-postgres module - mostly because I just wanted to see how easy it was to access PostgreSQL from Node.js on Heroku. The answer? Trivially easy 🙂 As Jeffrey mentions in the comments, apart from those code changes, I also needed to add the 'pg' module in package.json and the shared-database addon. The new package.json looks like this:

{
  "name": "node-chat",
  "version": "0.0.1",
  "dependencies": {
    "pg": "0.5.0"
  }
}

The command to install the shared-database addon is:

heroku addons:add shared-database

Disclosure - I am a salesforce.com employee, so I'm definitely a little biased in favor of my Heroku cousins, but, I have to say, I remain hugely impressed by Heroku. It. Just. Works.

14 June 2011

Superpatterns Reboot

You'll probably have noticed that things have been pretty quiet here at Superpatterns these past few months - mainly because the Force.com blog has been the outlet for my work-related blogging. If you've been coming here in the past for the identity-related content, you might be interested in some of my posts there.

Some topics just don't fit into the main 'flow' over at Force.com, though, so I'll start blogging them here and flag them from Force.com from time to time. Tune in later today for some Node.js goodness...
