
OAuth2 and LabVIEW — Part Three, Improving the Example

This is part three of a nine-part blog post (it started out as three parts) where I describe how to use OAuth2 with LabVIEW. See also:

In part one, we created a web service that the authentication process is going to use to call us back with an authentication code. In part two, we wrote code to go through the authentication process and call an example Google web service. Here in part three, we’re going to start writing some tests, replace the JSON parser, and think about what else we could do to improve the example.

Improvements for Testing

I’m usually not a Test-Driven Development (TDD) kind of guy.  That’s where you write unit tests first, show that they fail, and then write the code that passes the tests.  I’ll sometimes use TDD when I clearly know what the right answer for a function ought to be.  For example, I once used TDD for figuring out a SQL query to a database—I could look at the database and deduce what I wanted the query to return, so I wrote a test for that.  Then I iterated until I got a SQL query that returned the right answer, and then iterated until it was efficient.

In our OAuth2 case, I decided from the start not to use TDD.  I wasn’t as confident that I knew the “right answer” to each step.  But I still kept testability in mind as I went along, and I wasn’t afraid to go back and write tests (and refactor for testability) after the first pass of the app was “done”.

It’s worth talking about the structure of the C# example that I started from.  It was badly structured for testability.  Here’s a high-level pseudo-code overview of this structure:


  1. Call doOAuth()
     1. Call the Authorization endpoint and parse the response
     2. Call performCodeExchange()
        1. Call the token endpoint and parse the response
        2. Call userInfoCall()
           1. Call the userinfo endpoint and parse the response

My complaint about this is that it’s difficult to test anything but userInfoCall() in isolation.  If I write a test for performCodeExchange(), it will also test userInfoCall(), whether I want it to or not.  There are ways around this—there’s a concept called “mocks”, where you can “mock” the behavior of userInfoCall() inside of performCodeExchange().  Sam Taggert wrote a few blog posts on this topic.
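To make the “mock” idea concrete, here is a rough sketch in Python (not LabVIEW) of what mocking the nested call looks like. The function names are borrowed from the C# example; the bodies are stubs, not real HTTP calls.

```python
# Sketch of the "mock" workaround for the nested C#-style structure,
# using Python's unittest.mock purely for illustration.
from unittest import mock

def user_info_call(token):
    # Stand-in for the real userinfo-endpoint call.
    raise RuntimeError("would hit the real userinfo endpoint")

def perform_code_exchange(code):
    token = f"token-for-{code}"   # stub for the real token-endpoint call
    return user_info_call(token)  # nested call we don't want in the test

def test_perform_code_exchange():
    # Patch out the nested call so only the code under test actually runs.
    with mock.patch(f"{__name__}.user_info_call",
                    return_value={"ok": True}) as m:
        result = perform_code_exchange("abc")
    assert result == {"ok": True}
    m.assert_called_once_with("token-for-abc")
```

The mock works, but notice the extra machinery: the test has to know about, and intercept, an internal call of the function under test.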

When I rewrote the C# code in LabVIEW, I chose a different structure:


  1. Call AskPermission()
     • Call the Authorization endpoint and parse the response
  2. Call PerformCodeExchange()
     • Call the token endpoint and parse the response
  3. Call UserInfo()
     • Call the userinfo endpoint and parse the response
  4. Call GetPhoto()
     • Retrieve the user’s photo and decode it

I believe this makes the code more cohesive, less coupled, and easier to test.  I can test PerformCodeExchange(), for example, without also calling UserInfo() and GetPhoto().
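The restructured flow can be sketched as four independent steps under a thin orchestrator. Again this is an illustrative Python translation — the names mirror the VIs above, and the bodies are stubs rather than the real HTTP calls.

```python
# Python sketch of the restructured flow: four independent steps plus a
# thin orchestrator.  The bodies are stubs, not real endpoint calls.
def ask_permission() -> str:
    return "auth-code"               # stub: hit the authorization endpoint

def perform_code_exchange(code: str) -> str:
    return f"token-for-{code}"       # stub: hit the token endpoint

def user_info(token: str) -> dict:
    return {"picture": "photo-url"}  # stub: hit the userinfo endpoint

def get_photo(url: str) -> bytes:
    return b"..."                    # stub: retrieve and decode the photo

def main() -> bytes:
    code = ask_permission()
    token = perform_code_exchange(code)
    info = user_info(token)
    return get_photo(info["picture"])
```

Because no step calls the next one, a test can exercise perform_code_exchange directly — no mocks needed to keep user_info and get_photo out of the picture.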

One of my other mild regrets in this code was how I created some “passthrough” arguments in AskPermission:

Both redirect uri out and client id out are exact copies of their respective inputs.  This leads to usage that looks like this, which takes advantage of the default input values for redirect uri and client id.

If I were looking at this code for the first time, I would assume that there’s some sort of coupling between AskPermission and PerformCodeExchange with respect to these parameters. There’s not; they just share the need for a common redirect URI and client ID.

Consider this alternate approach:

Here, it’s more obvious that those input parameters are exactly the same.  I know that—at least with respect to these two parameters—I don’t need to call AskPermission to generate those values for the PerformCodeExchange function.  Both ways work, but I prefer the style of the second one, so I changed it in one of my later versions of the code.

Frustration with the Unit Test Framework

Okay, so time to add some unit tests.  Since I had it installed, I went ahead and tried NI’s Unit Test Framework, which comes with the Professional version of LabVIEW.  It’s been a few years since I used it, so I started by right-clicking on My Computer in the LabVIEW Project and created a new unit test:

That was soooooo the wrong thing to do!!  The Unit Test Framework (UTF) didn’t work if I did this.  It created a new “Untitled.lvtest” file in my project, which I then configured, set up test cases in, and even ran.  The problem came when I clicked “OK” on the configuration dialog: all my configuration was lost, and the “.lvtest” file reverted to its unconfigured state.

Fortunately, Fabiola De la Cueva pointed me at her excellent NIWeek video series on Unit Testing, and to a section of the book she co-wrote with Richard Jennings, LabVIEW Graphical Programming, 5th Edition.  The trick is to right-click on the VI you want to unit test, and create a unit test from there:

I’m not sure why the other way doesn’t work—I’ve asked NI to explain.  But this way has been working for me, now that I know about it.

Update: I worked with NI to debug this. It turns out that the Unit Test Framework doesn’t work correctly with UNC filenames, such as \\server\docs\My Project. If I had stored my project on the C: drive, it would have worked.

Because unit tests are, you know, for testing units of code, I like to start at the bottom of my VI hierarchy and work my way up.  In my case, the simplest VI I have is the one that computes a secure hash.  The algorithm is documented by NIST, so I went to their documentation to find some test vectors, and that’s what I configured the unit test to do.  I set up five test cases for strings such as “abc”, the empty string, and progressively longer strings up to a million letter a’s.  (There’s an even longer test case that NIST defines, but even 64-bit LabVIEW can’t handle a string that’s 2³³ characters long.)
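For reference, the same style of test can be sketched in a few lines of Python, assuming the VI implements SHA-256 (the post doesn’t name the exact hash, so that’s my assumption). These three digests are the well-known NIST example vectors.

```python
# NIST SHA-256 example vectors: "abc", the empty string, and a million
# letter a's.  Python's hashlib plays the role of the VI under test.
import hashlib

VECTORS = {
    b"": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    b"abc": "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
    b"a" * 1_000_000:
        "cdc76e5c9914fb9281a1c7e284d73e67f1809a48a497200e046d39ccc7112cd0",
}

def check_vectors() -> bool:
    for message, expected in VECTORS.items():
        assert hashlib.sha256(message).hexdigest() == expected
    return True
```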

Next, I wrote a unit test for the base64url encoder, the next simplest VI.  Again, it’s a well-known algorithm, so I searched for some existing test vectors on the web, and am using them in my unit test.
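The base64url encoder can be checked the same way. Here’s a sketch using the RFC 4648 test vectors, with the “=” padding stripped the way OAuth2/PKCE requires — a Python illustration, with the blog’s VI being the real unit under test.

```python
# base64url encoding per RFC 4648, with padding stripped (as used in
# OAuth2/PKCE code challenges).
import base64

def base64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# RFC 4648 test vectors, minus the padding:
assert base64url_encode(b"") == ""
assert base64url_encode(b"f") == "Zg"
assert base64url_encode(b"foobar") == "Zm9vYmFy"
# Bytes that exercise the URL-safe substitutions ('-' and '_'):
assert base64url_encode(b"\xfb\xff") == "-_8"
```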

If I were super diligent, I would write unit tests for everything.  But I do think there’s a diminishing return.  I strongly believe that “some unit tests” is way better than “no unit tests”, but an exhaustive set of unit tests is, well, exhausting.  As an aside, a long, long time ago, I created the first LabVIEW test suite internal to LabVIEW R&D, and took the same philosophy.  Testing “all of LabVIEW” is daunting and insurmountable, so we got started by testing “some of LabVIEW”, and let it grow from there.  We focused on the fundamentals—“hey, somebody broke ‘add’!”—as well as the areas of the code that commonly broke, which at the time, was almost anything to do with arrays of Booleans.

Suffice it to say that there is plenty more opportunity to write unit tests, such as for all the parsers I wrote.  Maybe I’ll get to them soon.

Speaking of Parsers

Recall from above that I wasn’t happy with the JSON parser that’s built into LabVIEW.  To see why, let’s revisit the “Parse Token” VI, which I wrote with the built-in JSON parser…

This VI takes JSON as an input and converts it to a Map.  Note that I had to define a cluster that had the same names and datatypes as the JSON to parse.  For web services, this isn’t realistic.  For example, if the Token web service call fails, it returns JSON like this:

   {
     "error": "invalid_grant",
     "error_description": "Bad Request"
   }

To handle that, I’d have to create another cluster and another call to “From JSON”, along with another call to Build Map.  It’s not terrible, but it seems harder than it needs to be.
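A schema-free parser sidesteps the extra cluster entirely: one lookup handles whichever shape arrives. Here’s the idea in Python (the field names come from the examples above; the token value is made up).

```python
# With a schema-free parser, one code path inspects whichever response
# shape arrives -- no per-shape cluster definitions needed.
import json

success = '{"access_token": "ya29.example", "token_type": "Bearer"}'
failure = '{"error": "invalid_grant", "error_description": "Bad Request"}'

def parse_token_response(raw: str) -> str:
    doc = json.loads(raw)  # no fixed cluster/schema required
    if "error" in doc:
        raise RuntimeError(
            f'{doc["error"]}: {doc.get("error_description", "")}')
    return doc["access_token"]
```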

I think it’s time to replace it with something better.  I searched in VIPM and on the web for alternatives, and there are several. There’s one called JSONtext that has an installation link from the LabVIEW palette menus…

JSONtext is written by Dr. James D. Powell, a LabVIEW Champion, Certified LabVIEW Architect, and contributor of several useful tools.  With those credentials, it seems like a good place to start.

After installing JSONtext, it looks powerful and daunting—especially compared to the built-in To/From JSON functions…

My first thought is to wrap some part of JSONtext into a layer that will convert the JSON to a Map, since that’s the path I went down with From JSON.  But I soon realize that the API’s approach is to just treat the JSON string like a Map, and provide functions that make it easy and efficient to look things up in the JSON string.

With the Map approach, my top-level VI looked inside the Map (with the aptly named “Look In Map” function) to find the access_token for the API call:

If I delete my “Parse Token” and just pass out the raw JSON string, I can call JSONtext’s “Find Item” function as a drop-in replacement:

Sweet! I like deleting code and keeping the same functionality!

I then made similar edits and deleted “Parse User”.  Great!

Wait! It’s not working.  There’s a subtle difference between the two JSON parsers.  The built-in one stripped quotes around strings, and JSONtext’s “Find Item” does not.

But wait!  There’s another JSONtext VI which does what I want.  It’s in the palette menu shown above, and is called “Find Item (as LVtype)”.  If I tell this VI that I want a string, it will strip the quotes from it.
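The quote-stripping difference is easy to illustrate outside of LabVIEW: a raw textual lookup hands back the JSON token with its quotes, while a typed lookup decodes it into a plain string. This is a rough Python analogy, not the JSONtext API itself.

```python
# Raw textual extraction vs. typed extraction of a JSON string value.
import json

raw = '{"access_token": "abc123"}'

# A raw textual lookup (in the spirit of "Find Item") returns the JSON
# value as-is, quotes included:
raw_value = raw.split(":", 1)[1].strip(" {}")

# A typed lookup (in the spirit of "Find Item (as LVtype)" asked for a
# string) decodes it, stripping the quotes:
typed_value = json.loads(raw)["access_token"]
```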

Here’s the top-level diagram showing the two HTTP calls that return JSON, and how simple it is to parse strings out of that JSON.  Thanks, Dr. Powell!

What’s Next?

How else should I improve this example?

  • The main thing that bothers me is not having a more elegant solution to communication between the web server callback and the rest of the application. I’ll see what ideas you readers come up with.
  • I could always write more unit tests.
  • I want to try to build a standalone app, so that I can programmatically start the web service. I somehow suspect that I’ll run into challenges here.
  • Of course, the whole point of this example is to start using it. There is a universe of web services out there to take advantage of.  When I have some interesting tools to share, I’ll create a new post.

Thanks for reading!  I look forward to your comments and questions!
