Let’s Talk About LabVIEW Units, Part 1
First, let’s start with a poll…
I’m a fan of Units in LabVIEW, but I totally understand that I’m in the minority. I’m writing this blog post to perhaps persuade at least a few of you to give them a try.
What are they?
A “Unit” in LabVIEW is a label you can add to a floating-point number to tell LabVIEW what physical quantity the number represents.
For example, I can assign the unit “degC” to a front panel control, to denote that the value is Degrees Celsius. Similarly, “degF” is Degrees Fahrenheit. If I wire a control representing degC to an indicator representing degF, when the VI runs, it automatically converts the value for me. I don’t have to remember F = C * 9 / 5 + 32.
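Since LabVIEW is graphical, I can't show the wiring in text, but here's a tiny Python sketch of the arithmetic LabVIEW is doing for you behind the scenes:

```python
# Sketch (not LabVIEW code) of the conversion LabVIEW applies
# automatically when a degC control is wired to a degF indicator.
def c_to_f(c):
    """Degrees Celsius -> degrees Fahrenheit."""
    return c * 9 / 5 + 32

print(c_to_f(100))  # 212.0
print(c_to_f(0))    # 32.0
```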
LabVIEW knows about lots of physical units, using the International System of Units (SI). It also knows about prefixes like “milli”, “centi”, “kilo”, etc. If you’d like to explore them, you can right click on a unit label, and select “Build Unit String”.
This brings up a dialog to help you create a valid unit string.
Note that you can do arithmetic on units. For example, I can represent velocity as “m/s” (meters per second), or “mi/h” (miles per hour), or “cm/ms” (centimeters per millisecond).
Similarly, I could have exponents on units, such as m/s^2 to represent acceleration in meters per second squared.
In the example above, I hardcoded the unit as m/s, but I could also have taken a control of unit “meters” and divided it by a control of unit “seconds”, and produced a result that was velocity:
Many of the built-in LabVIEW functions understand units. The unit is part of the LabVIEW data type, which also means that LabVIEW will make sure that any arithmetic you do on units is valid. For example, I can’t add meters and seconds, because they are different base SI units (length vs. time).
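The bookkeeping behind that check can be sketched as a toy model in Python (this is my own illustration, not LabVIEW's actual implementation): represent a unit as a tuple of exponents over the seven base units, require identical tuples for addition, and add or subtract exponents for multiplication and division.

```python
# Toy model of compile-time unit checking. A unit is a tuple of
# exponents over the base SI units (s, m, kg, A, K, mol, cd).
SECOND = (1, 0, 0, 0, 0, 0, 0)
METER  = (0, 1, 0, 0, 0, 0, 0)

def add_check(u1, u2):
    # Addition is only valid between identical dimensions.
    if u1 != u2:
        raise TypeError("incompatible units")
    return u1

def mul(u1, u2):
    # Multiplication adds exponents.
    return tuple(a + b for a, b in zip(u1, u2))

def div(u1, u2):
    # Division subtracts exponents.
    return tuple(a - b for a, b in zip(u1, u2))

VELOCITY = div(METER, SECOND)   # (-1, 1, 0, 0, 0, 0, 0), i.e. m/s
```

Dividing a meters value by a seconds value yields the velocity dimension automatically, while adding them raises an error, which is exactly the behavior described above.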
Note that you can change the units on the front panel while a VI is running, as long as you enter a compatible unit. For example, in the VI above, I could change “m/s” to any velocity unit (such as “km/d” for kilometers per day) at runtime. Because the unit is part of the data type, you can’t change to an incompatible unit (e.g., change velocity to length) unless the VI is editable.
We’ve seen one rough edge to units so far: almost nobody writes “miles per hour” as “mi/h”. It’s more accepted in the USA to write “mph”. Or what if I wanted to be more verbose, and say “miles/hour”? Well, too bad. The LabVIEW Unit system doesn’t support that. There’s a long list of feature requests for making LabVIEW Units better and more customizable.
By the way, most of the decision makers about NXG saw no value in LabVIEW Units (because they didn’t use LabVIEW for much real-world work), so I think the plan was that NXG would never support Units. That made me sad. But fortunately, that resolved itself when NXG was retired.
How do Units work?
Under the hood, the system uses the base SI units. There are seven base SI units:
- second (time)
- meter (length)
- kilogram (mass)
- ampere (current)
- kelvin (temperature)
- mole (amount of substance)
- candela (luminous intensity)
That’s it. Everything else can be represented based on those. For example, here’s what one volt looks like in SI units:
1 V = 1 kg m^2 s^−3 A^−1
(Fortunately, LabVIEW does have “V” as shorthand for volts.)
To be clear, the data flowing down the wire is in those base units. For example, if I have a control set to 0 degC, the value on the wire is actually 273.15 kelvin. Consider our earlier example:
Here, the 100 degrees Celsius in the display is converted to 373.15 kelvin on the wire, which flows to the indicator. The indicator then converts kelvin to degrees Fahrenheit. Most of the time, you don’t have to care that this conversion is happening.
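Here's that round trip as a Python sketch (again, just an illustration of the wire behavior):

```python
# The control converts its display value to the base unit (kelvin);
# the indicator converts from kelvin to its own display unit (degF).
def degc_to_kelvin(c):
    return c + 273.15

def kelvin_to_degf(k):
    return (k - 273.15) * 9 / 5 + 32

wire = degc_to_kelvin(100)              # 373.15 kelvin on the wire
print(round(kelvin_to_degf(wire), 6))   # 212.0
```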
Units seem kind of cool. How come they aren’t more popular?
Two main reasons:
- Most people don’t know about them, or understand how to use them, or get confused when they don’t do what they expect
- They are hard to use in reuse libraries (including vi.lib), limiting where you can use them
This blog post is trying to address the first of these. My main piece of advice is to understand units well enough that you aren’t fighting with them. Understand where they work, and where they don’t.
The second limitation is very real, and doesn’t have an elegant solution. For example, suppose you want to use the “Standard Deviation and Variance” VI from vi.lib with an array of lengths. You’ll get a broken wire, because that VI isn’t configured to use units.
One way to solve this is to remove the units before calling the subVI. There’s a function called “Convert Unit”, where you tell it which unit the resulting unitless value on the wire should be expressed in. For example, if I have an array of values in feet (“ft”) where I want to compute the mean and standard deviation, I can do this:
The wire before “Convert Unit” has values in the base SI unit of meters. By telling it I want “ft”, it converts the values on the wire from meters to feet by multiplying by 3.28084 before calling the subVI.
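In Python terms, the scaling that Convert Unit performs looks something like this sketch (the only assumption is the standard 0.3048 m/ft conversion factor):

```python
# "Convert Unit" with target "ft": the wire carries base-SI meters;
# stripping the unit as feet divides by 0.3048 m/ft, i.e. multiplies
# by ~3.28084. The result is plain unitless doubles for the subVI.
M_PER_FT = 0.3048

def strip_as_feet(values_m):
    return [round(v / M_PER_FT, 6) for v in values_m]

print(strip_as_feet([0.3048, 3.048]))  # [1.0, 10.0]
```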
Note that Convert Unit also works in the other direction, to add a unit to a data type. For example, I could take the mean and put the length unit back on it.
I could have done the same for the Standard Deviation, which in this case, would also be in feet. But note that the variance output should actually be converted to feet squared.
Now, I could have written the Standard Deviation and Variance VI to use a special unit feature called Polymorphic Units. Below, I’ve done a “Save As” on that VI and created my own copy. I can assign the placeholder unit “$1”, and at runtime, it’ll adapt to the units of the value on the wire. (If I had other inputs with different units, I could use $2, $3, etc.)
Note that all I had to do was add a unit label to the controls and indicators, and put $1 for the unit of the input array, mean, and standard deviation, and put $1^2 in the unit label for variance. The block diagram adapted to this change automatically.
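Conceptually, a polymorphic-unit VI is just a generic computation: the math runs on the base-SI values, and the $1 / $1^2 labels only constrain how the types line up. A rough Python analogue (using the population formulas here; the actual vi.lib VI may use sample statistics):

```python
# Generic mean / standard deviation / variance. If the inputs carry
# unit $1, the mean and std dev are also $1, and the variance is $1^2
# (the unit's exponents doubled), regardless of which unit $1 is.
def mean_std_var(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n   # unit: $1^2
    return mean, var ** 0.5, var

m, s, v = mean_std_var([1.0, 2.0, 3.0])
print(m)  # 2.0
```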
Now, I can wire an array of lengths to the subVI without having to call Convert Unit. It’s actually computing the mean on the base SI unit of meters, and only converting to feet for the display.
This same subVI would adapt to any units I wired in. Here, for example, I have an array containing values in Hertz. (s^-1):
I am calling the exact same subVI with polymorphic units.
In theory, we could add polymorphic units to every VI in vi.lib and in every reuse library in VIPM. Why didn’t we? A few reasons:
- It’s kind of a pain. You have to think through every subVI and how it should use units.
- There’s a slight performance hit to adding unit handling to every VI. Since most people don’t ever use units, we felt we should optimize for the non-unit case instead.
- Due to the way polymorphic units are implemented, where they don’t really know their units, there is at least one bug lurking nearby. I won’t get into the details (because I’d have to contrive a VI which showed the issue), but I recall that Celsius and Fahrenheit temperatures have issues with polymorphic units in certain circumstances. It has to do with Kelvin, Celsius, and Fahrenheit not having the same zero value. This is unlike every other SI unit, such as 0 meters = 0 feet = 0 miles.
Because of all of this, it’s usually only practical to use units in the few places where they can save you from writing a bunch of code. (Such as writing that Celsius to Fahrenheit conversion algorithm again!)
In part two, I’ll show you some examples of where I’ve used units in different applications I’ve been working on. Once you understand what units are good for and where the rough edges are, they are a really handy tool.
Stay tuned. Comment below if you have any good unit stories.
I really like the LabVIEW units feature. One thing I think would be helpful would be to be able to define your own base units. I wanted a unit “count” for counts from an encoder, so I could use scaling factors of “count/mm” or “count/s”. Technically, this is a “unitless” unit (like radian), so I could have scaling factors with units of “s^-1” or “mm^-1”; but having the base unit is easier to read and understand. Another example might be a “tick” from a clock.
Thanks for the comment, Jeff. Interestingly, “ticks” (of an 80MHz clock) was one of the things used in the Excel spreadsheet mentioned in part 2 of this blog post. I ended up making “tick” values unitless, even though they represent time. But I also changed my app to not use “ticks” as much, in favor of floating point values with time as their unit.
User-defined units was definitely a feature we had on the list to figure out. One big challenge is how to make them shareable/deployable. Are they saved in an INI file that goes with your VIs? Are they saved with the project?
In the grand scheme of things, improving the unit subsystem rarely bubbled up in priority to get much attention in any release, and I think that was probably a fine decision. I’m glad they work as well as they do. They are one of Jeff K’s pet projects.
This was useful – did not know this. Thanks! I am often dealing with timeseries 2D array data where each column is a parameter. The file has many columns and all kinds of data (mass, velocity, flow). A customer might get data from a third-party sensor in this file that is not the unit we expect – for example, kg/hr instead of g/s. Is there a schema or method to make it easy for the customer to create additional columns with converted units? We provide an EXE (and not VIs) to customers. I could code a generic tool with the most frequent offenders, but before I do that, I wonder if this has already been solved elegantly.
Hi, Gurdas.
I’m not sure I completely understand the user experience you are describing.
If you’re reading from a file, you would probably use something like the “Convert Unit” function to tell LabVIEW how to interpret the data coming in and apply units to it.
In case it wasn’t clear from my post, there’s a difference between “compile time” units and “display time” units. The unit that is attached to the data type of a wire is a “compile time” unit. Because LabVIEW is strongly typed, you can’t change the unit of the wire without recompiling.
In a built executable, you (perhaps obviously) can’t change “compile time” things, but you can change “display time” things. So, for example, if a computation or indicator uses velocity (in SI units, that’s m/s), then it’s baked into the application to always be velocity. You can change the control or indicator to be km/h, mi/s, mm/ms, etc., but you can’t suddenly change it to be acceleration (m/s^2). In internal LabVIEW terminology, we used the term “compatible units”. So all ways of expressing velocity would be “compatible units” that could be changed at runtime, because it didn’t affect the internal computations that always used the fundamental SI m/s encoding under the hood.
I think this means that it is not feasible to add new columns that don’t have predetermined units, because you don’t know those columns exist when you build your program. However, if you know that a column is always going to be a unit compatible with g/s, but you don’t know exactly what it is going to be, you could use a case structure to encode the most common encodings, such as kg/hr and g/s.
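In Python terms, that case structure might look like the following sketch (the unit strings and conversion factors here are just my illustrative choices; you'd fill in whichever encodings your sensors actually emit):

```python
# Map the unit strings you expect to see in the file header to a
# factor that converts the column into g/s.
FACTOR_TO_G_PER_S = {
    "g/s":   1.0,
    "kg/hr": 1000.0 / 3600.0,   # 1 kg/hr = 1000 g per 3600 s
    "kg/s":  1000.0,
}

def column_to_g_per_s(values, unit):
    if unit not in FACTOR_TO_G_PER_S:
        raise ValueError(f"unexpected unit: {unit!r}")
    factor = FACTOR_TO_G_PER_S[unit]
    return [v * factor for v in values]

print(column_to_g_per_s([3.6], "kg/hr"))  # ~[1.0]
```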
I hope this helps explain things and points you in the right direction. Feel free to reach out to me directly if you’d like to discuss it more.