Aristos Queue on Inlined VIs and Memory Usage
Our first guest blog post… we’re excited to have a short write-up from Aristos Queue (known in real life as Stephen Mercer, one of our Senior Software Engineers in LabVIEW R&D).
A rumor has reached my ears. A false rumor about LabVIEW, inlining subVIs and buffer allocations. A rumor in need of quashing.
Several experienced G programmers at NI met to discuss what we had learned at NIWeek. At this meeting, one of our number expressed his excitement about the Inline subVI into calling VIs option (found in the VI Properties dialog). First he talked about the performance improvements. Then he talked about the memory usage improvements. To demonstrate, he turned on Tools >> Profile >> Show Buffer Allocations and showed buffer allocations on a number of subVIs. Then he turned on inlining for those subVIs and refreshed the buffer allocations. We witnessed a miracle: all of the buffer dots disappeared on all inlined subVIs. To him, any extra memory that inlining a subVI might allocate at launch was an acceptable price for eliminating all the jitter from memory allocations at run time. Many heads in the room were nodding in excitement; they would all be using inlining heavily on real-time (RT) projects in the future.
I, too, was impressed with this trick at first… but the more I looked at his diagram, the more I realized this had to be a bug. I went to the LabVIEW compiler gurus to confirm my suspicion. Sure enough: the buffer allocation dots simply do not draw on inlined subVI nodes, although the buffers are still allocated. In one sense, this is a minor bug. The VIs still function correctly, nothing is broken in the compiler… this is merely a cosmetic bug that the little black dots aren’t being drawn. In another sense, it is a major bug because of what it teaches our users. Users rely on the dots to do the memory analysis of their VIs; they need our buffer allocation dots to tell them what LabVIEW is doing, and on RT, those dots show them where jitter is occurring. A feature that eliminates all the dots? That’s solid gold even if it does make the code size bigger!
I hate to be a killjoy. I hate telling you that the vision of no dots on your block diagram is just an illusion. But if I let you wander around in this dream state, you’ll end up bashing your head on something (probably your keyboard as you are trying to debug where the jitter in your RT application is coming from). Therefore, I have to say this: Buffer allocation dots do not draw on inlined subVI nodes. This is a bug in LabVIEW 2010 and 2011. It has been filed to be fixed in a future LabVIEW version. I don’t even have a good workaround for you to ease your transition back to the waking world. The best I can offer is this: Inlining a subVI almost never removes the need for a buffer allocation, so if you look at the buffer allocations with inlining turned off, those buffer allocations are probably still there when you turn inlining on. Do your “hide the dots” work with inlining off, and when you’re satisfied with that, turn inlining on for the performance benefits.
I hope you had a pleasant sleep.
— Aristos Queue