In part one of this 2020 update, I began a journey of updating my OAuth2 example to use a new feature in LabVIEW 2020: the new hash function that supports SHA-256, among other algorithms.
Where I left off, I needed to modify the output of the new VI to create a byte array instead of converting it to a lowercase hex string.
I proposed three choices:
- Ignore the 2020 VI and just use the .Net implementation I used in 2019.
- In my SHA256.vi, add code after Byte Array Checksum.vi to convert the hex string back into a binary array.
- Make my own copy of Byte Array Checksum.vi and remove the subVI which converts to a lowercase string.
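All three options revolve around the same distinction: a SHA-256 digest is 32 raw bytes, while Byte Array Checksum.vi hands back the lowercase hex rendering of those bytes. As a rough textual analogy (Python's hashlib standing in for the LabVIEW VIs here, not the actual implementation), the two forms look like this:

```python
import hashlib

message = b"example input"

# What I actually need downstream: the 32 raw digest bytes
digest = hashlib.sha256(message).digest()

# What the 2020 VI effectively returns: a 64-char lowercase hex string
hex_str = hashlib.sha256(message).hexdigest()

assert len(digest) == 32
assert len(hex_str) == 64
# The hex string is just a re-encoding of the same bytes
assert bytes.fromhex(hex_str) == digest
```

Option #2 undoes that final hex encoding after the fact; option #3 removes the encoding step from the VI itself.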
Which one did you choose? I decided to try all three. I already had #1, since it was the 2019 version. Here’s a quick and dirty implementation of #2, where I convert the hex string back to a byte array.
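For readers without LabVIEW handy, the conversion in #2 amounts to this (a minimal Python sketch of the same round-trip; `hex_to_bytes` is a hypothetical name, not a VI from my project):

```python
def hex_to_bytes(hex_str: str) -> bytes:
    """Undo the lowercase-hex formatting: two hex characters per byte."""
    return bytes.fromhex(hex_str)

# Example: four bytes rendered as eight hex characters, and back again
assert hex_to_bytes("deadbeef") == b"\xde\xad\xbe\xef"
```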
And here’s an implementation of #3, where I went and found the VI that called Bytes to Lowercase Hex String, made a copy of it, and removed the subVI call. I replaced it with a straight Byte Array to String.
What do you think so far? There are things I dislike about both #2 and #3.
- In #2, it seems wasteful to convert it to ASCII, and then convert it back. These aren’t large strings, but it just seems like a hack.
- In #3, I dislike the idea of modifying a vi.lib VI, especially one that's not on the palettes.
I’m leaning towards #3, because it feels like the right implementation, even if it violates the “don’t mess with vi.lib VIs” principle.
Before I commit to a solution, let's run the unit tests on each. The results for #1 pass with flying colors, of course. The results for both #2 and #3, though, fail. And I thought this was going to be easy. Keep reading below…