Channel: All LabVIEW posts
Viewing all 204074 articles
Browse latest View live

Re: Chart History Length in Loop


 wrote:

I don't see any reason why your chart wouldn't show more points once you have set the X-axis to autoscale.

 

What I do see is that your VI will execute the For Loop 5 times, and that will be as fast as the DAQ Assistant can return a data point.  If you keep hitting the run button, you'll see more data points in groups of 5.


I Owe RF a beer.

 

I think you are tricking yourself.  Change the plot line style to something that shows points, like this:

Capture.png

You can verify that the history length is increasing with a simple bit of additional code like this:

Capture1.png

 @ Tim, the MyDAQ DMM will pace the loop just fine.  I had to simulate one to test the code, but... that is how data acquisition works... you can't get values faster than the readings are converted in hardware.  (It's Monday, and you need more coffee.)


Re: Continuous Power spectrum density at 0.25 Hz resolution


 wrote:

Thanks Bob, that clarifies -Inf values. And also seems logical.

 

One query still remains: why aren't there -Inf values for 'Expected Output'?

 


Please forgive me, but that's the "wrong question".  Almost all of the values in Expected Output are -Inf (as are almost all the values in the other outputs), except for rounding errors (note that -400 dB represents a value that is about as close as you can come to floating-point zero, with a few low-order bits not quite cancelling out).  If you do the "clipping" trick that fixed up the Actual plots, you'd see that the "Expected" plots look the same.
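A quick sketch in (text-based) Python, since LabVIEW code can't be shown inline, of how a tiny uncancelled residue lands near -400 dB while exact cancellation gives -Inf; the 1e-40 residue is a made-up value:

```python
import math

# A spectrum bin whose terms cancel exactly would be 0.0, and
# 10*log10(0) is -Inf (Python's math.log10 raises on 0, so we
# special-case it the way a dB conversion typically does).
def power_to_db(p):
    return 10 * math.log10(p) if p > 0.0 else float("-inf")

print(power_to_db(0.0))    # -inf: perfect cancellation
print(power_to_db(1e-40))  # -400.0 dB: a few uncancelled low-order bits
```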

 

So the right question is "Why does generating a 4000-point waveform all at once differ from generating 4 identical sub-waveforms and concatenating them?"  I honestly have no idea, but it could be that after 4000 iterations, the tiny rounding errors involved with adding dt to itself 4000 times "drift" sufficiently to give the difference you are observing.  I'd bet that if you generated one 10 Hz waveform with 100 points and replicated it 40 times (instead of 10-at-a-time replicated 4 times, or 40-at-a-time replicated once) you'd get yet another result.
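The drift from repeated addition is easy to demonstrate in Python (dt = 0.1 is just an illustrative step size, not taken from the VI):

```python
# Adding dt to itself 4000 times accumulates a rounding error on every
# addition; multiplying once rounds only once.
dt = 0.1          # not exactly representable in binary floating point
t = 0.0
for _ in range(4000):
    t += dt

print(t == 4000 * dt)      # False: the accumulated sum has drifted
print(abs(t - 4000 * dt))  # tiny but nonzero drift
```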

 

Make yourself a 72-point bold card that says "Floating Point Values Are Necessarily Approximations", and remember it when doing numerical computations.  In particular, when comparing floating-point values, never use "Is Equal"; instead use "Absolute Value of Difference Is Less Than Or Equal To" with your "epsilon", a "precision" number that makes sense with respect to your problem but is (a) positive and (b) non-zero.
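The same epsilon comparison sketched in Python (the 1e-9 tolerance is just an illustrative choice; pick one that fits your problem):

```python
def float_equal(a, b, epsilon=1e-9):
    """Compare floats with a tolerance instead of exact equality."""
    return abs(a - b) <= epsilon

print(0.1 + 0.2 == 0.3)             # False: exact "Is Equal" fails
print(float_equal(0.1 + 0.2, 0.3))  # True: within epsilon
```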

 

Bob Schor

Re: .Zip file limit


Any chance you've gotten around to this? I'm using the MGI library at the moment but I'd like to see a native solution so that shared code doesn't require additional externals.

 

Thanks!

Cranky

Re: Calling Silicon Labs DLL From LV2012 vs LV2017


One thing that strikes me immediately is the fact that you say you use the 64-bit version of LabVIEW.

And a quick check in the header files for the Silicon Labs driver DLL shows that the device handle is an opaque pointer value, which in 64-bit environments is a 64-bit value. This means the variable can (and often does) hold a 64-bit value that will get truncated if you force it into a 32-bit variable.

 

So basically, everywhere your Call Library Node passes the device handle to a function, you have to reconfigure that parameter to be a pointer-sized integer instead of a 32-bit integer. And in the LabVIEW VIs you need to make the corresponding control a 64-bit integer. LabVIEW then has enough space to store the entire 64 bits in the wire (and the Call Library Node will convert the 64-bit value properly to 32-bit if you should ever decide to use the 32-bit version of LabVIEW).
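A sketch of the truncation (in Python rather than LabVIEW, with a made-up handle value) shows why losing the upper half of the handle matters:

```python
# A 64-bit opaque pointer/handle forced through a 32-bit variable loses
# its upper half, so the value handed back to the DLL no longer matches.
handle_64 = 0x00007FFE_1234ABCD    # hypothetical 64-bit handle value
as_32bit = handle_64 & 0xFFFFFFFF  # what a 32-bit parameter would keep

print(hex(as_32bit))               # upper 32 bits are gone
print(as_32bit == handle_64)       # False: the DLL gets a bogus handle
```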

 

Why it works in previous versions of 64-bit LabVIEW is a bit of a riddle. It could just have been luck, or maybe some flag in the LabVIEW executable indicated to Windows that it prefers 32-bit addresses, or it was just full moon or something. But the 32-bit device handle was in fact syntactically wrong in a 64-bit environment, and something just happened to keep the Silicon Labs driver from allocating the internal data structures the handle points at above the 32-bit limit.

 

The driver you use was either developed to support LabVIEW versions prior to 2009 (which was the first version available as 64-bit and the first to support a distinct pointer-sized integer in the Call Library Node), or the person developing it was not aware of the differences between 32-bit and 64-bit operation.

Re: About How to develop Xnode


I'm not ignoring this, I just want to spin up a VM with multiple versions of LabVIEW to do a test but Monday after traveling is busy...

Re: How to choose bits from a hexadecimal number?


 wrote:

Please compare the efficiency of the shifting method to using the number to binary array function.


You posted to a thread that has not seen activity in five years, demanding something that you can easily test yourself.  Did you try?

 

Hint: Doing bitwise operations on an integer will be about an order of magnitude faster than converting to a different data structure (a boolean array) that takes up 8x more memory. Seems obvious. This is one of the situations where "going green" is not advisable. 🙂
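As a rough illustration in (text-based) Python rather than LabVIEW, so the absolute numbers will differ, but the shape of the comparison carries over (the test value and bit index are arbitrary):

```python
import timeit

x = 0xDEADBEEF

def via_shift():
    # Test bit 13 directly on the integer.
    return (x >> 13) & 1

def via_array():
    # Convert to an array of bits first, then index it.
    bits = [(x >> i) & 1 for i in range(32)]
    return bits[13]

# Same answer either way, very different amounts of work per call.
print(timeit.timeit(via_shift, number=100_000))
print(timeit.timeit(via_array, number=100_000))  # typically much slower
```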

Re: How to choose bits from a hexadecimal number?


 wrote:

Please compare the efficiency of the shifting method to using the number to binary array function. https://zone.ni.com/reference/en-XX/help/371361H-01/glang/number_to_boolean_array/


I prefer this method because I can't picture bitwise operations in my head.  It's not second-nature to me.  However, representing the number as an array of bits and then splitting the array up as needed is exactly analogous to the description of the process, itself, and makes it a lot easier for me to understand.

 

Note that I said, "Easier for me to understand."  This is because I suspect that the vast majority of developers are more comfortable with bitwise operations than me.


Re: Calling Silicon Labs DLL From LV2012 vs LV2017


Please note that you should only use U64 for the LabVIEW controls on the front panels that pass these handles through. The Call Library Node configuration should EXPLICITLY be set to pointer-sized unsigned integer, or you will get VERY BAD behaviour if you or someone else ever decides to use this library in 32-bit LabVIEW with a 32-bit compiled DLL.

Re: How to choose bits from a hexadecimal number?


 wrote:

I prefer this method because I can't picture bitwise operations in my head.


Something that helps is setting the display format of your controls, indicators, and constants to be in hex or binary.  I typically use "%02x" for bytes, but could just as easily use "%08b" for the display style.  This way you can see all of the bits and get the performance boost of using bitwise arithmetic.  Of course, make sure your radix is visible when you do this.
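For comparison, the same idea in Python's format mini-language (x is an arbitrary byte value):

```python
x = 0xA5

# Hex and binary views of the same byte, analogous to LabVIEW's
# "%02x" and "%08b" display formats.
print(f"{x:02x}")  # a5
print(f"{x:08b}")  # 10100101
```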

 

Here is a tool to help set the display style for all of your controls, indicators, and constants: Format Numeric QuickDrop

Re: .Zip file limit


Oh, that's right. Apologies for the confusion.

 

To that end, has native 64-bit support been compiled into a more recent version of LabVIEW?

Re: About How to develop Xnode


 wrote:


Which features were added? Not second-guessing, but I am curious.

Again in my defense, I'm stuck in 2013, and haven't noticed much change.


In LabVIEW 2011 there were 72 abilities; 2012 added 1, and 2013 added 2.  I know some things were added to support better localization, but I can't remember which version they were added in.  The XNode Manager, which is from 8.2, has only 49 abilities, but I see some older ones are in its list, so this isn't quite accurate.  I don't have access to the versions of LabVIEW in between at the moment to see what was added and when.  It is possible my memory of a bunch being added comes from going from the XNode Manager to then reading the ones in the LabVIEW resource files.  Here is some info on the 3 that were added in 2012 and 2013.

 

2012

OnResize2 - This ability VI is called by LabVIEW after the user resizes the XNode. Bounds is a rectangle whose coordinates are given relative to the old top-left corner of the XNode. You must either return an UpdateImageAndBounds reply or not return anything; otherwise LabVIEW will generate an error. If you return UpdateImageAndBounds, you may also return other reply strings.

2013

ExportStrings - This ability VI is triggered by the "Tools > Advanced > Export Strings..." menu option.
"Keys" should be an array of strings which can be imported or exported for localization.
"Values" should be an array of values for each key representing the XNode's current state.

ImportStrings - This ability VI is triggered by the "Tools > Advanced > Import Strings..." menu option.
"Keys" should be an array of strings which can be imported or exported for localization.
"Values" should be an array of values for each key representing the XNode's current state.

In addition to this, a few descriptions have been updated while leaving the ability name unchanged.  I didn't go through these, but they could be minor typos or simple updates.  I created a few abilities in 2012 and 2018 to see if any of the data types have changed, and they all seem the same, but I didn't check them all.  I know that GetTerms didn't always have the English Name string, but that might have been because GetTerms4 wasn't in the XNode Manager.

 

I'll admit that development has slowed, and I'll say no new abilities have been added recently, but the main issue I took with your previous statement was the "no development...for a decade" comment in particular.  Internally, I know NI has been looking to avoid using XNodes and to come up with native implementations of things.  But for the feature to have been locked for 10 years, I'd expect nothing to have changed since LabVIEW 8.6, and we can see that abilities were added as recently as 5 years ago.  That being said, LabVIEW developers can't make native LabVIEW functions, and the closest thing we have is XNodes.  If NI allowed me to make my own primitives I probably would, and until they do I'll keep banging the XNode drum.  If I get some time and access to older versions I'll install them in my VM and see when the others were added, because I'm curious myself.

Too zoomed out...


Hi all,

 

Any LabVIEW/NI window I have open is too zoomed out.  I'm using 2016.  Any idea why this is happening?

 

 

Re: Problems with arduino labview Interface


Hello!!

 

I'm having the same issue. What did you do to solve the problem?

 

Thank you for your help!!

Re: Too zoomed out...


What is your screen resolution?  Do you have any zoom settings active on Windows?


Save array (with different elements in cells) to file


Hello,

 

In the attached VI, I am trying to save the array given in the Input Array to a file, that can then be read in at a later time and when read, generate the same array as the Input Array. 

 

The current VI saves the data in a skewed format, and I cannot view the file correctly in Excel.

 

Is there a way to keep the formatting of the Input Array intact for saving and then for later retrieval? 

 

Thanks,

hiNI

 

2018-09-17_14-48-30.png

Re: .Zip file limit


I pretty much doubt it. As explained, there are at least 4 different 64-bit issues in relation to the ZLIB library with the ZIP extension, with recompiling the library as 64-bit being the easiest. The problem is, among other things, that the ZLIB library itself does not do anything ZIP related. The part that handles ZIP is a contributed library to the ZLIB project that needs to be added. So you have to update the ZLIB code to the latest version (for security fixes), replace the ZIP code as well, change the necessary compile switches to enable full 64-bit support, and verify that everything is using the correct system APIs for full 64-bit operation. That is quite a lot of work, and even more work afterwards to verify and test everything. You can't have a new feature suddenly make something that used to work in a previous version fail.

 

So your best bet for 64-bit ZIP file support will be to rely on a 3rd-party solution such as the .Net solution or the OpenG ZIP library (the current code in the Subversion repository on SourceForge handles all the 64-bit issues fully, but I have not gotten around to building a release package due to a few other issues that I was planning to resolve at the same time but haven't fully tested yet).
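As an aside, this is roughly what Zip64 support looks like in a library that already ships it; Python's zipfile module is used here purely as an illustration, not as a LabVIEW solution:

```python
import io
import zipfile

# allowZip64=True enables the Zip64 extensions needed for archives or
# members larger than 4 GB; small archives are written normally.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", allowZip64=True) as zf:
    zf.writestr("readme.txt", "Zip64 records kick in only when needed")

buf.seek(0)
with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())  # ['readme.txt']
```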

Re: Save array (with different elements in cells) to file


It looks like you're just concatenating all of the strings in your array and writing them to a file without any delimiters, which is going to smush them all together and give you no way to separate them back apart. I'd suggest using this Robust CSV package to create .csv files that can be opened in Excel or re-read in LabVIEW: https://lavag.org/files/file/239-robust-csv/
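As a language-neutral illustration of why the delimiter matters (the Robust CSV package linked above does the LabVIEW side), here is the same idea in Python, with made-up row data:

```python
import csv
import io

rows = [["ch0", "1.25"], ["ch1", "3.50"]]

# Writing with a delimiter keeps the cells separable on read-back,
# unlike concatenating the strings into one undifferentiated blob.
buf = io.StringIO()
csv.writer(buf).writerows(rows)

buf.seek(0)
print(list(csv.reader(buf)))  # [['ch0', '1.25'], ['ch1', '3.50']]
```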

Re: Problems with arduino labview Interface


 wrote:

Hello!!

 

I'm having the same issue. What did you do to solve the problem?

 

Thank you for your help!!


Can you be a lot more specific about your issue, preferably in a new thread?

Re: Connecting Labview and Siemens S7 1200 PLC using Snap7 library lv_snap7.dll


If you go to the Snap7 home page at snap7.sourceforge.net/, you should be able to download the library with LabVIEW VIs. I currently can't check it, as at the site where I am now SourceForge seems to be blocked as dangerous. But I know the download used to contain a directory with LabVIEW VIs.


