Channel: All LabVIEW posts

Re: Calculate End of Test date/time. Live.


Thank you for this; however, that's pretty much what I've already figured out.

The problem with this solution is that the date constant always has to change to "now".

Meaning, if it were sent this morning, it would have to update to 9/16; tomorrow, 9/17; and so on.

I know how to get that function.

Also, I need it to first calculate the remaining time of the test so it accurately gives the true end date. 

Again, I figured out how to get that, in seconds anyway. I am able to spit out the "remaining time of the test," BUT I can't get it to convert all that into a future date.

 

 

Also, the output has to be a string, as that is what my emailer is built from.

All the inputs are strings, and that output won't give me a string. 

 

So-

1-Calculate total test time in seconds-COMPLETE (200 hours x 3600= 720,000 seconds)

2-GET current test time completed-COMPLETE (a simple wire from the total accumulated test hours; say 100 hours, which gives me 360,000 seconds)

3-Calculate the difference of the 2-COMPLETE (360,000 seconds remaining)

4-ADD step 3 to the CURRENT (i.e., COMPUTER) date-UNKNOWN!
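In LabVIEW, step 4 is typically done with Get Date/Time In Seconds, adding the remaining seconds, and then Format Date/Time String to get the string output. The same arithmetic can be sketched in Python for illustration (all variable names here are made up):

```python
from datetime import datetime, timedelta

total_test_seconds = 200 * 3600          # step 1: 720,000 seconds
completed_seconds = 100 * 3600           # step 2: 360,000 seconds
remaining_seconds = total_test_seconds - completed_seconds  # step 3: 360,000 s

# step 4: add the remaining time to the CURRENT (computer) date/time
end_of_test = datetime.now() + timedelta(seconds=remaining_seconds)

# format as a string for the emailer, since its inputs are strings
end_string = end_of_test.strftime("%m/%d/%Y %H:%M:%S")
print(end_string)
```

Because the current time is read fresh on every run, the computed end date pushes out automatically whenever the test is paused or faulted.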

 

The idea/point of this is, let's say the test faults or is paused for some reason. Actual test time is lost, and that future date will need to be pushed out.

 

Thank you

Ryan


Re: Continuous Power spectrum density at 0.25 Hz resolution


Actually, I am also running LV18 on VirtualBox.

For your testing, I have attached the VI saved in LV18 herewith. It is the same as the code snippets I posted earlier.

Re: Community Nugget: Sub-millisecond timing in LabVIEW


 wrote:

Windows 2000 and later only! 

 

What? You say it's not possible?

I too was of that opinion. It's commonly held that we are limited in LabVIEW to the msec timer and the resolution it offers for benchmarking our VIs and creating delays. Recently, on a related thread, I was even involved in exploding a suggestion that LabVIEW could be taught this nifty trick. But I went back to school, because I hate saying "You can't do that with LabVIEW."

 

The attachment contains a project of VIs that use some kernel32.dll precision timer functions to access the precision OS timer that exists on modern processors. There are a few caveats:

 

These VIs use the basic query precision timer functions, so DO NOT use them in cases where you don't have a spare core to burn. There appears to be a method to create a waitable timer as well, but that is outside the scope of this post.

 

Also, this is not a replacement for a real-time OS! There are sources of error, and the OS can (and does) interrupt the process.

 

There are inherent flaws in the basic input/output system that contribute to jitter in calls through kernel32.dll.

 

Some coercion errors may be introduced due to the necessity of mixing U64s and DBLs in the math. (Hey, if anyone can solve that, I'd take a lesson; I hate saying "You can't do that with LabVIEW.")

 

THE VIS

The Simple Approach:

Precision Timer Wait.vi:

Is a basic stand-alone delay with a 100 nsec resolution "uSec to wait" input. Negative values are coerced; resolution is coerced up to the next 100 nsec. Actual resolution of the delay depends on HARDWARE, i.e., how fast your precision timer is updated. This VI queries the PT frequency and the current count, calculates what the count will be in x usec, and enters a greedy loop until the PT counter is equal to or greater than the target.

This VI DOES test whether the hardware supports a precision timer, and it has standard error functionality.
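For readers without the attachment handy, the greedy-loop technique the VI uses can be sketched in Python. This is only an illustrative analogue, not the VI itself; `time.perf_counter_ns()` is backed by QueryPerformanceCounter on Windows:

```python
import time

def precision_wait_us(usec_to_wait):
    """Busy-wait for roughly usec_to_wait microseconds.

    Mirrors the VI's approach: read the current count, compute the
    target count, then spin until the counter reaches the target.
    Negative inputs are coerced to zero, as in the VI.
    """
    usec_to_wait = max(0, usec_to_wait)
    target = time.perf_counter_ns() + usec_to_wait * 1000
    while time.perf_counter_ns() < target:
        pass  # greedy loop: burns a core, just like the LabVIEW version

start = time.perf_counter_ns()
precision_wait_us(500)  # wait ~500 microseconds
elapsed_us = (time.perf_counter_ns() - start) / 1000
print(f"waited ~{elapsed_us:.1f} us")
```

As with the VI, the actual delay is never shorter than requested but can run long when the OS preempts the loop.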

 

The more optimized approach:

The case structures in Precision Timer.vi require a bit of undesired overhead, so for advanced users:

PT Init Freq.vi queries the timer frequency and preloads the global variable Counter Frequency.vi (globals are not evil, and this is one case where their blindingly fast speed is useful).

PT Lightning Wait.vi reads the global instead of the actual timer parameter and functions similarly to Precision Timer Wait.vi, except it does not even waste the FLOPs to calculate how long we were in the loop, and it has no error case.

 

Benchmark.vi demonstrates the optimized approach and explores some of the precision timer's sources of error.

 

All VIs are fairly well documented, along with their execution settings (obviously the default settings were undesired).

 

For further reading on precision timers, I recommend starting HERE and googling your hearts out.

 

If anyone wants to play with a waitable timer object... (I'm curious, but "time constrained.")

 

Additionally, for those of you with existing benchmark VIs: I would be fairly interested in a benchmark comparing the two timer methods.


Now that I have finally installed LabVIEW 2018, it is time to update this thread.

 

LabVIEW 2018 includes the new High Resolution Wait.vi, a waitable object based on the high-resolution timer, exactly as mentioned above.

 

Thank you NI!

Re: Calculate End of Test date/time. Live.


Look on the timing palette.

Right there you can find Format Date/Time String and Get Date/Time In Seconds.

 

Look on the string palette and you will find a date/time-to-string function with a format string input.

 

Look in the property editor for any numeric and check the advanced Format page. The time format specifiers are shown there. They are also explained in the help file.
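As a quick illustration of the format specifiers (LabVIEW's time format codes closely mirror C's `strftime`, which Python also exposes):

```python
from datetime import datetime

# %m/%d/%Y and %H:%M:%S behave the same way in strftime
# as the analogous codes in LabVIEW's Format Date/Time String.
t = datetime(2018, 9, 16, 13, 5, 30)
print(t.strftime("%m/%d/%Y"))  # 09/16/2018
print(t.strftime("%H:%M:%S"))  # 13:05:30
```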

Re: Installing NXG on another drive


 wrote:

Thinking outside the box..

 

Rename C to T for Temporary,  rename D to C

Install NXG

Rename C to D

Rename T to C

 

Then post to the idea exchange that this should have been easier 


If I am booted into Windows 7 on my laptop,

how do I rename C to T and D to C ?

 

I can rename Local Drive (C:) on my hard drive to Temporary (C:), but it retains the C: letter.

I did it just now to try it out.

 

Edit

I can go into Computer Management and change the drive letter and paths for C: (Local Drive) to something else, but I cannot go any further, as I don't have another hard drive on this laptop.

 

...

 

 

My first inclination, if I were the OP, would be to use disk imaging software such as Acronis True Image to create an image file of the current small hard drive on an external hard drive. Next, install the new, larger 1 TB SSD and image it with the image file that was created.

My PNY 120 GB SSD came included with a free product key for Acronis True Image.

I didn't use it because I use Clonezilla, but it was nice to have.

 

 


 

Re: measure time lapsed from reaching threshold to execution


After much thinking, below is what I came up with.

 

I activated "highlight execution" and watched when the timers were getting triggered. 

 

It appeared that the first timer was getting triggered when the outer case structure was touched.

 

It appeared that the second timer, in the inner case structure, was triggered last, after the turn-off command was executed. So it does seem that I am accurately measuring the time lapse from the moment the voltage is read to when the power-source turn-off command is executed. I am not sure what ensures that the "high resolution relative clock" knows when to get executed.

 

thebesticould.png

Re: measure time lapsed from reaching threshold to execution


The basic idea is "dataflow": any node in an executing diagram or sub-diagram that has all of its inputs can execute. The error chain is often used to enforce order of execution. For nodes with no inputs, like the hi-res timer, a sequence frame can be used to enforce order of execution.

 

Highlight execution forces LabVIEW to run in a single thread. With it off, nodes can run at the same time. So, you are tricking yourself again.

Re: measure time lapsed from reaching threshold to execution


The snippet attached is what I came up with to enforce the order of execution I have in mind. 

 

Thank you for your patience.

thisisit.png


Re: measure time lapsed from reaching threshold to execution


That's looking correct, but very sloppy. Try Ctrl+U to clean up your diagram.

Re: Continuous Power spectrum density at 0.25 Hz resolution


Boy, this has been a "learning experience", I hope for you, but also for me, as I've made a few careless mistakes, and failed to spot the "obvious answers" until you "rubbed my nose in it".

 

So, admission, my code produces exactly the same results as yours, I just was "fooled" by the plot and therefore didn't come up with the "obvious answer" (as I had by pointing out that a value of 0, in dB, would be "-inf" if logarithms of 0 were allowed).

 

When you run your (or my) code, you see much of the plot sitting at -400 dB (which corresponds to a value around 10^(-40), a very small number).  If you do not use the dB scale, it appears that the spectrum is 0 "almost everywhere" except at the frequencies in the signal (which, after all, is the "right" answer).  Your "very reasonable question" is why, when you take dB, it isn't -inf "almost everywhere".

 

So here is the final lesson, and the final part of the "solution" to your question (please mark the Solutions so that other Community members interested in PSD computations "know" there are "answers" here). You are doing computations using floating-point numbers, which are of finite precision. The computations involve computing sines and cosines, also of finite precision, and arithmetic (addition, subtraction, multiplication, and division, or, as some have said, ambition, distraction, uglification, and derision). Although the correct answers should be 0, there will be round-off errors that might result in numbers very close to, but not exactly, 0. That is what is happening here -- you can verify this for yourself by sending the values you are plotting to an ordinary indicator and looking at the array -- when I did this, every other number was 0, and in between were numbers with between 37 and 40 zeros after the decimal point (i.e. on the order of 10^(-40)).

 

So here is a final piece of advice.  Add a "Filter" to the output of the PSD to keep the range from going "off the deep end" to -inf.  The attached Snippet shows one easy method -- take the points and "coerce" them to a reasonable data range.  Since you expect the value to correspond to an amplitude of 0.05, depending on exactly what is being plotted, you expect a number on the order of 0.05 in dB, or around -20 dB.  The nice thing is that now AutoScale Y works for you, and will let you see the peaks clearly.
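The round-off and coerce-to-a-floor idea can be demonstrated outside LabVIEW as well; here is a small Python sketch (the floor value of 1e-12 is an arbitrary illustrative choice, not a recommendation):

```python
import math

# A value that should be exactly 0 but, due to floating-point
# round-off, comes out as a tiny nonzero number:
residual = (0.1 + 0.2) - 0.3
print(residual)                 # tiny, but not 0

# Taking dB of such values sends the plot "off the deep end":
raw_db = 10 * math.log10(residual)
print(raw_db)                   # a very large negative number

# The fix: coerce to a reasonable floor before converting to dB,
# so autoscaling keeps the real peaks visible.
FLOOR = 1e-12                   # illustrative choice
clamped_db = 10 * math.log10(max(residual, FLOOR))
print(clamped_db)               # about -120 dB, a sane lower bound
```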

Quarter-Hz FFT with Filter.png

Bob Schor

Re: measure time lapsed from reaching threshold to execution


Then I get the following image. 

 

I really appreciate your help.

 

Of course once I am happy with the program and time characterization, I am going to remove the outer case structure and directly connect to the inner case structure (like the 2nd image below).

hellothere.png

What I will actually run in experiments:

anotherone.png

 

Re: blinking LED N number of blinks per minute


Altenbach is referring to our old friend from C

 

X = Bool ? TRUE : FALSE;

 

Unless my C is rusty.

Re: measure time lapsed from reaching threshold to execution


You should also wire the error chain and place an OR before the loop's conditional terminal, so the logic is

 

continue unless Stop or Error. Then wire the error out of the loop to a Simple Error Handler, to let you know when and what error occurs.
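The "continue unless Stop or Error" logic maps to a simple loop condition in any language. A Python sketch, with each iteration's inputs faked as plain tuples (all names here are hypothetical placeholders, not LabVIEW API):

```python
def run_loop(events):
    """events: list of (stop_pressed, error) pairs, one per iteration.

    Continue unless stop or error; return (iterations_run, error) so the
    error can be reported after the loop, like a Simple Error Handler.
    """
    error = None
    count = 0
    for stop_pressed, err in events:
        count += 1
        error = err
        # the OR feeding the conditional terminal: stop OR error ends the loop
        if stop_pressed or error is not None:
            break
    return count, error

# runs 3 clean iterations, then stops on the simulated error
count, error = run_loop([(False, None), (False, None), (False, "overrange")])
print(count, error)
```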

Re: measure time lapsed from reaching threshold to execution


I think this is what you are recommending. Sorry, I just started reading up on handling errors. I guess it's there to stop the while loop when the system runs into a glitch. I'm not sure what to do about "select error."

 

thisisit2.png

Re: Installing NXG on another drive


The solution that @nyc and I proposed (clone the C: drive to a larger SSD) has the virtue that, while there will be an initial cost (for the drive), the process is "reversible": if there's a "gotcha" (like a serial number check to prevent cloning), you should be able to "put 'em back the way they was". Of course, you do need to be able to run cloning software (but you should be able to create a version of Clonezilla that runs from a bootable USB stick).

 

If this is a "managed" (and "locked down") machine, the "right way" to do this, of course, is to talk to the IT Gurus, tell them the problem, offer to purchase the new SSD, and beg for their assistance.

 

Bob Schor


Re: measure time lapsed from reaching threshold to execution


Also, don't split the DAQmx purple reference wire before the first read. Connect the 2nd read to the 1st read's output at the top right of its connector pane. It won't change the execution of your code, but it follows standard practices for how LabVIEW block diagrams should look.

Re: measure time lapsed from reaching threshold to execution


 



 wrote:

I am not sure what ensures that "the high resolution relative clock" knows when to get executed. 

 

thebesticould.png



 

EDIT: Commenting on this post; I did not see the next page, where most of it has since been resolved. Still, let me summarize in detail what's happening in that particular old code, to maybe give you a better intuitive understanding of the magic of dataflow. While it might be confusing at first, you'll learn that it is one of the most powerful features of LabVIEW!

 

The first ticker (= high resolution relative seconds) executes once the lower outer case structure starts executing, i.e. once all inputs to the structure have received data. It will do that in parallel with the start of the AI read next to it. (It does not matter where in the case structure the ticker is located; you would get the same result if you placed it after the innermost case structure, e.g. right below the subtraction, as long as no wiring changes.) Now the innermost case structure needs to wait until that AI read has completed (slow) and the comparison is made (infinitely fast), because it cannot start until the boolean output wired to it is available. Once the innermost case structure starts executing, the second ticker and the VISA call will execute in parallel. (This is probably not what you want, because you are not measuring the time of the VISA call at all.) Once the innermost case structure has completed, it will output the ticker value taken at the start of the inner case. The subtraction will basically give you the time of the AI read, ignoring the time of the VISA call.
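The underlying benchmarking point generalizes beyond LabVIEW: take the timestamps immediately around the one operation you care about, and keep everything else (like the VISA write) outside the measured span. A Python sketch with stand-in functions (both function names are made up for illustration):

```python
import time

def slow_read():
    """Stand-in for the slow AI read."""
    time.sleep(0.05)
    return 1.23

def fast_write(value):
    """Stand-in for the VISA write (deliberately NOT in the measured span)."""
    return value

# Sequence-enforced ordering: t0, then ONLY the operation of interest,
# then t1. Anything outside t0..t1 is excluded from the measurement.
t0 = time.perf_counter()
sample = slow_read()
t1 = time.perf_counter()
fast_write(sample)

print(f"AI read took {t1 - t0:.4f} s")
```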

 

Benchmarking is very hard to get right, and easy to get wrong. Have a look at our 2016 NI Week presentation.

Re: measure time lapsed from reaching threshold to execution


Thank you for your explanation.

 

For my experiment, I think I am pretty much done but for learning of how to use LabVIEW, I want to better understand error handling. 

 

I looked everywhere and tried to create the error in and error out I circled below (found the image online), but couldn't. So I just opened a few random examples and copy-pasted it. Could you tell me where I can find those items? (Sorry, my lab computer just crashed, so I'm using my personal laptop to post this question.)

post-11742-126892283135.png

Re: measure time lapsed from reaching threshold to execution


 wrote:

 

thisisit2.png


Note that the innermost case structure belongs inside the first innermost sequence frame and should only contain the VISA call. Now you only need one instance of the ticker (i.e. nothing in the FALSE case). Avoid duplicate code! Avoid chopping up code into tiny fragments!

 

You only need one simple sequence structure, e.g. as follows:

 

(Sorry, I don't have DAQmx installed, so the AI read icon does not show)

 

You could even merge the first two sequence frames. It is unlikely to make a real difference here, because the ticker is infinitely fast and the AI read is very slow in comparison; within error, you'll get the same result if you start them in parallel. Simpler code is easier to read, debug, and maintain!

 

 TimeAIVISA.png

Re: measure time lapsed from reaching threshold to execution


Thank you!

 

 

By the way, is there a way I can create "error in" and "error out" in the block diagram directly, without having to create them on the front panel first? (On the front panel, I can go to Modern -> Array, Matrix & Cluster and find error in and error out. I, for the life of me, wasn't able to find them in the block diagram.)

 

(Whatever I created, named '0: No Error', outside the while loop, I don't think that is right. I think I have to have 'Error In' instead. I can of course create it now, by creating it on the front panel first. Just out of curiosity, I want to know how it can be created directly in the block diagram.)


