I think you can use a simple Case Structure: place the TDMS File Read/Write you want to happen inside the True case and nothing in the False case.
You can then add a button to 'trigger' the True case, or use a condition.
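For what it's worth, the text-language equivalent of that pattern is just an if around the write. A minimal Python sketch of the idea (assuming the npTDMS package; the file path and group/channel names are made up for illustration):

# Only write when the "trigger" condition is true, mirroring the
# True/False cases of the Case Structure.
import numpy as np
from nptdms import TdmsWriter, ChannelObject  # assumes the npTDMS package

def maybe_log(samples, should_log):
    if should_log:                                   # "True" case: do the TDMS write
        channel = ChannelObject("Measurements", "Voltage", samples)
        with TdmsWriter("log.tdms") as writer:
            writer.write_segment([channel])
    # "False" case: do nothing

maybe_log(np.random.rand(100), should_log=True)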
I cannot examine, test, modify, probe, etc. a picture. Attach your VI (or, if part of a LabVIEW Project, consider compressing the folder containing the Project and attaching the resulting .zip file). You'll help all of us to help you. Remember this for your next post.
Bob Schor
Hi. I am new to LabVIEW, and when I try to run a VI I get errors saying that the MathScript RT Module license is invalid or missing. I went to NI Package Manager to download the module listed under LabVIEW 2020, but when I use my school's serial numbers to activate it, it doesn't show up in the list of installed packages. What do I do if I want my LabVIEW code to use MathScript Nodes?
wrote: Just to re-clarify my question: I am after the best way of making code resilient to MAX database corruption issues. A lot of our code goes to customers who aren't knowledgeable about NI software/hardware and, as far as they are concerned, just want a magic black box. Everyone's comments have confirmed that I am thinking along the right lines, so I am going to experiment today.
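One way to make a deployed "black box" a little more defensive is to check at startup that the devices the code expects are actually visible to DAQmx, and fail with a readable message if they are not. A minimal sketch of that idea using the nidaqmx Python API (the device names are placeholders; the equivalent check can be done in LabVIEW with the DAQmx System property node):

# Verify the expected DAQmx devices exist before starting acquisition.
# Device names ("cDAQ1Mod1", "Dev1") are placeholders for illustration.
import nidaqmx.system

EXPECTED_DEVICES = {"cDAQ1Mod1", "Dev1"}

def missing_daq_devices():
    present = {dev.name for dev in nidaqmx.system.System.local().devices}
    return sorted(EXPECTED_DEVICES - present)        # names that are not visible

missing = missing_daq_devices()
if missing:
    # Report the problem up front instead of failing later with a cryptic DAQmx error
    raise RuntimeError("DAQmx devices not found (MAX configuration problem?): %s" % missing)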
wrote: We have ~20 computers using DAQ devices, and I get one with a MAX database corruption every 3-6 months. It's definitely not due to software installation/update, either. There is a reason NI put a menu item in MAX to fix it...
Mancho, I am guessing you are referring to the 'Reset Configuration Database' button?
wrote: @BS, look into my older posts about NI Device Manager. It is possible to automatically detect a new device and then associate it with everything DAQmx-ish with a simple executable and some adroit HKEY NInja like moves in the same installer. <Don't do this at home! We are professionals >
What posts are you referring to here, JpB? I can't find them on your profile.
Devmon may be the best search term. The use of it is fragile! I do not recommend it! <I think that it has been useful twice in twenty years >
How do you manage the Client that thinks "Well... this is just LabVIEW, not SOFTWARE"? It should not be as "amusing" a topic as I believe it could be.
Hello Everyone.
I am using an NI XNET module attached to a cRIO 9045.
I am reading the data from NI XNET using waveforms.
There are a fixed number of waveforms transmitted from the CAN module, and I am using Network Streams to transfer this data to the host computer. But when I do this, I get memory leaks: the memory usage on the cRIO keeps increasing to the point where either the NI XNET queue overflows or the cRIO runs out of memory and the machine crashes.
I have tried transmitting the waveforms as well as just the Y component of the waveform as doubles, and both give the same issue.
It turns out this only happens when I am transferring CAN data.
Other AI modules, transferred with the same method, work perfectly fine.
I have attached the relevant VIs.
Would be grateful for suggestions/solutions 🙂
If you open the target VI's front panel after setting the control, you will see that CtrlVal.Set actually works.
However, it has no effect on the original VI on disk. What do you actually want to do?
What sensor are you using? Do you need to turn on the IEPE for the sensor?
I got some clarification from NI support. I'll copy what they wrote here. The gist is that subpanels do work in embedded UI, but can cause problems with RT performance.
"1. Everytime there is a UI update, there is an interrupt on the UI thread and the UI update is processed and then the original process continues. This, as you can see, is not good if you have a big update to the UI and important loop that gets interrupted. This is why we provided content around best practices and UI benchmarks so you can start within "safe spaces" of performance.
2. Also, a SubPanel is just another front panel. With that in mind, that means two interrupts for every UI since LabVIEW treats them as separate front panels. This will double the impact of the previous point (1).
As I mentioned previously, "unsupported" doesn't necessarily mean it won't work. However, our recommendation is to avoid it, mainly due to the points described above."
Henrik,
I found the answer in the description of the single tone extraction. When in doubt, read the instructions... I was already way ahead of myself. No wonder all of the phase readings looked good before I started applying the tach to that page. All of the signals are triggered on the very first input page by the tach channel, and the VI calculated the phase for the frequency in question from time zero! So I found my error was 10 degrees, the same 10 degrees on all channels; since the phase is all relative between channels, that amount of phase error is not a problem. Here's a question for you: the proximity probes that are used to view shaft orbits need phase with zero error. How do you think those 'boxes' calculate phase?
Thanks again for your input.
Ron
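To illustrate Ron's point about the common offset washing out of channel-to-channel phase, here is a small NumPy sketch (the tone frequency, sample rate, and 30-degree lag are arbitrary): starting the record a little late shifts both absolute phases by the same amount, so the difference between channels is unchanged.

# Two channels of the same tone, one lagging the other by 30 degrees.
# A common "late trigger" offset shifts both absolute FFT phases equally,
# so the relative phase between channels stays 30 degrees.
import numpy as np

fs, n, f0 = 10000.0, 1000, 100.0        # sample rate, record length, tone frequency
lag = np.deg2rad(30.0)                  # true channel-to-channel phase
k = int(round(f0 * n / fs))             # FFT bin of the tone (exactly 10 here)

def tone_phases(start_offset_s):
    t = start_offset_s + np.arange(n) / fs
    a = np.sin(2 * np.pi * f0 * t)              # reference channel
    b = np.sin(2 * np.pi * f0 * t - lag)        # lagging channel
    return np.angle(np.fft.rfft(a)[k]), np.angle(np.fft.rfft(b)[k])

for offset in (0.0, 10.0 / 360.0 / f0):         # no offset, then ~10 degrees of offset
    pa, pb = tone_phases(offset)
    print(np.rad2deg(pa), np.rad2deg(pb), np.rad2deg(pa - pb))   # difference stays 30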
You might want to have a look at SystemLink.
Bringing this thread back from the dead yet again as a solution is in place.
.NET uses a config file to specify DLL versions if multiple versions are available in the GAC (where .NET assemblies are registered). You can add a config file to your LabVIEW project as well to accomplish the same thing.
1. Close LabVIEW.
2. Create a config file matching the name and extension of your project, and place it in your project folder alongside your .lvproj file. For a LabVIEW project named "testProject.lvproj", the file would be "testProject.lvproj.config".
3. Open the config file and paste in the following XML:
<?xml version="1.0"?>
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <probing privatePath="relativeAssemblyPath\ver2"/>
      <dependentAssembly>
        <assemblyIdentity name="AssemblyName"
                          publicKeyToken="ccc0b22700e2ae72"
                          culture="neutral"/>
        <!-- assembly versions can be redirected in application, publisher policy, or machine configuration files -->
        <bindingRedirect oldVersion="1.5.7" newVersion="1.5.12"/>
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
4. Replace the "probing privatepath" value with the path to your assembly relative to your project file. If you know the assembly is in the GAC, remove this tag.
5. replace the "assemblyIdentity name" value with the exact name of your assembly
6. replace the "publicKeyToken" value with the public key of your assembly. The public key is a unique code used to identify your assembly. If the assembly is in the GAC, you can see the public key token as part of the assembly folder name.
7. Replace the "old version" and "New version" values with the version you're upgrading from, and the version you are upgrading to. You should be able to specify a version range in the "old version" tag, but I haven't confirmed this.
8. Save and close the file.
9. Relaunch your project.
10. You may get a browser window asking you to locate your assembly by path.* Select the appropriate version of your assembly.
11. Your project files that reference .NET assemblies will need to be saved, as the locations of their referenced assemblies have been updated.
12. Close and reopen your project to confirm that you do not see a browser window again, thus confirming that your assembly reference has been properly updated.
13. Build and release your update.
*The fact that I received a browser prompt makes me think I didn't complete the steps as LabVIEW intended, but the steps above will work to update your references without having to access or delete all of your constructors.
**For each assembly you're updating, I believe you make a copy of the "dependentAssembly" tag; I have not confirmed this (a sketch follows below).
***It is possible to place a config file alongside your executable with the name "yourExecutable.exe.config" and force a redirect to the DLL. However, this is not recommended or necessary: your project will update its references, and the build will reflect this change. Furthermore, every time you launch your application with that config file, you'll get a browser window because the reference no longer matches your source. And finally, I would consider shipping such a ready-made assembly redirect in your release to be a bit of a security risk.
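Regarding ** above: I haven't confirmed it in LabVIEW either, but if the standard .NET binding-redirect syntax applies, covering two assemblies would presumably just mean repeating the "dependentAssembly" block, roughly like the sketch below (the assembly names, tokens, and versions are placeholders; "oldVersion" is shown with the range syntax .NET allows):

<?xml version="1.0"?>
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="FirstAssembly"
                          publicKeyToken="ccc0b22700e2ae72"
                          culture="neutral"/>
        <!-- oldVersion accepts a range as well as a single version -->
        <bindingRedirect oldVersion="1.0.0.0-1.5.11" newVersion="1.5.12"/>
      </dependentAssembly>
      <dependentAssembly>
        <assemblyIdentity name="SecondAssembly"
                          publicKeyToken="0123456789abcdef"
                          culture="neutral"/>
        <bindingRedirect oldVersion="2.0.0" newVersion="2.1.0"/>
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>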
Sorry, I was too busy at work... Here is the testing code, for LabVIEW 2016.
There are many ways to do this, of course (nearly stamp size ;)). You don't even need "Index Array"!
I did a project with ADAM modules like that recently, and I used their simple ASCII protocol instead of Modbus. A very simple ASCII command served to get all the readings. It was something like the module address plus one character.
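For anyone searching later, the ASCII read really is that short. A rough pyserial sketch (the port name, baud rate, module address "01", and the exact command string are assumptions; check the module's manual):

# Rough sketch of an ADAM-style ASCII read: "#" plus the two-character
# module address, terminated by CR, returns all analog readings in one line.
# Port, baud rate, and address "01" are assumptions.
import serial  # pyserial

with serial.Serial("COM3", baudrate=9600, timeout=1) as port:
    port.write(b"#01\r")                    # "read all inputs" for module address 01
    reply = port.read_until(b"\r")          # e.g. b">+1.2345+0.0012..."
    print(reply.decode("ascii", errors="replace"))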
Hi @altenbach,
1. I tried to break the equation into multiple parts for z. Since my z range is -50 to 700, these are the parts I got where a 6th-order polynomial fits:
-50 to 61
61 to 192
192 to 278
278 to 346
346 to 401
401 to 446
446 to 477
477 to 599
599 to 668
668 to 700
2. "test RI_2Dpolynomial-set2_break.vi" is used where small sets of data is taken to see in which regions it shows polynonimal fit. One at time. First I tried on dataset of -50 to 61 ,then ran VI again for next data set and so on. Max error allowed is +-0.0999.
3. Then I ran test on all available data-points for -50 to 61 range and saw the results. It shows a poly fit of 6th order. Need to check for rest of changes though.
4. Still is there any better method which you can suggest? It may not poly fit always, may be any other non-linear fit ?
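If it helps to prototype the per-segment fits outside LabVIEW, here is a small NumPy sketch (the data below is synthetic; substitute the real z/value arrays) that fits a 6th-order polynomial to one segment and checks the worst residual against the ±0.0999 tolerance:

# Fit a 6th-order polynomial to one z segment and check whether the worst
# residual stays inside the +/-0.0999 tolerance. The data is synthetic;
# replace z and y with the real measurements for that segment.
import numpy as np

TOLERANCE = 0.0999
z = np.linspace(-50, 61, 200)                       # first segment from the list above
y = 1e-9 * z**6 - 3e-4 * z**3 + 0.05 * z + 2.0      # placeholder for real data

coeffs = np.polyfit(z, y, deg=6)                    # 6th-order least-squares fit
worst = np.max(np.abs(y - np.polyval(coeffs, z)))
print("max |error| = %.4g -> %s" % (worst, "OK" if worst <= TOLERANCE else "fit rejected"))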
I know the hardware works because it is currently running the previous version of the program.
It also works with NI Max.
The program runs fine on Station 2 but not on 1.
wrote: ... and "2D Interpolate.vi" would probably do this for you...
Here's all we really need to use "Interpolate 2D":
I have experience with Network Streams in LabVIEW RT applications, but have no experience with CAN and I've not heard of (or used) XNET. I'm guessing that the fact that Streams works for you when you are not using CAN and fails when you are might mean that the CAN stuff (or XNET) is interfering with the TCP/IP protocol. However, it could also be that whatever the Stream is that supports CAN is configured incorrectly.
While I really appreciate your attaching all of the relevant code, there's too much with which I'm not familiar to start poking around, unguided. Can you identify the Streams that you create, describing their name, the parameters of the data being passed, the roles at the Host (PC?) and Target (cRIO?) end (i.e. Host Reader, Target Writer), and which one makes the connection (I tend to assign this to the PC, with the RT Target starting out running in a "Wait for Connection" loop, which means that the PC needs to "know" the Target's IP), and make a table of these? I trust that once you create a Stream, you leave its data "intact" until you destroy its endpoints.
Do you have a colleague who knows a little LabVIEW (or at least a little Programming) with whom you can "walk through" your code, showing the Creation, Use, and Destruction of each of your Streams? Armed with a Table such as I suggested in the previous paragraph, as well as your code, we could also try to "follow the logic", but it would be much easier with your guidance. And you might find (as has happened with me numerous times) that as you get to Step 6, you'll say "... and here we ... oops, there's supposed to be a Wire here! ... just a second while I fix this ...".
Bob Schor
Here's the code comparison for my last two examples. Haven't tested what's more efficient.
(Stock interpolate 2D (bilinear) sure has a lot of code under the hood, see for yourself! I probably would go with the triple stack ;))
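For reference, the bilinear step itself boils down to a few lines. A minimal sketch on a regular grid (not the stock VI's implementation, just the textbook formula it reduces to inside one grid cell):

# Textbook bilinear interpolation on a regular grid: blend the four
# surrounding grid values by the fractional position inside the cell.
import numpy as np

def bilinear(zgrid, x, y):
    """zgrid[i, j] is the value at integer grid point (i, j)."""
    i, j = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - i, y - j                            # fractional position in the cell
    z00, z01 = zgrid[i, j],     zgrid[i, j + 1]
    z10, z11 = zgrid[i + 1, j], zgrid[i + 1, j + 1]
    return (z00 * (1 - fx) * (1 - fy) + z10 * fx * (1 - fy)
            + z01 * (1 - fx) * fy + z11 * fx * fy)

grid = np.arange(16, dtype=float).reshape(4, 4)      # a simple 4x4 test surface
print(bilinear(grid, 1.5, 2.25))                     # 8.25, exact for this linear surface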
Hi jth,
then check each and every error wire related to digital output on station 1...