Wilsonator wrote:Sounds like your last question would be best solved with the remote API. See http://www.coppeliarobotics.com/helpFil ... erview.htm. The simulation will run in the V-REP window, but you will have control of and access to the data in your C++ application (I assume you meant a Visual Studio project programmed in C++).

Sounds promising. You mean I can display the simulation process, but it is simulated in V-REP, which provides the data?
how to get vrep run
Re: how to get vrep run
coppelia wrote:Before it crashes, does the debugging info tell you something? Make sure to enable the output of debugging info in the file system/usrset.txt.

I'm also curious whether I can save a scene in XML or some related format, since I noticed that XML is more or less involved in serialization.
Cheers
Thanks!
Re: how to get vrep run
Hello again,
if you want to have a simulation run and be visualized in your own application, you have several possibilities, to summarize:
- You customize the V-REP source code. That is quite difficult, but allows the most flexibility.
- You have V-REP run in headless mode, and connect to it via the remote API for instance. Then you can remotely load a scene, query the various objects' positions/orientations, and also query the meshes. Then you can stream all that information back to your own application and visualize it there, using your own visualization routines. Obviously you will not have to stream the meshes all the time, only when a new shape is added.
- You have V-REP run in headless mode, and connect to it via the remote API for instance. Then you can remotely load a scene, load a prepared vision sensor model, and start streaming the image the vision sensor acquires back to your own application, where you can display that image.
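The last two options can be sketched with the legacy Python remote API bindings (vrep.py plus the remoteApi library shipped in the V-REP distribution under programming/remoteApiBindings). This is only an illustrative sketch, not code from the thread: the scene path 'myScene.ttt' and sensor name 'Vision_sensor' are placeholders, and port 19997 assumes the default continuous remote API server entry in remoteApiConnections.txt. V-REP itself would be started headless, e.g. with the -h command-line flag.

```python
# Sketch: connect to a headless V-REP instance via the legacy remote API,
# load a scene, and stream a vision sensor's image back for local display.
# Scene path and sensor name below are placeholders, not from the thread.

def rgb_rows(flat, res_x, res_y):
    """Reshape the flat RGB value list returned by simxGetVisionSensorImage
    into res_y rows of res_x (r, g, b) pixel tuples. Note the bindings may
    return signed byte values (-128..127); convert as needed for display."""
    pixels = [tuple(flat[i:i + 3]) for i in range(0, 3 * res_x * res_y, 3)]
    return [pixels[r * res_x:(r + 1) * res_x] for r in range(res_y)]

def stream_camera(scene='myScene.ttt', sensor='Vision_sensor'):
    import vrep  # legacy bindings copied from programming/remoteApiBindings
    cid = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
    if cid == -1:
        raise RuntimeError('could not connect to V-REP on port 19997')
    try:
        vrep.simxLoadScene(cid, scene, 0, vrep.simx_opmode_blocking)
        _, cam = vrep.simxGetObjectHandle(cid, sensor,
                                          vrep.simx_opmode_blocking)
        # The first call starts the stream; later calls read the buffer.
        vrep.simxGetVisionSensorImage(cid, cam, 0, vrep.simx_opmode_streaming)
        while True:
            rc, res, img = vrep.simxGetVisionSensorImage(
                cid, cam, 0, vrep.simx_opmode_buffer)
            if rc == vrep.simx_return_ok:
                rows = rgb_rows(img, res[0], res[1])
                # hand `rows` to your own display/rendering code here
    finally:
        vrep.simxFinish(cid)
```

The same connection pattern works for the pose-streaming variant: replace the vision sensor calls with simxGetObjectPosition/simxGetObjectOrientation on the handles you care about, and render the meshes in your own application.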
Re: how to get vrep run
coppelia wrote:Hello again,
if you want to have a simulation run and be visualized in your own application, you have several possibilities, to summarize:
- You customize the V-REP source code. That is quite difficult, but allows the most flexibility.
- You have V-REP run in headless mode, and connect to it via the remote API for instance. Then you can remotely load a scene, query the various objects' positions/orientations, and also query the meshes. Then you can stream all that information back to your own application and visualize it there, using your own visualization routines. Obviously you will not have to stream the meshes all the time, only when a new shape is added.
- You have V-REP run in headless mode, and connect to it via the remote API for instance. Then you can remotely load a scene, load a prepared vision sensor model, and start streaming the image the vision sensor acquires back to your own application, where you can display that image.

Awesome! Thanks for your detailed recommendations!

Cheers