%load_ext wurlitzer
#^^^ the wurlitzer extension captures C/C++ output so it can be displayed in the notebook
#^^^ this is very useful for debugging, but it doesn't work on Windows
import time
from klampt import *
from klampt.math import vectorops,so3,se3
from klampt import vis
from klampt.vis.ipython import EditConfig,EditPoint,EditTransform
world = WorldModel()
world.loadFile("../data/simulation_test_worlds/sensortest.xml")
vis.add("world",world)
robot = world.robot(0)
cam = robot.sensor('rgbd_camera')
vis.show()
#Controls:
#left mouse click to rotate the view
#right click or ctrl+click to pan the view
#mouse wheel or shift+click to zoom the view
vis.edit(("world",robot.getName()))
#this call renders the scene... since we are in a Jupyter notebook, this falls back to slow software rendering
cam.kinematicSimulate(world,0.01)
#the sensing module converts the sensed measurements to NumPy arrays
from klampt.model import sensing
rgb,depth = sensing.camera_to_images(cam)
%matplotlib inline
from matplotlib import pyplot as plt
fig,axs = plt.subplots(1,2,figsize=(14,4))
axs[0].imshow(rgb)
axs[1].imshow(depth)
plt.show()
Hmm... nothing is showing up! Let's debug by checking where the sensor is located.
xform = cam.getTransformWorld()
vis.add("cam_xform",xform)
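To inspect the pose numerically, we can also print the camera's position and view direction. (A quick sketch, assuming Klampt's usual camera convention of x right, y down, z forward, i.e., the camera looks along its local +z axis.)
#print the camera pose numerically; under the z-forward convention, the view
#direction in world coordinates is the rotated local z axis
print("camera position:",xform[1])
print("camera view direction:",so3.apply(xform[0],[0,0,1]))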
Scroll back up to the visualization window... Aha, the camera is pointing straight up! Let's edit the robot configuration so that the end effector points toward the ball. On the sliders above, choose Link 3 and set its joint position to -1.5 (or set it in code, as sketched below).
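If you'd rather set the configuration programmatically than with the sliders, a minimal sketch using the standard RobotModel API:
q = robot.getConfig()
q[3] = -1.5   #point the end effector toward the ball
robot.setConfig(q)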
Now re-run the cell with cam.kinematicSimulate() above, and then re-run the cells that generate the matplotlib figures. You should now see the ball! Try moving the link around some more and re-generating the images.
Software emulation is fine for small images, but let's see what happens when the sensor is larger:
#print out all of the camera's configurable settings
settings = cam.settings()
for k in settings:
    try:
        print(k,":",cam.getSetting(k))
    except Exception:
        #some settings can't be read as strings; skip them
        pass
#bump the resolution up to 640 x 480
cam.setSetting('xres',str(640))
cam.setSetting('yres',str(480))
import time
t0 = time.time()
cam.kinematicSimulate(world,0.01)
t1 = time.time()
print("Rendered 640 x 480 image in time",t1-t0)
Hmm... this is much too slow. Let's try something else.
Let's set up hardware rendering using OpenGL. Before making any sensor simulation calls, you will need to set up an OpenGL window. Klampt comes with PyQt and GLUT visualization backends that will do this for you. Note that you will need to do this on a local machine; it won't work in Colab or JupyterHub.
Restart the kernel, then run the cells below. The first time the sensor is simulated, it may take a little extra time while the data structures on your GPU are set up. Afterwards, rendering should be quite fast (tens of milliseconds).
import time
from klampt import *
from klampt.math import vectorops,so3,se3
from klampt import vis
from klampt.vis.ipython import EditConfig,EditPoint,EditTransform
world = WorldModel()
world.loadFile("../data/simulation_test_worlds/sensortest.xml")
vis.add("world",world)
robot = world.robot(0)
q = robot.getConfig()
q[3] = -1.6
robot.setConfig(q)
cam = robot.sensor('rgbd_camera')
cam.setSetting('xres',str(640))
cam.setSetting('yres',str(480))
#key call: using GLUT or PyQt to open an OpenGL window
#vis.init("GLUT")
vis.init("PyQt")
vis.add("world",world) #not strictly necessary -- the simulated camera doesn't "see" the visualization window; it renders the world model directly
vis.show()
#since GPU packages don't work nicely with multithreading, we will need to call the sensor
#simulation inside the visualization thread using vis.threadCall(func)
def simCam():
    t0 = time.time()
    cam.kinematicSimulate(world,0.01)
    t1 = time.time()
    print("Rendered 640 x 480 image in time",t1-t0)
vis.threadCall(simCam)
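To see the warm-up effect mentioned above, you can queue the simulation a second time; once the GPU data structures have been initialized, the reported time should drop to tens of milliseconds:
vis.threadCall(simCam)  #the second render should be much faster than the first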
%matplotlib inline
from klampt.model import sensing
rgb,depth = sensing.camera_to_images(cam)
from matplotlib import pyplot as plt
fig,axs = plt.subplots(1,2,figsize=(14,4))
axs[0].imshow(rgb)
axs[1].imshow(depth)
plt.show()
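As a final aside, the sensing module can also convert the RGB-D measurements into a point cloud. A minimal sketch using sensing.camera_to_points (with default arguments this should return an N x 6 NumPy array of x,y,z,r,g,b values in the camera's local frame; check your Klampt version's documentation for the exact format):
#convert the depth + color measurements to a colored point cloud (camera frame)
pts = sensing.camera_to_points(cam)
print("point cloud shape:",pts.shape)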