Channel: ROS Answers: Open Source Q&A Forum - RSS feed
Viewing all 46 articles

displaying camera on rviz.

hi all, i'm trying to display a camera in rviz; the source is a camera in [Gazebo](http://www.ros.org/wiki/gazebo). rviz shows me:

> CameraInfo -> No CameraInfo received on [/finger_tip_cam/camera_info]. Topic may not exist.
> Image -> No image received.
> Transform [sender=/Gazebo] -> No CameraInfo received on [/finger_tip_cam/camera_info]. Topic may not exist.

What could the problem be?

Camera calibration and different resolutions

Under Fuerte it seems that if you change the resolution of a camera, the camera info no longer works. I thought the ROI would be adjusted accordingly, but that does not seem to happen, as described here: http://answers.ros.org/question/11613/camera-calibration-and-roi/ If the camera is calibrated again, the calibration file in ~/.ros/camera_info is simply overwritten. I am using Fuerte with uvc_camera. How can I easily switch between resolutions?

Getting Camera_Info from IP Camera

Hello, I kindly need your help. I am currently working on AR tag detection using an IP camera. I have already succeeded in publishing ROS Image messages from my IP camera by streaming from the camera URL using OpenCV, and the image can be shown in the rviz image display. However, I have no idea how to generate a camera_info message for this IP camera, since the images are accessed through a URL pointing to its MJPG stream. Any ideas?

nao camera transform problem rviz

Hello, I am currently working with the Nao and I am getting some really strange behavior when I use its camera. Step by step:

1.) If I only start the Nao camera (more specifically, the top camera), everything works fine. I am able to display the data in rviz as camera data. I use the correct fixed frame, "CameraTop_frame", and camera_info is published and received correctly. Everything works as expected.

Now the problem: if I add a static_transform to the launch file between the frames "world" and "CameraTop_frame" (the rest stays the same!), nothing works. rviz is not able to display the incoming data as camera data, even if I again choose "CameraTop_frame" as the fixed frame, like in the step before. rviz is still able to display the data as image data, using either the world frame or CameraTop_frame.

My conclusion is that something is wrong with the /camera_info data, which is needed for displaying the data as camera data. But how is it possible that a static transform somehow destroys the /camera_info data completely? The topic /camera_info is still there and data is published, as in step 1.). rviz even complains about the /camera_info: Warning: No CameraInfo received on ... Hoping for some help, because I really don't know what to do.

EDIT: I solved the problem by changing a line in the nao_camera.py script. It seems to have been some kind of timing problem. I changed the line `img.header.stamp = rospy.Time(stampAL)` to `img.header.stamp = rospy.Time.now()`. After that everything works perfectly.

how to find the 3d location of a given pixel (fixed monocular camera)

Given that I have the following: - known pixels from an undistorted image (taken by monocular camera) - fixed angle of inclination of camera - fixed height of camera and flat terrain (so the height from camera to ground is fixed) - intrinsic parameters of the camera (published over /camera_info), and also field-of-view angle How can I find the 3-d (well, 2-d really since height is fixed) location (in the real world relative to the camera) of each pixel? i.e. what ROS packages/nodes should I use that have that functionality? I've been searching for a solution within ROS for weeks (and I'm surprised that no one has needed this before). I attempted my own solution using some trigonometry, but it doesn't seem to work well (my equations must be off). Is something like [projectPixelTo3dRay](http://docs.ros.org/api/image_geometry/html/python/#image_geometry.PinholeCameraModel.projectPixelTo3dRay) in image_geometry what I'm looking for? Perhaps I simply don't know WHAT to look for when I'm searching... Can anyone point me in the right direction? I can't seem to figure this out on my own...
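For the fixed-camera case described above, the whole computation can be done from the intrinsics alone: back-project the pixel to a ray (which is what `projectPixelTo3dRay` does, up to scale) and intersect that ray with the ground plane. Below is a minimal numpy sketch under my own assumptions — a level floor, a purely downward camera tilt, and a rectified image; the function name and frame conventions are mine, not from any ROS package:

```python
import numpy as np

def pixel_to_ground(u, v, K, cam_height, tilt_rad):
    """Intersect the back-projected ray of pixel (u, v) with a flat floor.

    K          -- 3x3 intrinsic matrix (the K rows from /camera_info)
    cam_height -- camera height above the ground, metres
    tilt_rad   -- downward tilt of the optical axis, radians
    Returns (forward, left) on the ground relative to the camera,
    or None if the ray points at or above the horizon.
    """
    fx, fy = K[0][0], K[1][1]
    cx, cy = K[0][2], K[1][2]
    # Ray in the optical frame: x right, y down, z forward.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Rotate into a level frame whose y axis points straight down.
    c, s = np.cos(tilt_rad), np.sin(tilt_rad)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,   s],
                  [0.0,  -s,   c]])
    r = R @ ray
    if r[1] <= 0.0:            # ray never meets the floor
        return None
    t = cam_height / r[1]      # scale so the ray drops exactly cam_height
    return (t * r[2], -t * r[0])
```

A quick sanity check for the geometry: with the camera 1 m up and tilted 45° down, the principal pixel should land 1 m ahead of the camera.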

problem with camera info

Hi guys, I tried to load my camera info from a YAML file that I created with the camera calibrator. I wrote this code:

```cpp
if (info_manager_->validateURL("$(find camera)/webcam.yaml")) {
  ROS_INFO("success");
} else {
  ROS_ERROR("NEGATIVE");
}
```

but I couldn't access the YAML and I got the negative message. This is my YAML:

```yaml
image_width: 640
image_height: 480
camera_name: webcam
camera_matrix:
  rows: 3
  cols: 3
  data: [628.605769204058, 0, 330.525636957696, 0, 626.873332341329, 252.463246920169, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.0709132085265088, 0.102063002021842, -0.00113116095948083, -3.15823206599607e-05, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
  rows: 3
  cols: 4
  data: [620.937438964844, 0, 329.896849376601, 0, 0, 620.437438964844, 251.769990192599, 0, 0, 0, 1, 0]
```
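One likely culprit here: `$(find pkg)` is a roslaunch/xacro substitution and is never expanded inside a C++ string, so `validateURL` sees a literal `$(...)` path. `camera_info_manager` instead understands `file://` and `package://` URLs. As a small illustration (the helper name is mine), a launch-style path can be rewritten into the `package://` form the manager accepts:

```python
import re

def to_package_url(launch_path):
    """Rewrite a roslaunch-style "$(find pkg)/rest" path into the
    package:// URL form that camera_info_manager understands."""
    m = re.match(r'^\$\(find (\w+)\)/(.+)$', launch_path)
    if not m:
        raise ValueError("not a $(find ...) path: %r" % launch_path)
    return "package://%s/%s" % (m.group(1), m.group(2))
```

So in the snippet above, `validateURL("package://camera/webcam.yaml")` would be the form to try.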

xtion depth image size problem

Hi all, I can't work out why, when I use rostopic echo, the depth frame size is height 120 x width 160, but the camera info reports height 480 x width 640. This is causing problems when running depthimage_to_laserscan (as per my question here: http://answers.ros.org/question/190671/depthimage_to_laserscan-image-convertion-error/). Does anyone know why this might be, or how I can set it correctly? Many thanks, Mark

Implementing RegionOfInterest and do_rectify in camera driver for use with image_proc?

I'd like to implement support for RegionOfInterest in my camera driver, but I'm not sure how to do it correctly. I'd like image_proc to be able to subscribe to the ROI image and operate correctly, though it's not clear what correct operation would look like. There is some helpful description in the 'Raw and Rectified ROI' section of http://www.ros.org/reps/rep-0104.html#guidelines-for-camera-drivers but it isn't always clear where the responsibility for implementing a feature lies, or what numbers should be published in CameraInfo for use by image_proc.

If do_rectify is true, does that mean the associated ROI is in rectified coordinates? Should I calculate a much bigger raw ROI to send to the camera, using the bounding box of the distorted desired ROI boundary, to make sure there are no black pixels when the undistort happens later, but then publish the original desired ROI on the camera_info topic so that image_proc will select a window within the undistorted image that has no border? But then how would image_proc know where to place the distorted image in the distorted coordinate frame? It seems like there isn't enough information present (unless image_proc is running its own distort/undistort to get a mapping).

Right now I'm trying to set the camera_info ROI to the distorted ROI sent to the camera, but for some reason I have to divide the camera_info ROI x and y offsets by 2, otherwise the image drifts increasingly off-screen for larger x and y offsets (that could be a bug on my part), and I'm still not eliminating the undistort border. That could also be me improperly generating the distorted ROI, but I still suspect I'm publishing the wrong thing to camera_info.

The way I like to imagine the process is that image_proc would create a large black image the size of the sensor, place the subwindow image from the camera into it at the distorted coordinates, perform an undistort on the whole image, then window out a new ROI image where the user originally requested and publish that. But again, it seems that two sets of x/y offsets would have to change hands between the camera driver and image_proc to accomplish that. Or is it the responsibility of some other node downstream to compute the window within the image_proc-generated rectified image that matches the original ROI request with do_rectify set to true? Should I be able to toggle do_rectify on and off and see the visible parts of the image stay put while the distortion border moves around?

Viso2_ros without camera_info

Hi all, I'm trying to compute visual odometry using a stream of images (left and right) from a server. These images come without camera_info, but I have a calibration file:

```yaml
cam0:
  T_cam_imu:
  - [0.01486554298177506, 0.9995572490081733, -0.025774436697406162, 0.06522290953548025]
  - [-0.9998809296982875, 0.014967213324687367, 0.003756188357965682, -0.020706385492743767]
  - [0.004140296794218436, 0.025715529947969162, 0.999660727177951, -0.008054602460032234]
  - [0.0, 0.0, 0.0, 1.0]
  cam_overlaps: [1]
  camera_model: pinhole
  distortion_coeffs: [-0.28340811217029355, 0.07395907389290132, 0.00019359502856909603, 1.7618711454538528e-05]
  distortion_model: radtan
  intrinsics: [458.6548807207614, 457.2966964634893, 367.2158039615726, 248.37534060980727]
  resolution: [752, 480]
  rostopic: /cam0/image_raw
cam1:
  T_cam_imu:
  - [0.012555267089114197, 0.9995987811512158, -0.025389800891792866, -0.04490198068250312]
  - [-0.9997550997230579, 0.013011905181515238, 0.017900583825283624, -0.02056977125889907]
  - [0.018223771455430235, 0.025158836311545446, 0.999517347077773, -0.008638135126028613]
  - [0.0, 0.0, 0.0, 1.0]
  T_cn_cnm1:
  - [0.9999972564778362, 0.002312067192393073, 0.00037600810233414045, -0.11007380812712879]
  - [-0.0023171357232427163, 0.9998980485068706, 0.014089835846681338, 0.0003991215470570765]
  - [-0.00034339312053936386, -0.014090668452710595, 0.9999006626379018, -0.0008537025033537094]
  - [0.0, 0.0, 0.0, 1.0]
  cam_overlaps: [0]
  camera_model: pinhole
  distortion_coeffs: [-0.283683654496499, 0.07451284309294819, -0.00010473894909809367, -3.555907002742513e-05]
  distortion_model: radtan
  intrinsics: [457.5874266035361, 456.13442556023665, 379.9994465203525, 255.2381853862733]
  resolution: [752, 480]
  rostopic: /cam1/image_raw
```

Well, I know that I need to rectify the images (before sending everything to viso2) with stereo_image_proc, but without a camera_info stream I can't do it. I just have the image_raw pair, nothing else. Do you have any suggestions? Thanks a lot

camera_info override

Hi! I've been using an Android app to pipe the camera image into ROS. This works moderately well, even though there is some lag in getting the image. The problem is that the published /camera/camera_info is all empty (see below). Is there a way I can override the camera_info topic with the calibration data I've gathered, and bypass what the AndroidCameraViewer driver reports? Thank you.

Output of `rostopic echo /camera/camera_info`:

```yaml
header:
  seq: 7485
  stamp:
    secs: 1412290798
    nsecs: 453000000
  frame_id: camera
height: 960
width: 1280
distortion_model: ''
D: []
K: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
R: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
P: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
```
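A common workaround is a small relay node: subscribe to the driver's camera_info, overwrite the calibration fields with your own, and republish on a new topic. Here is a sketch of just the merge step, using plain dicts whose keys mirror the sensor_msgs/CameraInfo fields (in a real node this would run inside the subscriber callback, keeping the driver's header so the stamps stay synchronized with the images):

```python
def override_camera_info(incoming, calib):
    """Keep the driver's header and image size, but replace the
    calibration fields with values from your own calibration."""
    out = dict(incoming)  # header, height, width stay as published
    for field in ('distortion_model', 'D', 'K', 'R', 'P'):
        out[field] = calib[field]
    return out
```

The key detail is copying the incoming header unchanged: rviz and image_proc match camera_info to images by timestamp, so the relay must not restamp the message.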

SetCameraInfo successfully called but no change in /camera_info topic

Hello there, I am trying to write a little C++ package to apply custom-made calibration files on my Indigo system, using the following code (inspired by [this](http://answers.ros.org/question/43623/how-to-call-service-set_camera_info-of-gscam-package/) thread):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/CameraInfo.h>
#include <sensor_msgs/SetCameraInfo.h>
#include <camera_calibration_parsers/parse.h>
#include <sstream>
#include <string>

using namespace std;

int main(int argc, char **argv)
{
  ros::init(argc, argv, "apply_camera_info");

  if (argc != 3)
  {
    ROS_INFO("usage: apply_camera_info <camera> <calibration_file>");
    return 1;
  }

  ros::NodeHandle n;
  std::ostringstream oss;
  oss << argv[1] << "set_camera_info";
  std::string s = oss.str();
  ros::ServiceClient client = n.serviceClient<sensor_msgs::SetCameraInfo>(s);

  std::string camera_name = argv[1];
  std::string filename = argv[2];
  sensor_msgs::CameraInfo camera_info;
  camera_calibration_parsers::readCalibration(filename, camera_name, camera_info);

  sensor_msgs::SetCameraInfo srv;
  srv.request.camera_info = camera_info;

  if (client.call(srv))
  {
    std::ostringstream sss;
    sss << srv.request.camera_info;
    ROS_INFO("%s", sss.str().c_str());
    ROS_INFO("Calibration successfully applied.");
  }
  else
  {
    ROS_ERROR("Failed to call service /set_camera_info");
    return 1;
  }

  return 0;
}
```

Actually the code seems to work fine: ROS_INFO prints the correct camera_info values after running the command. However, when I run `rostopic echo /camera_info` in another tab, I still get the old values, so no update is actually taking place. Would you have any suggestions for me? By the way: I am using a uEye camera. Thanks, Ben

Best way to publish camera_info information after stereo calibration

Hello all; I am using a stereo setup with a VRmagic camera ([VRmagic D3 intelligent camera](https://www.vrmagic.com/imaging/camera-platforms/d3-intelligent-platform/)). This camera runs a version of Ubuntu Linux inside and publishes both image topics via the rosbridge server. With the camera calibration node it is possible to calibrate this stereo setup and obtain distortion parameters for each camera. The thing is, once you have these parameters, what is the best way to publish them as a camera_info topic so they can be accessed along with the images for further processing? Is there a node providing such functionality, or is custom code needed? Thank you very much in advance,

Camera_info for a camera publishing already rectified images

I have a camera which gives me images that are already rectified. I'm writing the ROS driver for it and I'm wondering what I should do with the camera_info message. Should I:

- Calibrate the camera and publish the intrinsic parameters in the camera_info message
- Calibrate the camera and publish the projection matrix, but set the distortion parameters to 0
- Not calibrate, and set all the camera_info parameters to 0

For the record, I do not have access to the pre-rectified image, only the final result.
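The usual advice (roughly the second option in the list above) is to calibrate anyway so you know fx, fy, cx, cy, then publish zero distortion, an identity rectification matrix, and a projection matrix built directly from the intrinsics. A hedged sketch of the resulting CameraInfo fields, using a plain dict in place of the message:

```python
def rectified_camera_info(fx, fy, cx, cy, width, height):
    """CameraInfo-style fields for a camera that already outputs
    rectified images: zero distortion, identity rectification,
    and a projection matrix built straight from the intrinsics."""
    return {
        'width': width, 'height': height,
        'distortion_model': 'plumb_bob',
        'D': [0.0] * 5,                     # already rectified: no distortion
        'K': [fx, 0.0, cx,  0.0, fy, cy,  0.0, 0.0, 1.0],
        'R': [1.0, 0.0, 0.0,  0.0, 1.0, 0.0,  0.0, 0.0, 1.0],
        'P': [fx, 0.0, cx, 0.0,  0.0, fy, cy, 0.0,  0.0, 0.0, 1.0, 0.0],
    }
```

With these values image_proc's rectification becomes a no-op, which is exactly what you want when the camera has already done the work.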

how to publish camera_info manually

I have a rosbag file which publishes the stereo images (left_image and right_image), but there is no camera_info attached to it. I want to use the [stereo_image_proc](http://wiki.ros.org/stereo_image_proc) package to get point clouds. This node needs the images as well as camera_info to compute the disparity and point cloud. I have the calibration in .yaml format, but I don't know how to publish its contents on a camera_info topic for the left and right images (cameras). Also, what is the best way to do this with the least computation? Many thanks in advance
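The calibration YAML written by the camera_calibration tools maps almost one-to-one onto CameraInfo fields. A sketch of that mapping, applied to the dict you get from `yaml.safe_load()` on the file (in a real node you would run this once per camera, copy each image's header stamp onto the result, and publish it on the matching `.../camera_info` topic):

```python
def yaml_dict_to_camera_info(d):
    """Map a parsed camera-calibration YAML dict onto
    CameraInfo-style fields (plain dict standing in for the msg)."""
    return {
        'width':  d['image_width'],
        'height': d['image_height'],
        'distortion_model': d['distortion_model'],
        'D': d['distortion_coefficients']['data'],
        'K': d['camera_matrix']['data'],
        'R': d['rectification_matrix']['data'],
        'P': d['projection_matrix']['data'],
    }
```

This is cheap to run: parse once at startup, then republish the same message with updated stamps, so the extra computation is negligible.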

Time synchronization when using multiple cameras with image_transport

Hello, I am using multiple USB cameras for a project and am facing problems with time synchronization. I am currently using the image_transport library's subscribeCamera function to subscribe to camera_info and image_rect_color published by image_proc. While one of my cameras is working fine, the second camera is dropping frames because camera_info is published at a lower rate than the image messages. This is the warning I am receiving:

```
[ WARN] [1467164006.770029982]: [image_transport] Topics '/usb_cam_2/image_rect_color' and '/usb_cam_2/camera_info' do not appear to be synchronized. In the last 10s:
  Image messages received: 120
  CameraInfo messages received: 94
  Synchronized pairs: 6
```

I have seen a few related answers, such as this one: http://answers.ros.org/question/69925/sync-problem-on-image-and-camera_info-over-wireless/ but in my case one of the cameras is working fine with just subscribeCamera. I was wondering what the reason could be, especially for camera_info being published at a lower rate than the image messages, and whether this is something I can rectify without the ApproximateTime message filters suggested in the answer at that link. Thanks in advance!
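For background on why so few pairs survive: `subscribeCamera` pairs messages by exact timestamp, so any camera_info whose stamp does not exactly match an image is discarded, and mismatched publish rates make that the common case. The ApproximateTime policy instead matches stamps within a tolerance. A greatly simplified, greedy toy version of that idea (the real message_filters policy is more sophisticated):

```python
def approx_pairs(stamps_a, stamps_b, slop):
    """Match each stamp in stream A to the nearest unused stamp in
    stream B within `slop` seconds; unmatched stamps are dropped."""
    used = set()
    pairs = []
    for a in stamps_a:
        best = None
        for i, b in enumerate(stamps_b):
            if i in used:
                continue
            if abs(a - b) <= slop and (best is None or
                                       abs(a - b) < abs(a - stamps_b[best])):
                best = i
        if best is not None:
            used.add(best)
            pairs.append((a, stamps_b[best]))
    return pairs
```

With exact matching (slop of zero) the same streams would produce no pairs at all unless the driver stamps image and camera_info identically, which is why fixing the driver to publish both from one stamp is the cleanest cure.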

publish camerainfo: plumb_bob error! (python)

I'm trying to write a node that publishes a camera_info topic:

```python
#!/usr/bin/env python
import cv2
import numpy as np
import rospy
from sensor_msgs.msg import CameraInfo
import time
import yaml

def imput_yaml(calib_file):
    with file(calib_file, 'r') as f:
        params = yaml.load(f)
    cam_info = CameraInfo()
    cam_info.height = params['size']['height']
    cam_info.width = params['size']['width']
    cam_info.distortion_model = 'plumb_bob'
    cam_info.K = params['cameraMatrix']['data']
    cam_info.D = params['distortionCoefficients']['data']
    cam_info.R = params['rotation']['data']
    cam_info.P = params['projection']['data']
    return cam_info

def publisher(cam_info, cam_pub):
    stamp = rospy.Time.from_sec(time.time())
    cam_info.header.stamp = stamp
    cam_pub.publish(cam_info)

if __name__ == '__main__':
    rospy.init_node('tester')
    cam_info = imput_yaml('test.yaml')
    cam_pub = rospy.Publisher('testing', CameraInfo)
    try:
        while not rospy.is_shutdown():
            publisher(cam_info, cam_pub)
            rospy.sleep(1.0)
    except rospy.ROSInterruptException:
        pass
```

but then I get this error message:

```
File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/topics.py", line 856, in publish
    raise ROSSerializationException(str(e))
rospy.exceptions.ROSSerializationException: : 'pack expected 12 items for packing (got 16)' when writing 'plumb_bob'
```

Can anyone please help me?
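The exception itself is a clue: in sensor_msgs/CameraInfo, K and R are fixed 3x3 arrays (9 doubles) and P is fixed 3x4 (12 doubles); only D varies in length. "pack expected 12 items (got 16)" suggests a 16-element list (a 4x4 matrix from the YAML, perhaps) ended up in P, and rospy only notices while serializing, which is why the error surfaces near the adjacent 'plumb_bob' string. A small pre-publish checker (the helper name is mine):

```python
def check_camera_info_shapes(ci):
    """Return a list of problems with the fixed-size CameraInfo
    arrays: K and R must hold 9 doubles, P must hold 12."""
    problems = []
    for name, want in (('K', 9), ('R', 9), ('P', 12)):
        got = len(ci[name])
        if got != want:
            problems.append('%s has %d values, expected %d' % (name, got, want))
    return problems
```

Running this on the values loaded from the YAML before the first publish turns a cryptic serialization error into a direct message about which field is the wrong shape.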

Camera Display is not working

Hi everyone, I know this question has been asked quite a few times, but none of the answers seemed to help me. I have quite a few bagfiles containing images. Unfortunately, the CameraInfo message wasn't recorded, so I had to write a subscriber to the image and then a publisher which publishes the image and the camera info. All is fine; I can `rostopic echo` both topics and they are published fine. But unfortunately, I still can't see the camera in rviz. For camera info it says `No CameraInfo received on [/camera/camera_info]. Topic may not exist.`, which is exactly my camera info topic that is being published. And for the image it says `No image received`, even though I can clearly see it using an image display. In my publisher I'm using image_transport and camera_info_manager, and in my callback function I do:

```cpp
ci->header.stamp = msg->header.stamp;
ci->header.frame_id = msg->header.frame_id;
image_pub.publish(msg, ci);
```

so they should be synchronized, right? Any idea how to solve this is highly appreciated!

How to use camera_info_manager to publish camera_info

I am trying to calibrate my ov7251 camera using the camera_calibration package. I adapted a sample camera driver that I found which uses the UV4L API, so that I can publish images to the ROS topic /camera/image. I am also publishing to /camera/camera_info. However, what I need now is to be able to run the /camera/set_camera_info service so that I can use the calibration package. Here is my code for publishing to /camera/image and /camera/camera_info:

```cpp
int main(int argc, char **argv)
{
    dev_name = "/dev/video2";
    namedWindow("Camera Preview", WINDOW_AUTOSIZE); // Create a window for display.

    ros::init(argc, argv, "image_publisher");
    ros::NodeHandle nh;
    image_transport::ImageTransport it(nh);
    image_transport::Publisher pub = it.advertise("camera/image", 1);
    ros::Publisher pub_info = nh.advertise<sensor_msgs::CameraInfo>("camera/camera_info", 100);
    sensor_msgs::CameraInfo cam_info;

    open_device();
    init_device();
    start_capturing();

    while (nh.ok())
    {
        read_frame();
        if (!prev.empty())
        {
            cv::Mat image = prev;
            sensor_msgs::ImagePtr msg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", image).toImageMsg();
            ros::Rate loop_rate(5);
            cam_info.header.stamp = ros::Time::now();
            pub_info.publish(cam_info);
            pub.publish(msg);
            ros::spinOnce();
            loop_rate.sleep();
        }
    }
    ...
    return 0;
}
```

When I run `rostopic list`, this is what I get:

```
/camera/camera_info
/camera/image
/camera/image/compressed
/camera/image/compressed/parameter_descriptions
/camera/image/compressed/parameter_updates
/camera/image/compressedDepth
/camera/image/compressedDepth/parameter_descriptions
/camera/image/compressedDepth/parameter_updates
/camera/image/theora
/camera/image/theora/parameter_descriptions
/camera/image/theora/parameter_updates
/camera/image_raw
/rosout
/rosout_agg
/statistics
/svo/dense_input
/svo/image
/svo/image/compressed
/svo/image/compressed/parameter_descriptions
/svo/image/compressed/parameter_updates
/svo/image/compressedDepth
/svo/image/compressedDepth/parameter_descriptions
/svo/image/compressedDepth/parameter_updates
/svo/image/theora
/svo/image/theora/parameter_descriptions
/svo/image/theora/parameter_updates
/svo/info
/svo/keyframes
/svo/points
/svo/pose
/svo/remote_key
/tf
```

And when I run `rosservice list`:

```
/camera/image/compressed/set_parameters
/camera/image/compressedDepth/set_parameters
/camera/image/theora/set_parameters
/image_publisher/get_loggers
/image_publisher/set_logger_level
/rosout/get_loggers
/rosout/set_logger_level
/rqt_gui_py_node_13721/get_loggers
/rqt_gui_py_node_13721/set_logger_level
/rqt_gui_py_node_13807/get_loggers
/rqt_gui_py_node_13807/set_logger_level
/svo/get_loggers
/svo/image/compressed/set_parameters
/svo/image/compressedDepth/set_parameters
/svo/image/theora/set_parameters
/svo/set_logger_level
```

As you can see, I don't have a /set_camera_info service, which I think is the main problem I am having with trying to run the camera_calibration package. If anyone can provide me with a simple sample of code using camera_info_manager, it would be much appreciated.

EDIT: For now, I wish to simply initialize a camera_info_manager object so that I can use its functions to run the set_camera_info service, but I am not well versed in C++, so I am having some problems understanding the camera_info_manager tutorial.

How to get raw stereo camera calibration from left and right camera_info messages

Hi guys, I am trying to obtain the un-rectified stereo calibration (that is, the un-rectified extrinsics R and t, and the calibration matrix K for each camera) from the left_camera_info.yaml and right_camera_info.yaml configuration files. The only un-rectified information I can get straightforwardly from the configuration files is the intrinsic matrix K and the distortion parameters D, but I do not know how to get the un-rectified extrinsics R and T. I think I should use the R rotation rectification matrix and the projection matrix from the right configuration file, right_camera_info.yaml, but I am not sure how I should use them. Any suggestions? Thank you very much.

This is my **left_camera_info.yaml**:

```yaml
image_width: 672
image_height: 376
camera_name: narrow_stereo/left
camera_matrix:
  rows: 3
  cols: 3
  data: [348.522264, 0.000000, 344.483596, 0.000000, 348.449870, 188.808062, 0.000000, 0.000000, 1.000000]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.174497, 0.027127, -0.000359, 0.000457, 0.000000]
rectification_matrix:
  rows: 3
  cols: 3
  data: [0.999873, 0.000708, 0.015892, -0.000683, 0.999999, -0.001578, -0.015893, 0.001567, 0.999872]
projection_matrix:
  rows: 3
  cols: 4
  data: [347.652079, 0.000000, 339.375946, 0.000000, 0.000000, 347.652079, 198.590004, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000]
```

This is my **right_camera_info.yaml**:

```yaml
image_width: 672
image_height: 376
camera_name: narrow_stereo/right
camera_matrix:
  rows: 3
  cols: 3
  data: [349.569635, 0.000000, 340.836585, 0.000000, 349.390781, 206.105440, 0.000000, 0.000000, 1.000000]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.174606, 0.027855, -0.000108, -0.000141, 0.000000]
rectification_matrix:
  rows: 3
  cols: 3
  data: [0.999997, -0.000440, 0.002268, 0.000436, 0.999999, 0.001573, -0.002269, -0.001572, 0.999996]
projection_matrix:
  rows: 3
  cols: 4
  data: [347.652079, 0.000000, 339.375946, -41.272771, 0.000000, 347.652079, 198.590004, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000]
```
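For what it's worth, the rectification matrices already contain what you need: rectification rotates both cameras into a common frame, so the raw relative rotation and the baseline can be recovered from R1, R2 and the right projection matrix. Since the rectified frames coincide up to the baseline, R2·R = R1 gives R = R2ᵀ·R1, and ROS stores Tx = -fx·B in P2[0,3]. A numpy sketch under the OpenCV convention X_right = R·X_left + T (sign conventions differ between tools, so treat this as a starting point rather than a definitive recipe):

```python
import numpy as np

def unrectified_extrinsics(R1, R2, P2):
    """Recover raw stereo extrinsics from the per-camera
    rectification matrices and the right projection matrix.

    R1, R2 -- 3x3 rectification matrices (left and right)
    P2     -- 3x4 right projection matrix
    Returns (R, T) with X_right = R @ X_left + T.
    """
    R1, R2, P2 = map(np.asarray, (R1, R2, P2))
    R = R2.T @ R1                 # from R2 @ R = R1
    B = -P2[0, 3] / P2[0, 0]      # baseline: ROS stores Tx = -fx * B
    T = R2.T @ np.array([-B, 0.0, 0.0])
    return R, T
```

With your files, B comes out to 41.272771 / 347.652079, roughly 0.119 m, and the recovered R should be very close to (but not exactly) the identity, matching the small rectification rotations.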

Images from fisheye camera are displayed correctly with BGR8 encoding but not with MONO8

Hi, I want to output grayscale images from a fisheye camera (FOV < 180 degrees). I understand that ROS currently doesn't support fisheye [distortion models](http://docs.ros.org/kinetic/api/sensor_msgs/html/namespacesensor__msgs_1_1distortion__models.html). However, I see that if I use the BGR8 image encoding (by setting sensor_msgs::image_encodings::BGR8), I can still get a "normal" distorted image, like this: ![normal_BGR8](/upfiles/1524389791658917.png). But if I change to the MONO8 encoding, the image is unusable: ![weird_MONO8](/upfiles/15243898507167106.png) Both images were captured with my camera firmly attached to the table, so I don't understand why I can get a "normal" color image but not a grayscale one.