Blur Spot Limitations in Distal Endoscope Sensors


Avi Yaron

Visionsense Inc., Orangeburg, New York

Abstract

In years past, the picture quality of electronic video systems was limited by the image sensor. Today, the resolution of the image sensor can exceed the resolution of the optical system in imaging systems such as those employed in medical endoscopy. Visionsense exploits this "excess resolution" to create stereoscopic vision in surgical endoscopes that meet the needs of the medical market. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for the blur phenomenon, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantages of being compact, providing simultaneous acquisition of the left and right images, and offering resolution comparable to that of a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides a three-dimensional perspective of intra-operative sites that is crucial for successful minimally invasive surgery. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.

Keywords: plenoptic camera, stereoscopy, lenticular array, resolution

Introduction

Two physical factors determine the resolution of a digital camera: the objective lens and the pixel size of the image sensor. In general, the optical resolution of well-designed lenses (in the absence of size constraints, as in a consumer digital camera) is superior to the resolution of the most advanced image sensors, which therefore require the use of anti-aliasing filters. When size and cost are constrained, the sensor cost can be reduced by shrinking the chip area; on such small-format sensors the pixel size must also be reduced in order to maintain acceptable image resolution. For instance, PAL/NTSC signals are provided by ≤1/6 inch format chips with a pixel size of ≤3 μm.
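To make this tradeoff concrete, the sketch below compares the Nyquist limit of a sensor with the diffraction-limited cutoff of an ideal lens. It is a minimal illustration: the 3 μm pixel pitch follows the figure above, but the f-numbers and wavelength are assumptions chosen for the example, not values taken from this paper.

```python
# Sketch: compare the sensor's Nyquist frequency with the diffraction-limited
# cutoff of the optics. All numeric values are illustrative assumptions.

def sensor_nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """Nyquist limit of the sensor: 1 / (2 * pixel pitch)."""
    return 1000.0 / (2.0 * pixel_pitch_um)

def diffraction_cutoff_lp_per_mm(f_number: float, wavelength_nm: float = 550.0) -> float:
    """Incoherent diffraction cutoff of an ideal lens: 1 / (lambda * N)."""
    return 1.0 / (wavelength_nm * 1e-6 * f_number)

pitch = 3.0  # um, matching the <= 3 um pixels mentioned above
nyq = sensor_nyquist_lp_per_mm(pitch)

for n in (2.8, 16.0):  # fast lens vs. slow, high depth-of-field optics
    cutoff = diffraction_cutoff_lp_per_mm(n)
    limit = "optics" if cutoff < nyq else "sensor"
    print(f"f/{n}: sensor Nyquist {nyq:.0f} lp/mm, "
          f"optics cutoff {cutoff:.0f} lp/mm -> {limit}-limited")
```

At a fast aperture such as f/2.8, the lens out-resolves a 3 μm sensor (about 649 versus 167 lp/mm), which is why an anti-aliasing filter is needed; at a small aperture such as f/16, typical of high depth-of-field designs, the sensor out-resolves the optics (167 versus about 114 lp/mm), producing the "excess resolution" discussed next.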

In contrast, there are applications where the resolution of the image sensor is superior to the resolution of the optical system. These are usually devices with low-cost, low-quality optics, such as cell-phone cameras, or high depth-of-field applications such as medical endoscopy. Visionsense technology uses this "excess resolution" to great advantage to create stereoscopic vision in surgical endoscopes that meet the needs of the medical market.

Medical endoscopy

Minimally invasive surgery (MIS) began when surgeons classically trained in "open" surgical procedures started to use long, thin imaging devices and surgical instruments to perform operations through small incisions. Early on, many technical factors were downplayed or overlooked because endoscopic surgical procedures seemed, at the time, intuitively similar to open procedures. Experience, however, has shown that the use of endoscopic instruments requires unique eye-hand coordination. In particular, the use of video cameras and monitors greatly affects the perception of physical reality, and therefore performance.

Vision systems in currently available MIS instruments provide two-dimensional (2D) images that lack depth perception, which restricts the surgeon's perspective and ability to perform complex manipulations. Recently published clinical papers [1][2] have documented that severe errors made during laparoscopic procedures are due to critical misinterpretation of the video image, not simply errors in surgical technique. A review of surgical injuries to the common bile duct found that the damage was caused by misinterpretation of the endoscopic image and by incorrect decisions based on false perceptual information. In defense of the surgeons, many of us assume that our eyes are a reliable tool for interpreting reality, and we overlook the crucial fact that video images have many limitations that can create a false sense of genuineness.

The flatness of conventional 2D imaging does nothing to augment a surgeon's performance, whereas three-dimensional (3D) stereoscopic vision can enhance image understanding and improve performance in laparoscopic surgery. Our system enables natural vision without discomfort, and it affords better results and enhanced confidence among less-experienced surgeons. In addition, stereoscopic vision paves the way for the development of new MIS procedures that use sophisticated articulating surgical instruments.

Stereoscopic plenoptic camera

The idea of a stereoscopic camera based on a lenticular array (an "integral fly-eye") was first proposed by the French physicist G. Lippmann in 1908 [3]. The Visionsense adaptation of this stereoscopic plenoptic camera is shown in Figure 1: the imaging objective is represented by a single lens (L) with two pupil openings at the front focal plane (P). This arrangement creates a telecentric objective, in which all the light rays passing through the center of each pupil emerge as a parallel beam behind the lens. The CCD chip is covered by a lenticular array (LA), an array of cylindrical microlenses (with optical power only in the plane of the page) whose axes are perpendicular to the paper plane. Each lenticule covers exactly two pixel columns. Rays that pass through a point at the left aperture (l) emerge as a parallel beam (dashed lines in the drawing) behind the imaging lens; these rays are focused by the lenticular array onto the pixels on the right side under the lenslets (designated by dark rectangles). Similarly, rays that pass through the right aperture (r) (dashed-dotted lines) are focused by the lenslets onto the left ("white") pixels. Thus a point O on the object is imaged twice: once through the upper aperture, generating an image on pixel O1, and once through the lower pupil, generating an image on pixel O2. The pixels O1 and O2 are the upper and lower views (in the real world, the left and right views) of the point O on the object. The distance between a pixel of the left view and that of the right view (the disparity) is a function of the distance of the corresponding point from the camera. The drawing in Figure 2 emphasizes the relative alignment of the pupils, the LA, and the image sensor pixels.
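The column multiplexing and the disparity-to-depth relation described above can be summarized in a short sketch. This is a minimal illustration assuming an ideal, calibrated geometry: the demultiplexing helper and the standard triangulation formula Z = f * b / d are generic stereo constructs, and the numerical values (focal length, pupil separation, pixel pitch) are assumptions for the example, not parameters from this paper.

```python
import numpy as np

def demultiplex(raw: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a column-interleaved raw frame into left and right views.

    Under each cylindrical lenslet, one pixel column sees one pupil and
    the adjacent column sees the other, so the two views are simply the
    even and odd columns of the raw frame. Which pupil maps to which
    column parity depends on the actual pupil/lenslet alignment.
    """
    left = raw[:, 0::2]   # columns illuminated through one pupil
    right = raw[:, 1::2]  # columns illuminated through the other pupil
    return left, right

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_length_mm: float,
                         baseline_mm: float,
                         pixel_pitch_mm: float) -> np.ndarray:
    """Standard stereo triangulation Z = f * b / d.

    `baseline_mm` is the separation of the two pupil centers; disparity
    is expressed in raw-sensor column units for simplicity.
    """
    d_mm = disparity_px * pixel_pitch_mm
    with np.errstate(divide="ignore"):
        return focal_length_mm * baseline_mm / d_mm

# Illustrative (assumed) numbers for a miniature endoscopic objective:
raw = np.arange(4 * 8, dtype=float).reshape(4, 8)  # stand-in raw frame
left, right = demultiplex(raw)                     # two 4x4 half-width views
z = depth_from_disparity(np.array([20.0]),         # 20-column disparity
                         focal_length_mm=4.0, baseline_mm=1.5,
                         pixel_pitch_mm=0.003)     # 3 um pixels
print(left.shape, right.shape, z)                  # (4, 4) (4, 4) [100.]
```

The key point is that both views are captured in a single exposure on one chip, each at half the horizontal pixel count, which is how the scheme trades the sensor's excess horizontal resolution for depth resolution.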

[Figure 1: Dual-pupil telecentric objective (L) with pupil openings at the front focal plane (P) and a lenticular array (LA) over the image sensor.]

...
