Friday, April 30, 2010

Paper watch: photodetector research at IBM

Two recent papers from IBM's T.J. Watson Research Center on photodetectors for optical communications at Nature and Nature Photonics:

"Reinventing germanium avalanche photodetector for nanophotonic on-chip optical interconnects"
Integration of optical communication circuits directly into high-performance microprocessor chips can enable extremely powerful computer systems. A germanium photodetector that can be monolithically integrated with silicon transistor technology is viewed as a key element in connecting chip components with infrared optical signals. Such a device should have the capability to detect very-low-power optical signals at very high speed. Although germanium avalanche photodetectors (APD) using charge amplification close to avalanche breakdown can achieve high gain and thus detect low-power optical signals, they are universally considered to suffer from an intolerably high amplification noise characteristic of germanium. High gain with low excess noise has been demonstrated using a germanium layer only for detection of light signals, with amplification taking place in a separate silicon layer. However, the relatively thick semiconductor layers that are required in such structures limit APD speeds to about 10 GHz, and require excessively high bias voltages of around 25 V. Here we show how nanophotonic and nanoelectronic engineering aimed at shaping optical and electrical fields on the nanometre scale within a germanium amplification layer can overcome the otherwise intrinsically poor noise characteristics, achieving a dramatic reduction of amplification noise by over 70 per cent. By generating strongly non-uniform electric fields, the region of impact ionization in germanium is reduced to just 30 nm, allowing the device to benefit from the noise reduction effects that arise at these small distances. Furthermore, the smallness of the APDs means that a bias voltage of only 1.5 V is required to achieve an avalanche gain of over 10 dB with operational speeds exceeding 30 GHz. Monolithic integration of such a device into computer chips might enable applications beyond computer optical interconnects—in telecommunications, secure quantum key distribution, and subthreshold ultralow-power transistors.
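For context on why a 30 nm multiplication region helps, the excess noise of a conventional APD is often described by McIntyre's local model, F(M) = kM + (1 − k)(2 − 1/M); bulk germanium has an ionization ratio k close to one, which is why it is normally considered noisy, while the dead-space effects in a nanoscale multiplication region act like a much smaller effective k. The sketch below only illustrates this dependence; the k values are assumptions, not numbers from the paper.

```python
# Hedged sketch: the classic McIntyre excess-noise factor F(M) = k*M + (1 - k)*(2 - 1/M),
# used here only to show why a lower effective ionization ratio k reduces excess noise.
# The k values are illustrative assumptions, not values from the paper.

def excess_noise(M, k):
    """McIntyre excess-noise factor for mean avalanche gain M and ionization ratio k."""
    return k * M + (1.0 - k) * (2.0 - 1.0 / M)

M = 10.0  # roughly the 10 dB of avalanche gain quoted in the abstract
for k in (0.9, 0.5, 0.2):
    print(f"k = {k:.1f}:  F(M={M:.0f}) = {excess_noise(M, k):.2f}")
```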

"Graphene photodetectors for high-speed optical communications"
Although silicon has dominated solid-state electronics for more than four decades, a variety of other materials are used in photonic devices to expand the wavelength range of operation and improve performance. For example, gallium-nitride based materials enable light emission at blue and ultraviolet wavelengths, and high index contrast silicon-on-insulator facilitates ultradense photonic devices. Here, we report the first use of a photodetector based on graphene, a two-dimensional carbon material, in a 10 Gbit s−1 optical data link. In this interdigitated metal–graphene–metal photodetector, an asymmetric metallization scheme is adopted to break the mirror symmetry of the internal electric-field profile in conventional graphene field-effect transistor channels, allowing for efficient photodetection. A maximum external photoresponsivity of 6.1 mA W−1 is achieved at a wavelength of 1.55 µm. Owing to the unique band structure of graphene and extensive developments in graphene electronics and wafer-scale synthesis, graphene-based integrated electronic–photonic circuits with an operational wavelength range spanning 300 nm to 6 µm (and possibly beyond) can be expected in the future.
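As a rough sanity check (not a figure from the paper), the reported responsivity can be converted into an external quantum efficiency:

```python
# Back-of-the-envelope conversion of the reported external photoresponsivity into an
# external quantum efficiency via eta = R * h * c / (e * lambda).

h = 6.626e-34         # Planck constant, J*s
c = 2.998e8           # speed of light, m/s
e = 1.602e-19         # elementary charge, C

R = 6.1e-3            # A/W, reported maximum external responsivity
wavelength = 1.55e-6  # m

eta = R * h * c / (e * wavelength)
print(f"external quantum efficiency ~ {100 * eta:.2f} %")  # roughly 0.5 %
```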

Paper watch: Sharper focus by random scattering

The paper at Nature Photonics: "Exploiting disorder for perfect focusing" (subscription required).

The paper is also summarized here. Some extracted paragraphs:

In the field of optics, three different approaches have recently been demonstrated that focus monochromatic waves in a strongly scattering medium. The first is an elegant and versatile method that is based on the use of spatial light modulators, which allow the phase of an optical field to be controlled over thousands of pixels (64 × 64 square segments in the experiment) through a learning feedback algorithm. As a lens focuses a beam of light through a strongly scattering medium onto a CCD camera, the spatial light modulator shapes the wavefront of the light that impinges on the lens. The algorithm then adjusts the relative phases of the segments so that the transmitted light interferes constructively at a given focal point. Maximizing the intensity at the focal spot, the feedback process turns into a matched filter of the wave transfer function, and therefore the shaped wavefront is practically identical to the one given by the time-reversal solution — a wavefield phase conjugation for a monochromatic wave. The second approach is based on the measurement of the transmission matrix of the scattering medium with light modulators. A third approach uses optical phase conjugation through nonlinear optics, in which a transmitted light field is forced to retrace its trajectory through a strongly scattering material in order to reconstruct the source.
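The sequential feedback algorithm described in the first approach is simple to sketch: step the phase of one SLM segment at a time and keep the value that maximizes the intensity at the target. The toy model below assumes the target field is a random linear combination of the segment contributions; the segment count and phase granularity are arbitrary, and this is not the authors' code.

```python
# Toy model of the sequential feedback optimization: the field at the target pixel is
# taken to be sum_n t_n * exp(i * phi_n), with t_n a random complex transmission
# coefficient of the scattering medium (a simple random-matrix assumption, not the
# authors' model). Segment count and number of phase steps are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_segments = 256
t = (rng.normal(size=n_segments) + 1j * rng.normal(size=n_segments)) / np.sqrt(2)

phases = np.zeros(n_segments)                       # phase pattern written to the SLM
steps = np.linspace(0, 2 * np.pi, 16, endpoint=False)

def target_intensity(phi):
    """Intensity at the chosen focal point for a given SLM phase pattern."""
    return abs(np.sum(t * np.exp(1j * phi))) ** 2

print("intensity before optimization:", round(target_intensity(phases), 1))
for n in range(n_segments):                         # one pass over all segments
    candidates = [np.where(np.arange(n_segments) == n, s, phases) for s in steps]
    phases = max(candidates, key=target_intensity)  # keep the phase that maximizes intensity
print("intensity after optimization: ", round(target_intensity(phases), 1))
```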

Implementing the wavefront correction method using the spatial light modulators and feedback algorithm mentioned above, Vellekoop and colleagues now clearly demonstrate how the width of the focus is reduced in the presence of a random scattering layer. They carry out the experiment at a wavelength of 632.8 nm, and use a single 6.45 μm × 6.45 μm CCD pixel as a target. A lens with a diameter D1 of 2.1 mm and a focal length of 200 mm is used. In a clean environment, the lens has a diffraction-limited spot size of 76 μm at full-width half-maximum. A disordered medium made of a 6 μm layer of opaque white airbrush paint is placed after the lens.

To analyse the effect, the researchers place the random layer at different distances f2 from the CCD camera and apply the wavefront correction to the spatial light modulator to shape the wavefront of the light transmitted through the lens. They observe a decrease in the width of the focus spot as the opaque layer is moved closer to the CCD camera. At distances of 25 mm or smaller, the focus spot becomes smaller than a single camera pixel. More specifically, the width of the spot is approximately one-tenth of the diffraction limit of the lens in a clean environment [...]. The team show both experimentally and theoretically that it is the scattering medium, rather than the lens or the quality of the reconstruction process, that determines the width of the focus — a surprising property of scattered light that agrees with primary observations in ultrasound experiments. More quantitatively, they demonstrate that the new focus is always an Airy disc with a full-width half-maximum determined by an effective numerical aperture f2/D2, where f2 and D2 are the distance and diameter of the illuminated area of the scattering layer, respectively. In essence, the whole illuminated area of the scattering medium behaves as a coherent focusing lens.
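The quoted scaling can be illustrated with a quick order-of-magnitude estimate, using the common 1.03·λ·f/D approximation for the Airy-disc FWHM; the illuminated-layer diameter D2 below is an assumed value, not one reported in the paper.

```python
# Order-of-magnitude check of the scaling described above, using the common
# FWHM ~ 1.03 * lambda * f / D approximation for an Airy disc. D2, the diameter of the
# illuminated area on the scattering layer, is an assumed value.
wavelength = 632.8e-9          # m
f1, D1 = 200e-3, 2.1e-3        # lens focal length and diameter (from the summary)
f2, D2 = 25e-3, 3.0e-3         # layer-to-camera distance (from the summary) and assumed D2

fwhm_lens = 1.03 * wavelength * f1 / D1
fwhm_layer = 1.03 * wavelength * f2 / D2
print(f"clean-lens focus ~ {1e6 * fwhm_lens:.0f} um FWHM")    # same order as the quoted 76 um
print(f"scattered focus  ~ {1e6 * fwhm_layer:.1f} um FWHM")   # below the 6.45 um camera pixel
```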

Wednesday, April 28, 2010

The first negative index metamaterial to operate at visible frequencies

Press release at Photonics Online.

Paper at Nature Materials (subscription required): "A single-layer wide-angle negative-index metamaterial at visible frequencies".

From the abstract:
Metamaterials are materials with artificial electromagnetic properties defined by their sub-wavelength structure rather than their chemical composition. Negative-index materials (NIMs) are a special class of metamaterials characterized by an effective negative index that gives rise to such unusual wave behaviour as backwards phase propagation and negative refraction. These extraordinary properties lead to many interesting functions such as sub-diffraction imaging and invisibility cloaking. So far, NIMs have been realized through layering of resonant structures, such as split-ring resonators, and have been demonstrated at microwave to infrared frequencies over a narrow range of angles-of-incidence and polarization. However, resonant-element NIM designs suffer from the limitations of not being scalable to operate at visible frequencies because of intrinsic fabrication limitations, require multiple functional layers to achieve strong scattering and have refractive indices that are highly dependent on angle of incidence and polarization. Here we report a metamaterial composed of a single layer of coupled plasmonic coaxial waveguides that exhibits an effective refractive index of −2 in the blue spectral region with a figure-of-merit larger than 8. The resulting NIM refractive index is insensitive to both polarization and angle-of-incidence over a ±50° angular range, yielding a wide-angle NIM at visible frequencies.
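To make the headline numbers concrete, here is a small Snell's-law illustration of what an index of about −2 means for refraction angles, together with the loss level implied by the figure of merit (taken here as the common convention |Re(n)|/Im(n), which may differ from the paper's exact definition):

```python
# Snell's-law illustration of negative refraction for the reported index of about -2,
# plus the loss implied by the quoted figure of merit, taken here as the common
# convention FOM = |Re(n)| / Im(n) (an assumption, not necessarily the paper's definition).
import math

n1, n2 = 1.0, -2.0                 # air into the negative-index metamaterial
for deg in (10, 30, 50):           # the paper quotes wide-angle operation up to +/-50 degrees
    theta1 = math.radians(deg)
    theta2 = math.degrees(math.asin(n1 * math.sin(theta1) / n2))
    print(f"incidence {deg:2d} deg -> refraction {theta2:6.1f} deg (opposite side of the normal)")

fom = 8.0
print(f"FOM > {fom:.0f} implies Im(n) < {abs(n2) / fom:.2f}")
```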

TSMC earnings call

Via Seeking Alpha.

Extracts:
[...] by technology, total wafer sales from 0.13 micron and below accounted for 71% of our total wafer sales, representing a 1 percentage point increase from last quarter. Meanwhile, the combined revenue from 40 nanometer and 65 nanometer already accounted for 41% of our total wafer sales. For 40 nanometer alone, revenue grew strongly in the first quarter as a result of strong customer demand and continued yield improvement. The 40 nanometer contribution jumped to 14% of our total wafer sales in the first quarter from 9% in the fourth quarter of '09. For 65 nanometer, the revenue contribution was 27%. Meanwhile, 90 nanometer and 0.13 micron represented 17% and 13% of our total wafer sales, respectively.
[...]
As you can see, 40 nanometer is already in production. You have probably heard of TSMC's struggle with a yield issue last year. Now, all these problems are behind us. We are doing very well. The defect density at this stage is as good as or better than previous technologies at the same point after we released the technology.
[...]
For up to 20 nanometer we will continue to use planar transistors. Starting from 14 nanometer we will shift to the so-called FinFET transistor structure, the three-dimensional structure.
[...]
Moving forward, it is likely we will add more strain engineering, and we will begin to use germanium or gallium arsenide as channel materials to enhance mobility.
[...]
So from a physics point of view, especially for the transistor, Moore's Law will be able to extend to 7 nanometer based on everything we already know today.
[...]
CMOS image sensors. We are working with customers on a 1.19 micron pixel, and TSMC is the first to introduce this thing called BSI technology, backside illumination. There are certain advantages if we shine light from the backside, and in this case we have to thin the wafer down to 3 microns. The handling and the technology are extremely difficult. We have already shipped products using this BSI technology on 8-inch wafers, and we are working on 12-inch wafers right now.
Next, for the power devices: we show an example of this technology giving a breakdown voltage from 700 volts to 850 volts. And in this area there are many, many different applications, for 12 volts, for 16 volts, and each requires a different way of optimizing these devices.
And next is the MEMS technology. TSMC takes a special approach: we make the CMOS on one wafer, the MEMS on another wafer and the package on a third wafer, and then we bond them together. This allows us to optimize all three of them independently and then, finally, put them together.
Next please. On the package side I would just like to show you one example. We have begun to work on 2D and 3D integration. Looking forward, especially after Moore's Law begins to slow down, we will still need a solution for system integration, and 2D/3D integration using silicon as a substrate allows us to make the entire system into a very small package with high performance and low power. So we have begun to work on 3D stacking and on a silicon interposer for 2D integration.
[...]
Related: EETimes on UMC's numbers.

Towards Variable Post-Capture Space, Angle and Time Resolution in Photography

At the Nuit Blanche blog there's a post with an overview of a project for a camera with variable post-capture space, angle and time resolution. It seems not too far from some EDoF techniques. Here's a video explaining the paper:

And here is the link to the project's homepage.

STI's advanced packaging technology

Via "Advanced Packaging", an article from STI Electronics on their embedded packaging technologies: "Advanced packaging technologies: embedding components for increased reliability".

Saturday, April 24, 2010

Paper watch: Charge diffusion and crosstalk in BSI CCDs

From SPIE's Optical Engineering (subscription required): "Charge diffusion in the field-free region of charge-coupled devices". The abstract reads (my highlight):
The potential well in back-illuminated charge-coupled devices (CCDs) does not reach all the way to the back surface. Hence, light that is absorbed in the field-free region generates electrons that can diffuse into neighboring pixels and thus decrease the spatial resolution of the sensor. We present data for the charge diffusion from a near point source by measuring the response of a back-illuminated CCD to light emitted from a submicron-diameter glass fiber tip. The diffusion of electrons into neighboring pixels is analyzed for different wavelengths of light ranging from 430 to 780 nm. To find out how the charge spreading into other pixels depends on the location of the light spot, the fiber tip could be moved with a piezoelectric translation stage. The experimental data are compared to Monte Carlo simulations and an analytical model of electron diffusion in the field-free region. The presented analysis can be used to predict the charge diffusion in other back-illuminated sensors, and the experiment is universally applicable to measure any type of sensor.
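A toy version of the kind of Monte Carlo mentioned in the abstract is easy to set up: electrons random-walk in a field-free layer, reflect at the back surface, are collected when they reach the depleted region, and their lateral end points are binned into pixels. All dimensions below are illustrative assumptions, not the device parameters of the paper.

```python
# Toy Monte Carlo in the spirit of the simulations mentioned above (not the authors'
# model): electrons random-walk in a field-free layer, reflect at the back surface,
# are collected once they reach the depleted region, and their lateral positions are
# binned into pixels. All dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
thickness = 8e-6        # field-free layer thickness (assumed)
depth0 = 6e-6           # generation depth below the collecting plane (assumed)
step = 0.5e-6           # random-walk step length (assumed)
pitch = 10e-6           # pixel pitch (assumed)
n_electrons = 500

collected_x = []
for _ in range(n_electrons):
    x, y, z = 0.0, 0.0, depth0
    while z > 0.0:                          # z = 0 marks the depleted (collecting) region
        d = rng.normal(size=3)
        d *= step / np.linalg.norm(d)       # isotropic step of fixed length
        x, y, z = x + d[0], y + d[1], z + d[2]
        if z > thickness:                   # reflection at the back surface
            z = 2 * thickness - z
    collected_x.append(x)

pixel_index = np.round(np.array(collected_x) / pitch).astype(int)
for p in range(-2, 3):
    print(f"pixel {p:+d}: {100 * np.mean(pixel_index == p):5.1f} % of the collected charge")
```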

The world's smallest, lightest telemedicine microscope

Unfortunately, I have access neither to the paper nor to the other papers listed at the research group's website. I will have to wait for the accepted Applied Physics Letters paper to check the details. But in any case, this one sounds intriguing. Here's the press release. And the paper's abstract (my highlights):
Despite the rapid progress in optical imaging, most of the advanced microscopy modalities still require complex and costly set-ups that unfortunately limit their use beyond well equipped laboratories. In the meantime, microscopy in resource-limited settings has requirements significantly different from those encountered in advanced laboratories, and such imaging devices should be cost-effective, compact, light-weight and appropriately accurate and simple to be usable by minimally trained personnel. Furthermore, these portable microscopes should ideally be digitally integrated as part of a telemedicine network that connects various mobile health-care providers to a central laboratory or hospital. Toward this end, here we demonstrate a lensless on-chip microscope weighing 46 grams with dimensions smaller than 4.2 cm × 4.2 cm × 5.8 cm that achieves sub-cellular resolution over a large field of view of 24 mm2. This compact and light-weight microscope is based on digital in-line holography and does not need any lenses, bulky optical/mechanical components or coherent sources such as lasers. Instead, it utilizes a simple light-emitting-diode (LED) and a compact opto-electronic sensor-array to record lensless holograms of the objects, which then permits rapid digital reconstruction of regular transmission or differential interference contrast (DIC) images of the objects. Because this lensless incoherent holographic microscope has orders-of-magnitude improved light collection efficiency and is very robust to mechanical misalignments it may offer a cost-effective tool especially for telemedicine applications involving various global health problems in resource limited settings.
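The "rapid digital reconstruction" step in in-line holography is usually a numerical free-space back-propagation of the recorded intensity pattern. The sketch below uses the standard angular-spectrum method on a synthetic hologram; the wavelength, pixel pitch and propagation distance are assumed values, and this is not the authors' reconstruction code.

```python
# Minimal angular-spectrum back-propagation sketch of the generic reconstruction step
# used in digital in-line holography (not the authors' algorithm). The hologram is
# synthetic; wavelength, pixel pitch and distance are assumed values.
import numpy as np

wavelength = 550e-9      # m, LED-like illumination (assumed)
pitch      = 2.2e-6      # m, sensor pixel pitch (assumed)
z          = 1.0e-3      # m, object-to-sensor distance (assumed)
N          = 256

# Synthetic "object": a small absorbing disc illuminated by a unit plane wave.
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
obj = 1.0 - 0.5 * (X**2 + Y**2 < (10 * pitch)**2)

def angular_spectrum(field, distance):
    """Propagate a complex field by `distance` using the angular-spectrum kernel."""
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)       # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

hologram = np.abs(angular_spectrum(obj, z))**2       # intensity recorded by the sensor
recon = angular_spectrum(np.sqrt(hologram), -z)      # back-propagate to the object plane
print("reconstructed amplitude at the object centre:", round(abs(recon[N // 2, N // 2]), 3))
```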

Friday, April 23, 2010

Paper watch: Ultra high speed image signal accumulation sensor

From Sensors: "Ultra-High-Speed Image Signal Accumulation Sensor". The abstract reads:
Averaging of accumulated data is a standard technique applied to processing data with low signal-to-noise ratios (SNR), such as image signals captured in ultra-high-speed imaging. The authors propose an architecture layout of an ultra-high-speed image sensor capable of on-chip signal accumulation. The very high frame rate is enabled by employing an image sensor structure with a multi-folded CCD in each pixel, which serves as an in situ image signal storage. The signal accumulation function is achieved by direct connection of the first and the last storage elements of the in situ storage CCD. It has been thought that the multi-folding is achievable only by driving electrodes with complicated and impractical layouts. Simple configurations of the driving electrodes to overcome the difficulty are presented for two-phase and four-phase transfer CCD systems. The in situ storage image sensor with the signal accumulation function is named Image Signal Accumulation Sensor (ISAS).
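The payoff of on-chip accumulation is the familiar √N improvement in signal-to-noise ratio when averaging repeated captures of the same scene, as the synthetic example below illustrates (signal and noise levels are arbitrary, not taken from the paper):

```python
# Simple illustration of why accumulating/averaging N repeated frames helps: the SNR
# improves roughly by sqrt(N) in a read-noise-dominated case. Levels are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
signal = 10.0          # electrons per pixel per frame (assumed)
read_noise = 20.0      # electrons rms per frame (assumed, read-noise dominated)

for n_frames in (1, 16, 256):
    frames = signal + read_noise * rng.normal(size=(n_frames, 20_000))
    accumulated = frames.mean(axis=0)
    snr = accumulated.mean() / accumulated.std()
    print(f"N = {n_frames:3d}: SNR ~ {snr:5.1f}"
          f"  (sqrt(N) * SNR_1 ~ {np.sqrt(n_frames) * signal / read_noise:5.1f})")
```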

Towards large area graphene photodetectors

At Applied Physics Letters: "Position dependent photodetector from large area reduced graphene oxide thin films".
We fabricated large-area infrared photodetector devices from thin films of chemically reduced graphene oxide (RGO) sheets and studied their photoresponse as a function of laser position. We found that the photocurrent either increases, decreases, or remains almost zero depending upon the position of the laser spot with respect to the electrodes. The position-sensitive photoresponse is explained by Schottky barrier modulation at the RGO film–electrode interface. The time response of the photocurrent is dramatically slower than that of a single sheet of graphene, possibly due to disorder from the chemical synthesis and the interconnecting sheets.
Not very practical yet (a 2.5 s time constant!).
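Assuming a simple single-exponential response (which the abstract does not state explicitly), the reported time constant translates into a bandwidth that makes the contrast with the 10 Gbit/s graphene detector above rather stark:

```python
# Rough comparison, assuming a single-exponential response (an assumption, not stated
# in the abstract): a 2.5 s time constant corresponds to a -3 dB bandwidth of
# 1 / (2*pi*tau), far below the GHz-class response needed for a 10 Gbit/s link.
import math

tau = 2.5                       # s, reported time constant
f3db = 1.0 / (2 * math.pi * tau)
print(f"-3 dB bandwidth ~ {1000 * f3db:.0f} mHz")   # roughly 64 mHz
```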

imec's virtual camera

The announcement was made last week, and there's a video demo:

SPIE Optics and Photonics 2010

The complete program is available online.

More on the reliability of scientific publications

Continuing on the topic of the last entry, and again from PLoS: "Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data". The abstract reads:
The growing competition and “publish or perish” culture in academia might conflict with the objectivity and integrity of research, because it forces scientists to produce “publishable” results at all costs. Papers are less likely to be published and to be cited if they report “negative” results (results that fail to support the tested hypothesis). Therefore, if publication pressures increase scientific bias, the frequency of “positive” results in the literature should be higher in the more competitive and “productive” academic environments. This study verified this hypothesis by measuring the frequency of positive results in a large random sample of papers with a corresponding author based in the US. Across all disciplines, papers were more likely to support a tested hypothesis if their corresponding authors were working in states that, according to NSF data, produced more academic papers per capita. The size of this effect increased when controlling for state's per capita R&D expenditure and for study characteristics that previous research showed to correlate with the frequency of positive results, including discipline and methodology. Although the confounding effect of institutions' prestige could not be excluded (researchers in the more productive universities could be the most clever and successful in their experiments), these results support the hypothesis that competitive academic environments increase not only scientists' productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high.

Thursday, April 22, 2010

Why Most Published Research Findings Are False

From PLoS medicine: "Why Most Published Research Findings Are False". The abstract:
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
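The arithmetic behind the argument is compact: in the bias-free case the essay's positive predictive value is PPV = (1 − β)R / (R − βR + α), where R is the pre-study odds that a probed relationship is true, α the type-I error rate and 1 − β the power. The example odds below are illustrative:

```python
# Bias-free bookkeeping from the essay: with pre-study odds R that a probed relationship
# is true, significance level alpha and power (1 - beta), the probability that a claimed
# ("positive") finding is actually true is PPV = (1 - beta) * R / (R - beta * R + alpha).
# The example values of R are illustrative.

def ppv(R, alpha=0.05, power=0.8):
    """Positive predictive value of a claimed research finding (no bias term)."""
    beta = 1.0 - power
    return power * R / (R - beta * R + alpha)

for R in (1.0, 0.2, 0.02):      # 1:1, 1:5 and 1:50 pre-study odds of a true relationship
    print(f"R = {R:<4}: probability the claimed finding is true = {ppv(R):.2f}")
```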

Thursday, April 15, 2010

Typo of the day

Switzerland, Sweden... what's the difference?

Mark LaPedus on the TSMC technology symposium

At EETimes: "Seven things that surprised us at TSMC event".
Cost and complexity are still a problem with TSVs. Or perhaps TSV has been delayed at TSMC and elsewhere.

DATE 3D Integration workshop digest online

The recent 2010 edition is here, and the 2009 edition here.

Additionally, the proceedings of the DATE conference are also online here.

Wednesday, April 14, 2010

Phase noise analysis in a sampled PLL

The Planet Analog Newsletter, useful as it is, links only to the second installment of the series. Here are the three articles published so far at Microwaves and RF: "Analyze phase noise in a sampled PLL", part 1, part 2 and part 3.

From the opening of the first part:
Phase locked loops (PLLs) have been used for years to stabilize signal sources such as oscillators. In the past, loop bandwidths tended to be small compared to the sampling frequency, but with modern communications systems, requirements for faster switching times mean that this is no longer the case. Narrow-bandwidth PLLs can be effectively modeled and simulated by means of linear analysis, but these same approaches fall short for wide-bandwidth-sampled PLLs. In a sampled PLL, when the sampling frequency is large compared to the loop bandwidth, a linear simulation provides a fairly close approximation of the PLL’s behavior. But when the loop bandwidth is a considerable percentage of the sampling frequency, as in fast-switching frequency synthesizers, linear analysis may not provide accurate predictions. This opening installment of a three-part article will explore a nonlinear approach to the analysis of the effects of sampling on PLL performance.
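A minimal way to see where the sampling enters is a discrete-time phase-domain iteration of a type-II loop, updated once per reference cycle. This toy model (normalized, arbitrary gains, not the article's analysis) simply makes the per-cycle update explicit, which is what a purely continuous linear model glosses over when the loop bandwidth becomes a sizeable fraction of the sampling rate.

```python
# Toy discrete-time phase-domain model of a sampled type-II PLL (an illustration of
# where the per-cycle update enters, not the article's analysis). Gains are normalized
# per reference cycle and the values are arbitrary.
import numpy as np

def phase_step_response(kp, ki, n=200):
    """Output phase versus cycle index for a unit input phase step."""
    theta_out, integrator = 0.0, 0.0
    out = []
    for _ in range(n):
        err = 1.0 - theta_out                # phase detector, sampled once per reference cycle
        integrator += ki * err               # integral path (charge pump into the loop capacitor)
        theta_out += kp * err + integrator   # proportional path + VCO phase accumulation
        out.append(theta_out)
    return np.array(out)

for kp, ki in ((0.1, 0.005), (0.8, 0.2)):    # "narrow" vs "wide" loop relative to the sample rate
    r = phase_step_response(kp, ki)
    print(f"kp={kp}, ki={ki}: peak overshoot = {100 * (r.max() - 1.0):4.1f} %, "
          f"final phase error = {abs(1.0 - r[-1]):.1e}")
```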

Thursday, April 8, 2010

Brain monitoring with NIR light

Related to this post, although based on quite a different method: "Hitachi to Launch Wearable Brain Analyzer".
The new encephalometer was developed based on the "optical topography method (NIRS: near-infrared spectroscopy)," a brain monitoring method that Hitachi developed in 1995. Far-red light passes through the skull and is scattered and reflected inside the brain. By measuring the degree of scattering and reflection, the encephalometer estimates the change in blood flow inside the brain. Specifically, it monitors hemoglobin.
[...]
The wavelengths are 705 and 830 nm. The encephalometer has eight irradiation parts, eight light-receiving parts and 22 channels.
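Two wavelengths are what the modified Beer-Lambert bookkeeping needs: attenuation changes at 705 and 830 nm can be inverted into concentration changes of oxy- and deoxy-hemoglobin. The sketch below uses rough placeholder extinction coefficients with the right qualitative ordering (deoxy-Hb dominates at 705 nm, oxy-Hb at 830 nm) and a made-up path length; a real analysis uses tabulated coefficients and a differential path-length factor.

```python
# Sketch of the two-wavelength modified Beer-Lambert inversion behind NIRS. The
# extinction coefficients and path length are rough placeholder values (assumptions),
# not instrument parameters; only the qualitative ordering is meant to be right.
import numpy as np

# rows: wavelengths (705, 830 nm); columns: (oxy-Hb, deoxy-Hb) extinction [arbitrary units]
E = np.array([[0.3, 1.6],
              [1.0, 0.7]])
path = 3.0                            # effective optical path length [cm], assumed

delta_A = np.array([0.010, 0.012])    # measured attenuation changes (made-up example)
delta_c = np.linalg.solve(E * path, delta_A)
print(f"delta [oxy-Hb]   ~ {delta_c[0]:+.4f} (arb. concentration units)")
print(f"delta [deoxy-Hb] ~ {delta_c[1]:+.4f}")
```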

ITRS 2009 online

Somehow this escaped me: the 2009 edition of the International Technology Roadmap for Semiconductors is online.