8.21.2014

Analyzing healthcare's big data of real-world evidence

  • A custom-produced pill engineered for your unique genetic makeup, designed to fight the exact genotype of the cancer in your body.
  • A living avatar incorporating a genetic replica of your cancer, allowing tests to be conducted on the avatar before real-world application.
  • Differentiating between Munchausen Syndrome and domestic abuse by collecting data on a patient from various parts of a health system.
None of the above is fiction; all are emerging realities - and they are just the tip of the iceberg of innovations and discoveries emerging from an unexpected source: Big Data. Like so many other medical advances today, these are examples of how hugely powerful computing and the analysis of vast amounts of data are changing the way healthcare is conducted and administered.

“Genome testing of cancer is critical because the disease is driven by genetic mutations,” said Ya’ara Goldschmidt, leader of IBM’s Healthcare Analytics work at the company’s Research lab in Haifa, Israel. “That’s why a great deal of the genomics work that has been done in the past decade focuses on the sequencing of this particular disease.” 

Goldschmidt was speaking at the conclusion of two days of workshops discussing clinical genomic analysis and medical informatics innovations that brought together a full mix of academia, industry, health providers and policymakers from Israel’s healthcare ecosystem, as well as from abroad.

“With the enormous amounts of genomic data available today, the challenge is to analyze it methodically to better understand and control disease, ultimately providing treatment recommendations at the point of care. With the technological breakthroughs of the last few years and sequencing costs dropping rapidly, today we can do things we couldn't have imagined just a decade ago,” said Michal Rosen-Zvi, Senior Manager of Analytics at IBM Research - Haifa.

Recent years have seen a dramatic increase in the availability of data collected in the practice of healthcare. Analyzed properly, these data (known as real-world evidence, or RWE) are transforming healthcare for everyone, from providers to practitioners to patients. IBM's Haifa lab is playing a leading role – both in the analysis itself, and in bringing players together to engage in dialog and the exchange of ideas.

Scientists at the lab have developed decision-support solutions that blend cloud and mobile technologies with advanced analytics to gather, manage, analyze, and visualize data on different kinds of cancer and disease. These technologies include machine learning to infer the complex associations between genetic factors, demographic data, disease progression, and treatment options.
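
As a rough, purely illustrative sketch of this kind of learning (not IBM's actual models – every feature, label and data point below is synthetic and invented for the example), a classifier can be trained to associate genomic and demographic features with a treatment outcome:

# Toy sketch: learn associations between patient features and treatment response.
# All features, labels, and data are synthetic placeholders, not clinical data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 500

# Hypothetical features: presence of two mutations, age, disease stage.
X = np.column_stack([
    rng.integers(0, 2, n_patients),   # mutation A (0/1)
    rng.integers(0, 2, n_patients),   # mutation B (0/1)
    rng.normal(60, 10, n_patients),   # age
    rng.integers(1, 5, n_patients),   # disease stage 1-4
])
# Invented rule: patients with mutation A and early-stage disease respond to the treatment.
y = ((X[:, 0] == 1) & (X[:, 3] <= 2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("feature importances:", model.feature_importances_)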

“Even the policymakers are conscious of the tremendous benefit we can derive from data analysis,” Rosen-Zvi said. “The legal and ethical obstacles that obstructed progress are slowly being resolved, and this is allowing us to make headway. The fact that the head of Israel’s Ministry of Health attended the event illustrates the legislature's awareness.”

A keynote presented by Isaac Kohane of Harvard Medical School explained how data analysis of patients “bouncing around” a health system – constantly checking into different hospitals with a variety of issues – can pinpoint likely domestic abuse. He also explained how medical informatics could have identified the dangers of the painkiller Vioxx earlier by pooling information about heart attacks from different hospitals. The drug was withdrawn from the market only after clinical studies linked it to increased cardiovascular risk.

“It's imperative that we improve data sharing among all partners in our health systems,” said IBM's Ranit Aharonov, who organized the Genomic Analysis Workshop with the Safra Center for Bioinformatics at Tel Aviv University. “Technological barriers exist, as well as legal, and of course, commercial. Drug companies, for example, spend enormous sums on research so they're not enthusiastic about sharing their data freely. But even they are recognizing that improved data exchange is for their benefit too.”

Among the many presentations, Fresenius Medical Care, the world's largest integrated provider of products and services for people undergoing dialysis, explained how it gathers data during dialysis patients' alternate-day visits. Dozens of factors in the data are analyzed and processed, and guidelines are then provided to the attending physician during each subsequent patient visit. Studies showed that when physicians adhere to the guidelines, patient outcomes improve.

Prof. David Sidransky of Johns Hopkins University explained how human cancer cells are transferred to a mouse - a process called xenografting - allowing genetic testing to be conducted without harming the patient. When the correct formula is found to counter that particular cancer, it can be administered to the patient. 

“In past gatherings, we'd talk about what it will be possible to do in the future,” Rosen-Zvi said. “But these two days showed us that the future is already here. The data is now available and we've begun using it to improve healthcare for everyone.”

Many of IBM's current healthcare advances address chronic care and cancer because of their potential impact on society. With new possibilities in genomic sequencing, cognitive computing and other analytics technologies, IBM is providing decision support to enable more reliable diagnoses and care plans, including treatment options. The company is also working with healthcare partners across the globe on exciting technologies for medical training and other chronic-care areas, such as diabetes, heart disease and mental health.

8.19.2014

The energy to innovate

IBMer and MIT TR35 honoree is making electricity accessible and available

Tanuja Ganu grew up in a small town in India about 400 kilometers south of Mumbai, where – like much of the country – energy outages happen all the time. 

“The voltage was often so low that the lights were dim and the refrigerator would burn out.

“I studied for exams by candlelight, and endured summers without working fans. To deal with this as children, we learned to time-shift critical things we needed electricity for – like cooking and cleaning," Tanuja said.

Now an engineer at IBM Research, she was recognized by MIT as a “2014 Innovator Under 35” for building solutions that begin to solve these challenges. Her collaboration with the University of Brunei Darussalam led to the inventions of SocketWatch, nPlug, and iPlug.

The Indian electricity sector, despite having the world's fifth largest installed capacity, suffers from a 12.9% peaking shortage. This shortage could be alleviated if a large number of deferrable loads could be moved from on-peak to off-peak times.

Q&A With Tanuja Ganu: Experience to expertise

IBM Research: How did the experience of dealing with electrical outages influence your decision to work in this field?

Tanuja Ganu: Knowing the inconvenience of time-shifting, I was particularly fascinated with the idea of democratizing the Demand Side Management (DSM) of energy. It’s an area where average citizens can make a difference simply by reducing their consumption during peak hours and avoiding other energy wastage (like leaving the TV and other appliances on standby).

IR: But you studied computer science and machine learning at university. How did you connect that expertise with energy and utilities – and eventually your solutions of nPlug, SocketWatch and iPlug?

TG: I first learned practical engineering from my father (also an engineer) when we had to fix appliances at home. These projects got me interested in engineering and particularly influenced my thinking about inventing and applying knowledge to solve real-world problems.

Though I graduated with a degree in computer science and completed graduate studies in data mining and machine learning, I looked for domains where I could address real societal problems using data insights and technological change. And during campus interviews, I came to know about the Smarter Energy group at IBM Research-India. It was the perfect combination of computer science and electrical engineering techniques specifically addressing energy issues. After an internship with them, I joined as an engineer in 2011.

IR: Where did your ideas for nPlug, SocketWatch and iPlug come from?

TG: My first project was nPlug, or “Smarter Planet in a Plug.” It is aimed at alleviating peak usage loads through inexpensive, autonomous DSM. Working with a team of engineers with backgrounds in embedded systems and power optimization, we developed a device that fits between the wall socket and appliances such as hot water heaters and even electric vehicles. nPlug senses line voltage and line frequency (indicators of how stressed the grid is), and then uses machine learning techniques to infer peak periods as well as supply-demand imbalance conditions. It then schedules usage for the attached appliances in a decentralized manner to alleviate peaks whenever possible – without violating the requirements of consumers.


SocketWatch is another device that fits between an appliance and the wall socket. It autonomously monitors the appliance’s power consumption and alerts the consumer when the appliance is being used improperly or is wasting energy. For example, it can switch off a TV left in standby mode, or alert the consumer that a refrigerator is “leaking” energy due to a malfunction, like a leaking gasket.
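
A minimal sketch of the kind of rule a socket-level monitor could apply, assuming invented thresholds and an invented appliance profile (this is not the actual SocketWatch logic):

# Minimal sketch of standby/waste detection from a power reading, in the spirit of
# SocketWatch. The thresholds and appliance profile are invented for illustration.
ACTIVE_WATTS = 60.0    # assumed typical active draw for this appliance
STANDBY_WATTS = 5.0    # assumed standby draw worth switching off

def check_reading(watts, minutes_in_state):
    """Return an action based on the latest average power reading."""
    if watts <= STANDBY_WATTS and minutes_in_state > 30:
        return "switch_off"            # e.g. TV left on standby
    if watts > 1.5 * ACTIVE_WATTS:
        return "alert_malfunction"     # e.g. appliance drawing far too much power
    return "ok"

print(check_reading(4.2, minutes_in_state=45))    # -> switch_off
print(check_reading(110.0, minutes_in_state=10))  # -> alert_malfunction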

Our most recent project, iPlug, will help homes make better use of distributed energy sources such as rooftop solar panels. Like our other devices, it autonomously decides whether to route electricity from the solar panels back to the grid (onto the most loaded phase during peak times), or to store or use the energy locally, based on the home’s usage needs.

IR: How do machine learning, data mining and analytics play a role in these energy projects?

TG: Thanks to advances in embedded systems and sensor technologies, a lot of high-frequency data related to energy parameters – such as line voltage, frequency, active power, and reactive power – is available for analysis, for example to find irregularities in the operation of energy systems. My skills in machine learning and data mining help me write pattern-learning algorithms that analyze the data and extract insights from it.

Once the patterns are analyzed, optimization skills help in coming up with optimal strategies to solve the specific issue at hand. For example, in the case of nPlugs, we apply machine learning techniques to line voltage and frequency data to understand the times of peak demand and supply-demand mismatches. Then we apply optimization techniques to determine preferred times to schedule an appliance in a decentralized manner, such that it meets user-defined deadlines but does not overload the grid.
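
A toy sketch of this two-step pipeline – inferring peak hours from (synthetic) line-voltage data, then scheduling a deferrable load around them – is shown below. The data, thresholds and scheduling rule are invented for illustration and are not the nPlug algorithms:

# Sketch of the two-step idea behind nPlug: infer peak hours from line measurements,
# then schedule a deferrable load around them. Data and thresholds are invented.
import numpy as np

hours = np.arange(24)
# Hypothetical hourly line-voltage averages; sags below nominal suggest peak load.
voltage = 230 - 12 * np.exp(-((hours - 19) ** 2) / 8)   # evening sag around 19:00

peak_hours = set(hours[voltage < 225])                  # crude "learning" step

def schedule(duration_h, deadline_h):
    """Pick the earliest off-peak start so the job finishes by the deadline."""
    for start in range(0, deadline_h - duration_h + 1):
        window = range(start, start + duration_h)
        if not any(h in peak_hours for h in window):
            return start
    return deadline_h - duration_h   # fall back: meet the deadline even if on-peak

# Schedule a 2-hour water-heater run that must finish by hour 23.
print("peak hours:", sorted(peak_hours))
print("start heater at hour:", schedule(duration_h=2, deadline_h=23))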

IR: What stage have these projects reached? And what results have you been able to show?

TG: Though we have not evaluated these devices in large-scale pilots yet, we have evaluated prototypes of nPlugs and SocketWatches in real-life settings.

We’re able to show that nPlugs correctly defer loads such as storage water heaters to off-peak hours without inconveniencing their owners. We have also studied the collective behaviors of thousands of nPlugs using simulations. They are able to reduce peak loads by up to 45 percent with a realistic mix of deferrable loads.

And we can show that SocketWatches are able to accurately pinpoint malfunctions in appliances, such as air conditioners (blocked air filters and obstructed fans) and refrigerators (gasket leakage). 


IR: How do you envision these devices being used in the future?

TG: I think there are multiple ways these devices could roll out to consumers and the industry. Utility companies can subsidize nPlugs for high consuming deferrable loads, like electric vehicle charging, to alleviate peak demand.

In the case of SocketWatch, since it provides alerts for reducing electricity waste, helps in preventive maintenance, and lowers a home’s electric bill, it could be directly commercialized to end consumers. And we could also partner with appliance manufacturers since these devices could be integrated within an appliance.

Read more about Tanuja and her work in MIT Technology Review’s 2014 Innovators Under 35.

8.18.2014

Oil Applies Brakes to Molecules under STM at Room Temp

IBM scientists Marilyne Sousa, Peter Nirmalraj, Heike Riel and Bernd Gotsmann
Since the first microscope was invented, researchers and scientists around the world have searched for new ways to stretch their understanding of the microscopic world. In 1981, two IBM researchers who went on to become Nobel Laureates, Gerd Binnig and Heinrich Rohrer, broke new ground in the science of the minuscule with their invention of the scanning tunneling microscope (STM), which enabled scientists to visualize the world all the way down to its molecules and atoms using a quantum phenomenon called tunneling.

In tunneling, electrons escape the surface of a solid to form a kind of cloud that hovers above it; when another surface approaches, the clouds overlap and electrons can flow between the two. By maneuvering a sharp metal conducting tip over the surface of a sample at an extremely small distance, Binnig and Rohrer found that the amount of electrical current flowing between the tip and the surface could be measured. Variations in this current provide information about the inner structure and the height relief of the surface. From this information, one could build a three-dimensional atomic-scale map of the sample’s surface, revealing for the first time what atoms look like.
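
The tunneling current falls off exponentially with the tip-sample distance, which is what gives the STM its extraordinary height sensitivity. A back-of-envelope calculation, assuming a typical metal work function of about 5 eV (an illustrative value, not one from the article), shows roughly a tenfold change in current for every ångström the tip moves:

# Back-of-envelope: how strongly the tunneling current depends on tip-sample distance.
# Assumes a typical metal work function of ~5 eV (illustrative, not from the article).
import math

hbar = 1.054571817e-34     # J*s
m_e  = 9.1093837015e-31    # electron mass, kg
eV   = 1.602176634e-19     # J

phi   = 5.0 * eV                           # tunneling barrier height (work function)
kappa = math.sqrt(2 * m_e * phi) / hbar    # decay constant, 1/m

delta_d = 1e-10                        # move the tip 1 angstrom closer
ratio = math.exp(2 * kappa * delta_d)  # I ~ exp(-2*kappa*d), so current grows by this factor
print(f"kappa ~ {kappa:.2e} 1/m; current changes ~{ratio:.0f}x per angstrom")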

IBM scientists in Zurich continue to push the boundaries of this instrument. In a paper titled “Nanoelectrical analysis of single molecules and atomic-scale materials at the solid/liquid interface,” appearing today in Nature Materials, they present a new and frugal technique that enables direct imaging and stable electrical readouts of single molecules in a liquid environment using STM at room temperature.

In the unique and ultra-controlled environment of the Noise Free Labs of the Binnig and Rohrer Nanotechnology Center, the scientists use a high-density liquid, silicone oil, as a liquid brake that nearly freezes the motion of single molecules, and ultra-thin organic spacers to electronically decouple the molecules from the metal contacts.

In-situ STM image of individual fullerene molecules adsorbed on a spacer-coated gold substrate.
The combined effect allows the scientists to record high-resolution real-space images and decode the intrinsic electronic structure of single molecules.

The technique has been successfully extended to resolve the atomic lattice, quantify topological defects, and map the band structure of single-layer graphene.

In addition to applications in hybrid electronics, where electronically active molecules (organic switches) are embedded into 2-D crystal matrices, these findings provide new pathways and insights for mapping the electronic structure and dynamics of DNA as it interacts with graphene nanopores, which has direct implications for engineering genome-sequencing devices.

This research was done in collaboration with chemists from IMDEA in Spain and theoretical physicists from the University of Limerick, Ireland.


7.29.2014

Cloud-to-Cloud Connectivity Becomes Elastic


Editor’s note: This article is by Douglas Freimuth, Senior Technical Staff Member and Master Inventor at IBM Research.

The true power of cloud computing is in its ability to connect clouds and share resources and compute power. Clouds make large cloud-to-cloud data transfers for critical administrative functions like data center backups and workload balancing. But a typical private cloud can also connect to a public cloud to access a specific service or type of data, creating a “hybrid” cloud. All of that data sharing takes networks – and bandwidth. My team, along with AT&T and Applied Communication Sciences, funded by the DARPA CORONET program, created the technology to make cloud-to-cloud connectivity “elastic” in order to make using the cloud (and paying for that usage) more flexible. Not every service, at every moment of the day, needs peak network availability, so why have the volume turned up all the time?

Part of the reason for today’s “always on” approach is to always have a secure, reliable network for clouds to connect and operate. Making the bandwidth flexible means being able to adjust the bandwidth on existing connections, which in turn requires making extremely fast decisions, using data center software, about adjustments between cloud-to-cloud connections. With network carriers moving their hardware to software on the cloud, elastic scaling will become commercially available. For example, moving the network infrastructure that manages our smartphone data from a physical box to virtualized software in the cloud can help make elastic connectivity possible – and less expensive for the carrier.

What this partnership has shown in a proof of concept, and now wants to deliver commercially, is a cloud system that monitors and automatically scales the network up or down as applications need. It works by the cloud data center sending a signal to a network controller that describes the bandwidth needs and which cloud data centers need to connect. The key technology IBM will provide in the cloud is the intelligent orchestration capability that knows when and how much bandwidth to request, and between which clouds. The cloud data center orchestrator will continue to get more intelligent in its utilization of the network. Longer term, an application on your smartphone might be smart enough to request bandwidth from the network controller.
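
In essence, the orchestrator asks the network for capacity between two endpoints for a limited time. A minimal sketch of what such a request might look like follows; the endpoint URL, field names and values are hypothetical, not an actual IBM or carrier API:

# Hypothetical bandwidth request from a cloud orchestrator to a network controller.
# Field names, endpoint, and values are invented for illustration only.
import json
import urllib.request

request = {
    "source_dc": "dc-newyork-01",
    "destination_dc": "dc-frankfurt-02",
    "bandwidth_mbps": 10000,     # requested capacity
    "duration_minutes": 120,     # release back to the pool afterwards
    "purpose": "nightly-backup",
}

req = urllib.request.Request(
    "https://network-controller.example.com/v1/connections",   # placeholder URL
    data=json.dumps(request).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # would submit the request to the (hypothetical) controller
print(json.dumps(request, indent=2))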

Today, a truck drives out to install new network components, and administrators set up the wide area network (WAN) connectivity. Physical equipment has to be installed and configured if you want to turn up a WAN signal. We could do all of that virtually by using intelligence in the cloud to request bandwidth from pools of network connectivity when an application needs it. When the peak requirement has passed, the cloud can signal the network carrier to release the bandwidth back to the pool.

The difference in setup time between today’s cloud-to-cloud networks and what we have demonstrated is days or months versus seconds.

Going from always on to always elastic

To make this all work, we demonstrated a cloud platform running in the cloud data center that manages connections and has the intelligence to make fast decisions to signal a controller in the core network for connectivity at the right time (to make the cloud-to-cloud connection elastic). Then, our partners’ controller orchestrates the requests from our cloud to ensure the requests get to the correct layer in the network. The carriers will provide a multi-layer network with different bandwidth capabilities to service different requests from the cloud, such as a request to synchronize a critical application database so that a smartphone user gets up-to-the-moment information, or a full data center backup in the event of a catastrophic event.

The connection request might be set up on an IP network, a sub-wavelength circuit, or possibly a full wavelength layer for demanding applications, depending on the bandwidth requested. A wavelength in this context means one of the optical carrier signals multiplexed over an optical fiber. Each wavelength represents a high-bandwidth connection to carry our application data. Those wavelengths can be sub-divided to carry lower-bandwidth traffic, like a video stream to a mobile device. For full wavelength requests, all parties involved might use a specialized protocol to dynamically establish a high-bandwidth service with the proper routing.
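
As a simple illustration of that mapping from requested bandwidth to network layer (the thresholds below are invented; a real carrier's policy would differ):

# Illustrative mapping from requested bandwidth to network layer; the thresholds
# are invented and would in practice depend on the carrier's multi-layer network.
def choose_layer(bandwidth_gbps):
    if bandwidth_gbps < 1:
        return "IP layer"                # e.g. a video stream to a mobile device
    if bandwidth_gbps < 10:
        return "sub-wavelength circuit"  # a slice of one optical wavelength
    return "full wavelength"             # e.g. a full data center backup

for bw in (0.05, 4, 40):
    print(f"{bw:>6} Gb/s -> {choose_layer(bw)}")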

Making cloud-to-cloud computing elastic over the WAN augments everything that’s already great about the cloud. Businesses spend less because of more effective sharing of network resources, enabled by virtualized hardware. Operating costs drop because of automated processes controlled by cloud- and network-level orchestrators. Businesses that move to set up cloud-to-cloud connections via the WAN will notice further cost savings and faster service setup and delivery.

For you and me, as individuals, more dynamic cloud computing means new applications we never dreamed could be delivered over a network – or applications we haven’t even dreamed of yet.

Douglas M. Freimuth is a Senior Technical Staff Member and Master Inventor in the Cloud Based Networking group at the IBM Thomas J. Watson Research Center where he has focused on the research, design and development of server networking technologies. He is a co-author of the IO Virtualization (IOV) specifications in the PCI SIG. He has also participated in the Distributed Management Task Force (DMTF) for activities related to deployment of Virtual Machines and cloud networks. Doug has 60+ disclosures and patents in the domain of cloud networking, and has also published related papers, developed products and contributed to open source.

7.24.2014

Smaller chip with bigger bandwidth, thanks to E-Band and SiGe

Dr. Danny Elad
Editor’s note: This article is by Danny Elad, Manager of Analog and Mixed Signal Technologies at IBM Research – Haifa.

Backhaul, bottlenecks, integrated circuits, and high-speed streaming are all concepts that refer to traffic – mobile device data traffic, to be precise.

As the number of smartphones and tablets increases alongside a host of new cloud services, network traffic is outgrowing the available bandwidth, causing data transmission to slow down. With massive amounts of multimedia content and growing demands for 4G and soon 5G streaming, the current infrastructure just isn’t set up to meet the demand. By the end of this year, the number of mobile-connected devices will exceed the number of people on Earth.

IBM researchers in Haifa have developed the first complete, low-cost Silicon Germanium (SiGe) transceiver chipset to support the coming high-bandwidth requirements for wireless backhaul data transfer. In other words, these chips can move data over the network faster than today’s infrastructure and keep up with mobile device growth for years to come.

E-Band 

Several years ago, the radio frequencies used to channel data from handheld devices to wireless radio base stations started to get overcrowded. At that time, a new range of frequencies, known as E-Band, was defined to support larger bandwidths.

The goal of E-Band for wireless communication was to find a lower-cost alternative to optical fiber – one that did not require large investments in infrastructure, like laying new fiber cable. However, E-Band equipment has been based on still-expensive Gallium Arsenide (GaAs) technology, and until now there has not been a highly integrated, low-cost solution that telcos can use for cell-to-cell communication.

Silicon - A Single Chip Solution 

IBM’s new SiGe chips could soon replace the more expensive off-the-shelf components used today, such as those based on GaAs. Silicon offers a lower-cost solution with a high level of integration to support more advanced transceiver functionality. The new chipset offers broadband wireless communications in the 71-76 GHz and 81-86 GHz E-Band millimeter-wave frequency ranges, supporting a high modulation scheme of 64QAM.
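
For a back-of-envelope sense of why those numbers matter: 64QAM carries six bits per symbol, so a 5 GHz E-Band channel supports tens of gigabits per second of raw capacity in the ideal case, before filter roll-off, coding and other overhead are taken into account:

# Back-of-envelope arithmetic for 64QAM in a 5 GHz E-Band channel (e.g. 71-76 GHz).
# Ideal numbers only: real links lose capacity to filter roll-off, coding and overhead.
import math

bits_per_symbol = math.log2(64)     # 64QAM -> 6 bits per symbol
channel_bw_ghz = 5.0                # width of the 71-76 GHz slot
symbol_rate_gbaud = channel_bw_ghz  # Nyquist upper bound with ideal pulse shaping

raw_rate_gbps = bits_per_symbol * symbol_rate_gbaud
print(f"ideal raw rate: {raw_rate_gbps:.0f} Gb/s per channel")   # ~30 Gb/s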

E-Band is used inside the cellular network and allows improved bandwidth between cells. In this way, our mobile devices, which communicate with those cells, can benefit from higher-speed performance.
Tiny E-Band chip shown near an Israeli shekel (about the size of a dime)
SiGe offers a single-chip platform with the functionality normally provided by several chips – including mixed-signal, digital, and analog circuits – and is therefore a very competitive solution for mobile service providers looking to enable ultra-high-speed information streaming.

Wireless networking companies like BridgeWave and GigOptix have already turned to IBM Research to license these E-Band solutions.

7.14.2014

Profile of an IBM Scientist: Giorgio Signorello

Who: Giorgio Signorello
Location: IBM Research - Zurich
Nationality: Italian

Focus: Materials Integration and Nanoscale Devices

“Some kids grow up wanting to be a fireman or police officer. When I was a kid, I wanted to be a scientist. Later on I wanted to become an architect or a designer - I loved the creative side of both - but my parents suggested that I should be a chemist or physicist. They thought it wasn't normal for someone so young to be so strong in these areas. Years later I started working in a clean room in Sweden and for the first time I felt that I had made the right choice. It may sound crazy, but I feel at home in a laboratory.”

“What fascinates me the most about science is that it is not based on any one individual — it’s a collective contribution. What I discovered in my most recent Nature Communications paper will hopefully lead to another discovery years from now. In that sense, science is a giant puzzle without fixed borders and each time we publish we add one piece to the picture. 150 years ago James Clerk Maxwell described for the first time the physics of light and today we are manipulating light to send data at the nanoscale. Thousands of publications had to happen to reach this point. It’s inspiring, crazy and not far from being magical.”

Insider Tip:

“I applied to the IBM Research lab in Zurich five times over two years. Finally, my persistence paid off after emailing Heike Riel, who is now an IBM Fellow. But this taught me a lesson, which is applicable to any aspiring scientist — be persistent. Whether you are having challenges with a difficult experiment or dealing with a referee who keeps refusing your paper, don’t give up and push yourself.”

Last week Giorgio was awarded a 5,000 CHF ($5,600 USD) prize from the Swiss Physical Society for his outstanding scientific achievements investigating the effect of strain on semiconducting nanowires, which could make it possible to transfer data between chips at the speed of light.

Read Giorgio's publications and connect with him on LinkedIn.

7.07.2014

40+ Year Old Challenge Solved for Phase Change Materials


Phase change materials were first considered for storing data in the 1970s. The two metastable states, or phases, of these materials are used to store data in the form of binary code made up of billions of 0s and 1s.

The concept eventually reached the consumer market, and today the most common use of these materials is in optical storage, where the phase transition is induced by heating the material with a laser beam - this is how a Blu-ray disc stores a video.

Cross-sectional transmission electron microscopy (TEM) image of a mushroom-type PCM cell.
In addition to a laser, it is also possible to heat the phase change material electrically, by placing it between two conducting electrodes. This forms the basis for a novel concept called phase-change memory (PCM), a nonvolatile memory technology that promises to bridge the performance gap between main memory and storage in electronics spanning from mobile phones to cloud data centers.


The nanometric volume of phase change material in the PCM cell can be reversibly switched between the amorphous phase (logic “0”) and the crystalline phase (logic “1”) by applying suitable voltage pulses. The stored data can be read out by applying a much lower read voltage.
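
Conceptually, a PCM cell behaves like a resistor with two programmable states. The toy model below captures only that read/write logic; the resistance values and thresholds are invented, and the real melt-quench (RESET) and crystallization (SET) physics is far richer:

# Toy model of a PCM cell: a high/low-resistance state toggled by "voltage pulses".
# Resistance values and thresholds are invented for illustration only.
class ToyPCMCell:
    R_AMORPHOUS   = 1e6   # ohms, logic "0"
    R_CRYSTALLINE = 1e4   # ohms, logic "1"

    def __init__(self):
        self.resistance = self.R_AMORPHOUS

    def write(self, bit):
        # A strong short pulse amorphizes (RESET); a longer moderate pulse crystallizes (SET).
        self.resistance = self.R_CRYSTALLINE if bit else self.R_AMORPHOUS

    def read(self, v_read=0.2):
        # Low read voltage: measure the current without disturbing the phase.
        current = v_read / self.resistance
        return 1 if current > 1e-5 else 0

cell = ToyPCMCell()
cell.write(1)
print(cell.read())   # -> 1
cell.write(0)
print(cell.read())   # -> 0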

But for more than 40 years, scientists had never measured the temperature dependence of crystal growth, because of the difficulties of making measurements at both nanometer length scales and nanosecond time scales. That changed earlier this year, when IBM scientists in Zurich took the measurements for the first time; the results are reported today in the peer-reviewed journal Nature Communications.


On the eve of the publication of this important result, the authors answered a few questions from their lab in the Binnig and Rohrer Nanotechnology Center at IBM.

IBM scientists Abu Sebastian, Manuel Le Gallo and Daniel Krebs

Let’s start with the obvious, decades-old question: what is the temperature corresponding to maximum crystal growth?

Daniel Krebs: The optimum crystal growth temperature is 477 degrees Celsius (750 Kelvin), but that is really just one point on the chart (figure B) – holistically, it gets much more interesting.

What is more useful to scientists studying phase change materials is that we were able to model the entire growth velocity curve in addition to this maximum. Prior to this paper, scientists knew some of the points, but not across such a wide temperature and time scale.

It is also worth noting that we took these measurements within the cell. Typically, experiments took place outside the cell, and the results then had to be extrapolated. Now scientists have an excellent reference point.

Can you describe the eureka moment?

Abu Sebastian: Let me start by saying that these phase change materials are very fascinating and possess unconventional crystallization kinetics. Just by changing the temperature by a few hundred degrees, you change the crystal growth rate by 17 orders of magnitude (a factor of 10^17). This is why it has been so difficult to probe experimentally.
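
To see how a few hundred degrees can translate into so many orders of magnitude, consider a generic Arrhenius-type (thermally activated) rate. This is only an editorial illustration with an arbitrary 2 eV activation energy, not the model fitted in the paper:

# Generic Arrhenius-type illustration of how a few hundred degrees can change a rate
# by ~17 orders of magnitude. The 2 eV activation energy is an arbitrary illustrative
# value, not the result reported in the paper.
import math

k_B = 8.617e-5      # Boltzmann constant, eV/K
E_a = 2.0           # illustrative activation energy, eV

def relative_rate(T_kelvin):
    return math.exp(-E_a / (k_B * T_kelvin))

T_low, T_high = 300.0, 600.0        # room temperature vs a few hundred degrees hotter
orders = math.log10(relative_rate(T_high) / relative_rate(T_low))
print(f"rate increase from {T_low:.0f} K to {T_high:.0f} K: ~10^{orders:.0f}")   # ~10^17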

Only in the last 18-24 months have scientists begun to probe the crystallization rate within a reasonable temperature range; until then, the measurements were made at very low temperatures (close to room temperature).

Our key insight was in exploiting the nanoscale dimensions and the fast thermal dynamics of the phase change memory cell to expand the temperature range all the way up to the point at which the material melts – more than 600 degrees Celsius.


Daniel: It’s called the time-temperature dilemma. At room temperature you want stability of the material to retain the data for at least 10 years, but when you want to write to the material it needs to crystallize in nanoseconds. And that is what makes this material so interesting, but it’s also what makes it challenging – particularly in how it can be accurately measured.

Manuel Le Gallo: I came to IBM to do my Masters thesis work on electrical transport in phase change materials. One of the requirements was to achieve the same amorphous volume at all temperatures. This involved a deeper understanding of melting and crystallization in the PCM cells. As we delved more into the subject, the focus of the thesis gradually shifted, culminating in the fascinating results we present in the paper.

What inherent challenges in phase change memory does this achievement address and what are the potential applications?

Daniel: If we break down the challenges of PCM into read and write operations, in this work, we are addressing the write operation. Our measurements will help devise ways to write data faster and with better retention.  


Abu: In the context of PCM, this research will help us estimate how fast we can write, how much power is required and what the real retention time is. Going beyond memory, yet another emerging application of phase change materials is in neuromorphic engineering – creating chips based on the biological architecture of the nervous system. So understanding the phase change mechanism is critically important for a number of applications.

Manuel: The crystal growth and the subsequent change in electrical conductance have the potential to emulate the biophysics of neurons and synapses. This will also form part of my doctoral thesis work, which I am currently pursuing jointly with the Institute of Neuroinformatics at ETH Zurich.

What will you study next?

Abu: It will be interesting to look at different materials and compare the temperature dependence of crystal growth. We also discovered that the crystal growth rate reduces over time, which we want to expand on further.

Daniel: The reduction in growth over time is actually very interesting to me. In the amorphous phase, the materials are a glass. Just as a glass window changes when it sits at rest over a long period of time, say 100 years, our amorphous material also changes. In fact, it changes in such a way that it becomes more viscous. This viscosity is one of the characteristics that determines how fast the material can crystallize, so it affects the write operation. The material cannot crystallize as fast anymore, which is a good thing for data retention. On the other hand, the glassy nature also causes the inherent problem of resistance drift in phase change memory.