- Explained: Fracking for Oil

The idea for fracking dates back to 1862 and has been credited to Edward A. L. Roberts. In the early days, explosives were used to shatter the rock surrounding an oil well, but in the 1940s explosives were replaced with high-pressure blasts of liquid, and "hydraulic" fracturing became the standard in the oil and gas industry. It wasn't until the 21st century that fracking saw a massive boom, driven by two innovations: first, a new type of fracturing fluid, slickwater, a mix of water, sand, and chemicals that makes the fluid less viscous; and second, horizontal drilling, a technique that increases the productive potential of each well.

What is Fracking?

Hydraulic fracturing, commonly known as fracking, is a well stimulation technique that fractures bedrock formations with a pressurized liquid. It is used to extract natural gas or oil from shale and other impermeable rock formations that lock in oil and gas and make fossil fuel production difficult. The process involves the high-pressure injection of "fracking fluid" into a wellbore to create cracks in deep-rock formations through which natural gas, petroleum, and brine flow more freely. When the hydraulic pressure is removed from the well, small grains of hydraulic fracturing proppant (typically sand or aluminum oxide) hold the fractures open.

How Does Fracking Work?

Fracking involves blasting fluid deep below the earth's surface to crack sedimentary rock formations (including shale, sandstone, limestone, and carbonate) and unlock natural gas and crude oil reserves. Hydraulic fracturing requires an extensive amount of equipment, usually consisting of a slurry blender to mix the fracking fluids, high-pressure, high-volume fracturing pumps such as triplex or quintuplex pumps, and monitoring equipment.

Source: US Patent No. 4390067A

Once a shale formation is targeted, an energy company sets up a drill pad, or home base, for drilling. The first step is to drill (with a drill bit typically 18-20 inches in diameter) vertically down past younger layers of rock that may surround a water table or contain younger types of gas. Once the hole is about 1,000 feet deep, a steel casing that is thinner than the hole itself is inserted. Next, cement is pumped down the casing, followed by high-pressure air, which pushes the cement to the bottom of the drill hole and up into the gap, or annulus, between the steel casing and the surrounding rock. This becomes the vertical wellbore. The process is repeated until the drill hole is deep enough to reach the shale, which could be as deep as 10,000 feet below the surface but typically caps out around 7,000-8,000 feet. Horizontal drill bits allow the wellbore to change direction once it reaches the target depth. From a single drill pad, energy companies can therefore drill multiple vertical wellbores within five feet of each other that reach a very wide radius, sometimes miles wide, from the drill pad, eliminating the need to set up multiple drill pads across a landscape. The next step is to send small explosives or a perforation gun down to the targeted section of the horizontal wellbore to punch holes through the steel casing and cement. Once the holes are made, between 3 million and 5 million gallons of water containing a mix of sand and chemicals are pumped at extremely high pressure down into the wellbore.
This fracking fluid bursts out of the holes, shattering the shale and creating multiple fractures or fissures throughout the formation. (FracFocus, a nonprofit website, keeps lists of chemicals used in fracking across the country, but disclosure is not legally required and the companies listed submit their fracking fluid "ingredients" voluntarily.) The sand and chemicals in the fracking fluid work their way into the cracks created in the rock and hold them open, allowing the trapped natural gas to flow back into the horizontal wellbore. A fracture can be held open by just a few tiny grains of sand.

Source: US Patent No. 7569523B2

Associated equipment may include fracturing fluid tanks, proppant storage tanks, high-pressure treating iron, a chemical additive unit, low-pressure flexible hoses, and various gauges and meters for flow rate, fluid density, and treating pressure. On the high end of the spectrum, the pressure used for hydraulic fracturing may be as high as 15,000 psi and the injection rate as much as 100 barrels per minute (a back-of-the-envelope horsepower estimate based on these figures appears at the end of this article). The overall environment for fracking operations is very harsh, requiring equipment that can withstand extreme conditions. As hydraulic fracturing has become more widespread, the development of new and improved technologies has also ramped up.

Growing Demands

The demand for energy, and for natural gas in particular, has been increasing steadily. In 2019, the United States set a new record by using 85.0 billion cubic feet per day of natural gas, up 3% from the previous year. From 2007 to 2016, annual U.S. oil production increased 75 percent and natural gas production increased 39 percent, thanks to advancements in horizontal drilling and fracking technology. In 2009, the American Petroleum Institute estimated that 45% of the United States' natural gas production and 17% of its oil production would be lost within 5 years without hydraulic fracturing. Of US gas production in 2010, 26% came from tight sandstone reservoirs and 23% from shales, for a total of 49%. As production increased, there was less need for imports: in 2012, the US imported 32% less natural gas than it had in 2007. In 2013, the US Energy Information Administration projected that imports would continue to shrink and that the US would become a net exporter of natural gas around 2020. Increased US oil production from hydraulically fractured tight oil wells was largely responsible for the decrease in US oil imports since 2005: the US imported 52% of its oil in 2011, down from 65% in 2005. Hydraulically fractured wells in the Bakken, Eagle Ford, and other tight oil targets enabled US crude oil production to rise in September 2013 to its highest output since 1989.

Source: U.S. Energy Information Administration, Annual Energy Outlook 2013 Early Release

Top Players

Halliburton (HAL) was the first company to carry out a fracturing operation, back in 1949. Since then, it has been at the forefront of innovation and has now fractured over a million wells in the USA alone. Schlumberger (SLB) is the largest oilfield services company in the world, so it is no wonder that it has a large pressure pumping division. Like Halliburton, Schlumberger has stayed at the forefront of technological advancement through deals such as licensing agreements with Exxon Mobil (XOM) and acquisitions; in January 2018 it acquired Weatherford's US fracturing assets for $430 million in cash. Baker Hughes (BHI) is another big player in the hydraulic fracturing/pressure pumping space.
Baker Hughes can also take credit for many of the technological advances of recent years. It runs the Pressure Pumping Technology Center (PPTC), where its researchers focus on ways to frac more stages per day and reduce non-productive time (NPT). FTS International (FTS) is lesser known than the first three, but it is one of the largest well completion companies in North America, specializing in fracturing. It has also entered into a joint venture with Sinopec in China to further its fracturing programs. Together, these companies account for around three-quarters of the fracking market in the USA.

The above graph shows a count of patent families the assignees hold in fields related to hydraulic fracturing; Schlumberger Technology, Halliburton Energy Services, and Baker Hughes are the three US companies that own the most patent families.

As energy companies adopt technologies to equip themselves and their clients to take full advantage of these advances, they need to think about how to protect their native technologies and guard against patent infringement lawsuits. Copperpod provides IP consulting services such as infringement claim charts, prior art searches, and reverse engineering, and advises clients on patentability to give a clear picture of the state of the art, navigate away from potential prior art, and monetize IP assets.

References
https://en.wikipedia.org/wiki/Hydraulic_fracturing
https://www.nrdc.org/stories/fracking-101#history
https://www.eia.gov/todayinenergy/detail.php?id=43035
http://www.eia.gov/energy_in_brief/article/about_shale_gas.cfm
http://www.bseec.org/technological_improvements_to_hydraulic_fracturing
https://en.wikipedia.org/wiki/Hydraulic_fracturing_in_the_United_States
https://drillers.com/pressure-pumping-which-are-the-biggest-fracking-companies/
https://www.ipaa.org/fracking/
https://www.worldsciencefestival.com/2014/08/frack-hydraulic-fracturing-101/
https://www.orbit.com/

Uday is a research analyst at Copperpod IP. He has a bachelor's degree in Electronics and Communication Engineering. His interest areas are Microcontrollers, IoT, Semiconductors, and Memory Devices.
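Returning to the pump figures quoted earlier (up to 15,000 psi at up to 100 barrels per minute): as a rough back-of-the-envelope estimate using the standard oilfield hydraulic-horsepower rule of thumb (added here for illustration, not taken from the original article),

$$\text{HHP} \approx \frac{P\,[\text{psi}] \times Q\,[\text{bbl/min}]}{40.8} = \frac{15{,}000 \times 100}{40.8} \approx 3.7\times 10^{4}\ \text{hp}$$

so a single high-end treatment can call for tens of thousands of hydraulic horsepower, which is why frac spreads are built from many triplex or quintuplex pumps working in parallel.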
- What is TEM (Transmission Electron Microscopy)?

With a significant role in materials science, physics, (soft matter) chemistry, and biology, the transmission electron microscope is one of the most widely applied structural analysis tools to date. It has the power to visualize almost everything from the micrometer to the angstrom scale. Technical developments keep opening doors to new fields of research by improving aspects such as sample preservation, detector performance, computational power, and workflow automation. For more than half a century, and continuing into the future, electron microscopy has been a cornerstone methodology in science.

Transmission Electron Microscopy (TEM) has long been used in materials science as a powerful analytical tool. In TEM, a thin sample, less than 200 nm thick, is bombarded by a highly focused beam of single-energy electrons. The beam has enough energy for the electrons to be transmitted through the sample, and the transmitted electron signal is greatly magnified by a series of electromagnetic lenses. TEM combined with the precession 3D electron diffraction tomography technique has produced very promising results in crystal structure determination and has the great advantage of requiring only very small single crystals (25-500 nm) and very small quantities of material.

How does TEM work?

A transmission electron microscope uses an electron gun to fire a beam of electrons. The gun accelerates the electrons to extremely high speeds using accelerating voltages that typically range from about 100 to 300 kilovolts (specialized high-voltage instruments reach megavolt levels). The electron beam is focused into a thin, narrow beam by a condenser lens, whose aperture eliminates high-angle electrons. Having reached their highest speed, the electrons pass through the ultra-thin specimen, and parts of the beam are transmitted depending on how transparent the sample is to electrons. The objective lens focuses the portion of the beam that emerges from the sample into an image.

Another component of the TEM is the vacuum system, which is essential to ensure electrons do not collide with gas atoms. A low vacuum is first achieved using a rotary pump or diaphragm pumps, which reach a pressure low enough for a diffusion pump to operate; the diffusion pump then achieves a vacuum level high enough for operation. High-voltage TEMs require particularly high vacuum levels, and a third vacuum stage may be used.

The image produced by the TEM, called a micrograph, is projected onto a phosphorescent screen, which emits photons when irradiated by the electron beam. The image can be recorded on film or, more commonly, with a charge-coupled device (CCD) camera positioned beneath the screen. TEM can tell us about the structure, crystallinity, morphology, and stress state of a substance, whereas scanning electron microscopy (SEM) can only provide information about the morphology of a specimen. However, TEM requires very thin specimens that are semi-transparent to electrons, which can make sample preparation take longer.

Where can TEM help?

1. The transmission electron microscope is used to examine the structure, composition, and properties of specimens in sub-micron detail. Aside from its use in studying general biological and medical materials, TEM has a significant impact on fields such as materials science, geology, and environmental science.
2. The investigation of the morphology, structure, and local chemistry of metals, ceramics, and minerals is an important aspect of contemporary materials science. TEM also enables the investigation of crystal structures, orientations, and chemical compositions of phases, precipitates, and contaminants through diffraction patterns, characteristic X-rays, and electron energy loss analysis. Transmission electron microscopy can:
- Image the morphology of samples, e.g. view sections of material, fine powders suspended on a thin film, small whole organisms such as viruses or bacteria, and frozen solutions.
- Tilt a sample and collect a series of images to construct a 3-dimensional image.
- Analyze composition and some bonding differences (through contrast and by using spectroscopy techniques: microanalysis and electron energy loss).
- Physically manipulate samples while viewing them, such as indenting or compressing them to measure mechanical properties (only when holders specialized for these techniques are available).
- View frozen material (in a TEM with a cryostage).
- Generate characteristic X-rays from samples for microanalysis.
- Acquire electron diffraction patterns (using the physics of Bragg diffraction; the worked relations appear at the end of this article).
- Perform electron energy loss spectroscopy of the beam passing through a sample to determine sample composition or the bonding states of atoms in the sample.

3. Environmental forensic microscopy identifies sources of indoor and outdoor contaminants and improves estimates of total human exposure in residential and office settings. Samples of particles (dust, dirt, soil, or suspensions in liquid) are collected and analyzed by transmission electron microscopes to identify particle size and shape and determine possible sources. Building materials including floor tiles, roofing tars, and dust samples are analyzed by transmission electron microscopes to determine surface contamination resulting from settling asbestos and fiberglass dust.

4. In the earth sciences, TEM is used extensively by a growing number of researchers for direct observation of defect microstructures in minerals and rocks.

Copperpod IP helps attorneys evaluate patent infringement and uncover hard-to-find evidence of use through public documentation research, product testing and reverse engineering, including reverse engineering techniques outlined above. Please contact us at info@copperpodip.com to know more about our reverse engineering capabilities.
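To make the diffraction capability listed above concrete, here are the standard textbook relations behind it (general physics, not specific to any instrument mentioned in this article). The electrons' relativistically corrected de Broglie wavelength at accelerating voltage V, and Bragg's law for diffraction from lattice planes of spacing d, are

$$\lambda = \frac{h}{\sqrt{2 m_0 e V \left(1 + \frac{eV}{2 m_0 c^2}\right)}} \approx 2.5\ \text{pm at } V = 200\ \text{kV}, \qquad n\lambda = 2d\sin\theta$$

Because this wavelength is roughly a hundred times smaller than typical lattice spacings (a few hundred picometres), Bragg angles in TEM are only fractions of a degree, which is why electron diffraction spots cluster tightly around the transmitted beam.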
- What is EDX (Energy Dispersive X-Ray Spectroscopy)?

EDX systems are attachments to electron microscopy instruments (Scanning Electron Microscopy (SEM) or Transmission Electron Microscopy (TEM)), where the imaging capability of the microscope identifies the specimen of interest. The data generated by EDX analysis consist of spectra showing peaks corresponding to the elements making up the true composition of the sample being analyzed. In a multi-technique approach EDX becomes very powerful, particularly in contamination analysis and industrial forensic science investigations. The technique can be qualitative, semi-quantitative, or quantitative, and can also provide the spatial distribution of elements through mapping. EDX is non-destructive, and specimens of interest can be examined in situ with little or no sample preparation.

An electron beam is focused on the sample in either a scanning electron microscope (SEM) or a transmission electron microscope (TEM). The electrons from the primary beam penetrate the sample and interact with the atoms from which it is made. Two types of X-rays result from these interactions: Bremsstrahlung X-rays, meaning "braking radiation" and also referred to as continuum or background X-rays, and characteristic X-rays. The X-rays are detected by an energy dispersive detector, which displays the signal as a spectrum, or histogram, of intensity (number of X-rays or X-ray count rate) versus X-ray energy. The energies of the characteristic X-rays allow the elements making up the sample to be identified, while the intensities of the characteristic X-ray peaks allow the concentrations of the elements to be quantified.

How does EDX work?

When the sample is bombarded by the SEM's electron beam, electrons are ejected from the atoms at the sample's surface. The resulting electron vacancies are filled by electrons from a higher state, and an X-ray is emitted to balance the energy difference between the two electrons' states. The X-ray energy is characteristic of the element from which it was emitted. The EDS X-ray detector measures the relative abundance of emitted X-rays versus their energy. The detector is typically a lithium-drifted silicon, solid-state device. When an incident X-ray strikes the detector, it creates a charge pulse that is proportional to the energy of the X-ray. The charge pulse is converted to a voltage pulse (which remains proportional to the X-ray energy) by a charge-sensitive preamplifier. The signal is then sent to a multichannel analyzer, where the pulses are sorted by voltage. The energy, as determined from the voltage measurement, of each incident X-ray is sent to a computer for display and further data evaluation. The spectrum of X-ray energy versus counts is evaluated to determine the elemental composition of the sampled volume (a minimal code sketch of this pulse-binning step appears at the end of this article).

Where does EDX help?

1. Electrical/Electronic Materials
- EDXRF Analysis of Chlorine in Plastic (PE) Materials
- Screening Analysis with EDX-7000 Navi Software

2. Automobiles and Machinery
- Automobile Evaluation Instruments

3. Ferrous/Non-Ferrous Metals
- QC Analysis of Magnesium Alloy Die Castings by EDXRF
- EDXRF Analysis of Lead, Cadmium, Mercury and Chromium in Zinc Alloy
- EDXRF Analysis of Lead, Cadmium, Silver, Copper in Lead-Free Solder Materials
- Measurement of Lead in Lead-Free Solder by ICP-AES, FAAS and EDX

4. Ceramics
- Quantitative Analysis of Cement by EDX

5. Oil and Petrochemical
- Analysis of Inorganic Additives in Resin by FTIR and EDX
- EDXRF Analysis of PM2.5 (Particulate Matter)
- Analysis of Sulfur in Oil Using Energy Dispersive X-Ray Fluorescence Spectrometer
- Quantitative Analysis of Antimony (Sb) in Plastics by EDXRF
- Quantitative Analysis of Waste Oil by EDX-7000

6. Chemicals
- Analysis of Black Rubber Diaphragm by FTIR and EDX
- Quantitative Analysis of Elements in Small Quantity of Organic Matter by EDXRF - New Feature of Background FP Method

7. Environmental/Mining
- Analysis of Aqueous Solution by EDX-LE - Performance in Air Atmosphere
- Determination of Arsenic and Lead in Earth and Sand Using EDXRF [JIS K 0470]

8. Pharmaceuticals
- EDXRF Analysis of Arsenic and Lead in Dietary Supplement

9. Agriculture and Foods
- EDXRF Analysis of Arsenic in Foods
- Confirmation of Raw Material Quality - Dealing with "Silent Change" Counterfeiting - FTIR/EDX Food Contaminant Analysis System
- Qualitative and Quantitative Analysis of Seafood by EDXRF

Copperpod IP helps attorneys evaluate patent infringement and uncover hard-to-find evidence of use through public documentation research, product testing and reverse engineering, including reverse engineering techniques outlined above. Please contact us at info@copperpodip.com to know more about our reverse engineering capabilities.
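To illustrate the multichannel-analyzer step described in the "How does EDX work?" section above, here is a minimal, hypothetical sketch: each detected pulse is converted to an energy and accumulated into a histogram channel, yielding the counts-versus-energy spectrum. The function names, calibration constants, and pulse values are illustrative assumptions, not part of any real instrument's software.

```typescript
// Minimal sketch of the multichannel-analyzer (MCA) step in EDX.
// Assumption: pulse heights have already been digitized and calibrated
// so that energy [keV] = gain * pulseHeight + offset.

interface Spectrum {
  binWidthKeV: number; // energy width of one channel
  counts: number[];    // counts per channel (index = channel number)
}

function buildSpectrum(
  pulseHeights: number[], // digitized pulse heights from the preamplifier
  gain: number,           // keV per pulse-height unit (assumed calibration)
  offset: number,         // keV offset (assumed calibration)
  binWidthKeV = 0.01,     // 10 eV channels, a plausible order of magnitude
  maxEnergyKeV = 20
): Spectrum {
  const nChannels = Math.ceil(maxEnergyKeV / binWidthKeV);
  const counts = new Array<number>(nChannels).fill(0);

  for (const h of pulseHeights) {
    const energy = gain * h + offset;              // pulse height -> X-ray energy
    const channel = Math.floor(energy / binWidthKeV);
    if (channel >= 0 && channel < nChannels) counts[channel] += 1;
  }
  return { binWidthKeV, counts };
}

// Hypothetical usage: peaks in `counts` would then be matched against
// tabulated characteristic X-ray energies to identify elements.
const spectrum = buildSpectrum([512, 513, 511, 740, 741], 0.01, 0);
console.log(spectrum.counts.filter(c => c > 0).length, "non-empty channels");
```

In a real system the peak identification, background subtraction, and quantification steps that follow this binning are considerably more involved.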
- Is Embedded SIM (eSIM) a Solution For IoT Devices?

A subscriber identity module or subscriber identification module, popularly known as a SIM card, was invented in 1991 in Munich by the SIM card maker Giesecke & Devrient. It is an integrated circuit chip intended to securely store the international mobile subscriber identity (IMSI) number and its related key. The IMSI and key are used to identify and authenticate subscribers on mobile telephony devices such as mobile phones, tablets, and computers. SIM cards these days are also used to store contact information. As for technical specifications, a SIM card usually holds a unique serial number (ICCID), the IMSI number, security authentication and ciphering information, temporary information related to the local network, a list of the services the user has access to, and two passwords: a personal identification number (PIN) for ordinary use and a personal unblocking key (PUK) for PIN unlocking. The first SIM cards were sold to Radiolinja, a Finnish wireless network operator. In 2017, the SIM card market was valued at 3,440 million USD and is expected to grow to 3,620 million USD by 2025.

In today's modern era, however, there are machine-to-machine (M2M) applications that require no change of SIM cards whatsoever. This gives many electronic device manufacturers an incentive to avoid including SIM connectors in their products, and thus the embedded SIM, or embedded universal integrated circuit card (eUICC), popularly known as eSIM, was invented. It gives manufacturers the profound advantage of not only reducing the design complexity of their products but also providing greater reliability and network security. Rather than a SIM card inserted externally, an eSIM is an integrated part of the product's design; end users can add or remove operators without physically swapping a SIM, and it provides all the services and subscriptions offered by traditional removable SIM cards. eSIMs are also standardized by recognized industry bodies such as GSMA, ETSI, GlobalPlatform, and SIMalliance. At present, more than 200 network operators have launched or are planning to launch eSIM services, covering 90 countries across 5 continents. According to one research study, eSIM-enabled smartphone shipments are projected to grow from 255 million units in 2020 to 781 million in 2021.

eSIM Compliant Products

How Do SIM and eSIM Work?

Before looking at how an eSIM works in a typical use case, let's take a look at how a physical SIM works. Suppose Alice has just bought a mobile phone that connects via a physical SIM. The steps would be:
- Alice first chooses a company as her carrier and picks a plan from it.
- The carrier sends her a physical SIM card with its network-specific data stored on it.
- Alice puts the SIM into her mobile phone and installs it.
- The phone uses the data stored on the SIM to connect to the carrier.
- If Alice wants to switch carriers, she removes the SIM from her phone and installs the new carrier's SIM in its place.

The carrier data is stored on the physical SIM, and devices need that data to access the network. An eSIM, on the other hand, is essentially a SIM with empty data slots that is pre-installed and embedded in the device itself.
Instead of a physical SIM carrying the data required to connect to the network, the carrier can send that data over the internet for the eSIM to use. Here is how it works if Alice purchases a device with an eSIM:
- Alice purchases a mobile phone with eSIM support and picks and orders a plan she likes.
- The carrier sends her a QR code instead of a physical SIM.
- Alice scans the code, activating the plan, which triggers the next step.
- The provisioning system sends the SIM profile to an eSIM slot on her phone. The SIM profile contains the same data that would be stored on a physical SIM.
- Once it is installed, the phone uses the eSIM and the data stored in that slot just like a physical SIM card.

Unlike physical SIMs, however, an eSIM can store the data for multiple carriers. For example, if Alice wants a different number for a specific purpose, she simply downloads the new plan onto the same eSIM in a different slot. In this way she can keep both of her numbers without removing any SIM cards (a minimal code sketch of this slot model appears at the end of this section).

Industry Standards

Physical SIMs come in different standards, ranging from the full-size SIM introduced in 1991 to the nano-SIM introduced in early 2012, with the physical size of the card shrinking along the way. Finally, the eSIM was introduced in 2010.

Advantages of eSIM
- The onboarding experience is friendly and straightforward for end users: an eSIM allows an electronic device to be used as soon as it is switched on.
- End users can pick up a local prepaid phone number while traveling abroad, because eSIMs are rewritable, and thus avoid premium roaming charges.
- Logistics and support for service providers are simplified, as there are no more SIM cards to manage at the customer level.
- New business opportunities are being created for eSIM carriers, as eSIM extends mobile connectivity to many new consumer-connected devices.
- New designs are more reliable, smaller, dust resistant, and waterproof.

Disadvantages of eSIM
- It is not as easy to switch devices quickly. Today, if your mobile device stops working, you can simply remove the SIM card, put it into another device, and carry on. With an eSIM, you have to wait until the device is repaired.
- An eSIM cannot be removed from the device. Someone concerned about their location being tracked may therefore prefer a removable SIM.
- At the moment, eSIM functionality is offered mainly by top-end brands and their products. It will no doubt reach regular devices in time, but users need to wait.
- Technical support for eSIM chip malfunctions is not readily available at this point in time, and not in every location.

Like every coin with two sides, every technology has its advantages and disadvantages, and it is up to users whether the disadvantages can be lived with in exchange for the benefits. As the technology evolves, inventions will likely eliminate or manage these disadvantages and make the user experience friendlier, and in the coming years eSIM should appear on regular phones rather than remaining limited to expensive devices.
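To make the profile-slot idea above more concrete, here is a minimal, hypothetical sketch of how a device might track multiple downloaded carrier profiles on one eSIM. The types, function names, and example values are illustrative assumptions; real eUICC provisioning follows the GSMA Remote SIM Provisioning specifications and is far more involved.

```typescript
// Minimal sketch: an eSIM as a set of profile slots, each able to hold
// a downloaded carrier profile, with one profile enabled at a time.

interface CarrierProfile {
  carrier: string; // operator name (illustrative)
  iccid: string;   // profile identifier (illustrative)
  imsi: string;    // subscriber identity delivered with the profile
}

class ESim {
  private slots: (CarrierProfile | null)[];
  private enabledSlot: number | null = null;

  constructor(slotCount = 2) {
    this.slots = new Array(slotCount).fill(null);
  }

  // Step triggered after scanning the carrier's QR code:
  // the provisioning system delivers the profile into a free slot.
  downloadProfile(profile: CarrierProfile): number {
    const free = this.slots.findIndex(s => s === null);
    if (free === -1) throw new Error("No free eSIM slot");
    this.slots[free] = profile;
    return free;
  }

  // Switching carriers is a software operation; no physical swap needed.
  enable(slot: number): void {
    if (!this.slots[slot]) throw new Error("Empty slot");
    this.enabledSlot = slot;
  }

  activeProfile(): CarrierProfile | null {
    return this.enabledSlot === null ? null : this.slots[this.enabledSlot];
  }
}

// Hypothetical usage mirroring the Alice example above.
const esim = new ESim();
esim.downloadProfile({ carrier: "HomeTel", iccid: "8901000000000000001", imsi: "310150123456789" });
const travel = esim.downloadProfile({ carrier: "RoamCo", iccid: "8944000000000000002", imsi: "234100987654321" });
esim.enable(travel); // use the local prepaid plan while abroad
console.log(esim.activeProfile()?.carrier); // "RoamCo"
```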
Patent Analysis

Technology Trend

Every technology is marked by the inventions made in it. Presently, there are 51,606 patent families that refer to eSIM or eUICC, of which 11,407 are active and 40,199 are dead. Below is a trend chart showing how patent filing on eSIM has changed over the previous 10 years. The first version of the standard was launched in March 2016 and was followed by the second version in November 2016; this is reflected in the rapid increase in the number of patent families filed in 2015, which continued through 2016. As the technology evolves and new standards appear, these trends will shift each time a new method or system is introduced in this technology area.

Top 10 Players

The chart above shows the total number of patents assigned to different players in the eSIM industry. With 2,528 patents, Samsung Electronics leads the chart as the top player in the industry. Valmet ranks second with 838 patents. Players like Nubia Technology, Metso Paper, Hoechst, and Pfizer are almost on the same level, with little difference in the number of patents assigned to them. Apple (370 patents) sits at the bottom of the chart with the lowest patent count.

Conclusion

In 2018, over 360 million eSIM-based devices were shipped globally, a figure expected to reach 2 billion by the year 2025. These devices include mobile phones, tablets, digital watches, and IoT devices. Cellular connectivity is a core necessity of the mobile phone, and, like the SIM before it, the market for eSIM will flourish in the coming years; this is reflected in the number of devices shipped and the number expected in upcoming years, and the market value of eSIM is also increasing every year. IoT, which now spans most technical and non-technical fields such as agriculture, smart vehicles, global shipping, and asset and vehicle tracking, requires connectivity, and eSIM chips deliver exactly that. With enhanced security and reliability, and the design flexibility it offers designers, eSIM technology is making its way into IoT as well. With 5G entering the market, the user-friendly advantages of eSIM described above will boost the infrastructure required for 5G services such as IoT and M2M and prove to be a game changer in the market.

References
https://patents.google.com/patent/US9439062B2/en?oq=US9439062
https://www.forbes.com/uk/advisor/mobile-phones/esims/
https://bestphoneplans.com/blog/is-esim-right-for-me-pros-and-cons-of-esim-technology/
https://www.thalesgroup.com/en/markets/digital-identity-and-security/mobile/connectivity/esim/what-is-an-esim
https://www.podgroup.com/resources/insights/what-is-euicc/
https://en.wikipedia.org/wiki/SIM_card
http://www.three.co.uk/hub/sim-card-answers/
https://www.usmobile.com/blog/esim/
https://www.gsma.com/esim/wp-content/uploads/2018/12/esim-whitepaper.pdf

Sukhjeet is a research analyst at Copperpod IP. He has a Bachelor's degree in Electronics and Communications Engineering. His areas of interest are Wireless Communication, Internet of Things (IoT), Embedded Systems, 3D Prototyping, and Control and Automation. Copperpod helps attorneys evaluate patent infringement and uncover hard-to-find evidence of use through prior art search, product testing and reverse engineering.
Please contact us at info@copperpodip.com to know more about our reverse engineering capabilities. Keywords: SIM, telecom, patents, eSIM, embedded SIM, IMSI, esim technology, smartphones, dualsim 
- Patents For Students | Episode 3 | How to File a Patent?

Welcome to the 3rd episode of our series "Students and Patents - Intellectual Property". In the previous two episodes, we discussed the importance of filing patents for students and what precisely the eligibility standards are for them to go ahead with it. Now that we know the answers to those questions, we will move forward and learn "How Can A Student File A Patent?"
- Patents for Students | Episode 4 | Electronic Filing of Indian Patent Applications

Presenting the 4th video of the series "Students and Patents - Intellectual Property". In our previous videos, we discussed how a student can develop the mindset of filing a patent, perform research on patents, and determine whether he or she is eligible to do so. In continuation of those videos, Chandan Aggarwal (VP - Operations) presents the 4th video of the series, in which he walks through the interface of the Indian Patent Office website and explains the step-by-step procedure by which a student or a university can take up "Electronic Filing Of An Indian Patent Application".
- How does React work?

Created by Jordan Walke and maintained by Facebook, React is the most widely used front-end JavaScript library in the web development domain. Other widely used JavaScript libraries and frameworks include TensorFlow.js, Angular, and Node.js. React takes a declarative approach to application development that makes it simple to reason about the program while aiming for efficiency and flexibility. It is a component-based, open-source front-end library that is responsible only for the application's view layer. It creates basic views for each state in the project, and when the data changes, React updates and renders the appropriate component quickly. The declarative approach simplifies debugging and makes the code more predictable. Consider an Instagram page built entirely with React: React splits the user interface into several components, making the code easier to debug, and each component has its own attributes and functions.

Fundamentals of React

Components: Components are the fundamental building blocks of every React application, and most apps include several components. A component is essentially a user interface element. React divides the user interface into distinct, reusable components that can be handled independently. It uses two types of components (a minimal code sketch combining components, state, and props appears at the end of this article):
- Functional Components: These components are also known as stateless components because they have no state of their own. They can receive data from other components as props (properties).
- Class Components: These components have a distinct render function for returning JSX to the screen and may keep and control their own state. Because they can have state, they are sometimes termed stateful components.

State: The state object is a built-in React object that stores information or data about the component. A component's state can change over time, and when it does, the component must be re-rendered. The component's state may change as a result of user actions or system-generated events, and these changes affect the behavior of the component.

Properties (Props): Properties, abbreviated as props, form a built-in React object that stores the values of a tag's attributes and works similarly to HTML attributes. Props let you pass data from one component to another in the same way that arguments are passed to a function.

Features of React

JSX (JavaScript Syntax Extension) - JSX stands for JavaScript XML and is used by React to specify how the user interface should look. In React, JSX simplifies the process of writing and adding HTML: HTML structures can be written in the same file as JavaScript code. This avoids the need for complicated JavaScript DOM manipulation, making the code easier to comprehend and debug.

Virtual DOM - React uses a virtual DOM, a lightweight replica of the actual DOM (a virtual representation of the DOM). In the React virtual DOM, there is an object for every object that exists in the actual DOM. It is identical, except that it cannot modify the document's layout directly. Manipulating the real DOM is slow, but manipulating the virtual DOM is quick because nothing is rendered on screen. As a result, when the state of the application changes, the virtual DOM is updated first, rather than the real DOM.

One-Way Data Binding - One-way data flow is one of the most compelling reasons to choose React for your next project. The data flow in React is unidirectional.
As a result, developers cannot change any component directly; to make modifications to components, they use callback functions. Flux, a JavaScript application architecture, is used with React to govern data flow from a single point. A unidirectional data flow gives React developers more control over their web or mobile applications, which increases the application's flexibility as well as its efficiency.

React Native - React Native is a platform-specific React renderer. Instead of using web components, React Native employs native components. It transforms React code to make it compatible with Android and iOS and gives applications access to these platforms' native functionality.

Declarative UI - React is well suited to creating engaging and interactive user interfaces for mobile and web applications. When data changes, React efficiently renders and updates just the right components, and it creates a basic view for each application state. This improves the readability of the code and makes debugging easier.

Component-Based Architecture - React is built on a component-based architecture: the user interface of a React-based mobile or web application is split into several components, and each component follows its own logic. Instead of using templates, the logic is written in JavaScript. This allows React developers to pass data across the application without worrying about the DOM being affected. The components of React play a significant role in shaping how apps interact and look.

Key Benefits of React

Easy Building of Dynamic Applications: React makes it easier to create dynamic web applications by requiring less code and providing more functionality, as opposed to plain JavaScript, which can quickly become complicated.

Improved Performance: React leverages the virtual DOM, which speeds up web applications. The virtual DOM compares the components' previous states and updates only the items in the real DOM that have changed, rather than updating all of the components again as conventional web applications do.

Reusable Components: Components are the basic building blocks of every React application, and a single app includes a number of them. These components have their own logic and controls, and they can be reused across the application, reducing development time significantly.

Unidirectional Data Flow: Data flow is one-way in React. As a result, developers frequently nest child components within parent components when developing a React app. Because data flows in a single direction, debugging faults and detecting where a problem arises in an application becomes easier.

Easy To Learn: React is easy to learn, since it combines basic HTML and JavaScript fundamentals with a few useful enhancements. In fact, an experienced JavaScript developer can pick up React development in a matter of days or weeks.

Creating Both Web and Mobile Applications: React is best known for web applications, but that is not all it can do. React Native, derived from React, is a popular framework for building mobile apps, so React can be used to create both web and mobile applications.

Rich Toolset: React has a robust ecosystem that includes tools like Flux and Redux, and on the backend it pairs well with Node.js, where recent development trends focus on improving application performance.
Facebook has also released React Developer Tools, including a Chrome DevTools extension, which developers can use to find child and parent components, inspect component hierarchies, and much more. A working knowledge of React also aids source code review, since many well-known applications are built with it.

Famous Applications Using React

Facebook, Instagram, Netflix, the New York Times, WhatsApp, Discovery VR, Myntra, Discord, Airbnb, and Khan Academy.

Alternatives To React

The top alternatives to React include Inferno, Preact, Backbone.js, Aurelia, Ember.js, Svelte, Riot.js, Mithril, Vue.js, and Angular.

References
https://www.geeksforgeeks.org/react-js-introduction-working/
https://www.simplilearn.com/tutorials/reactjs-tutorial/what-is-reactjs
https://2019.stateofjs.com/front-end-frameworks/
https://www.thirdrocktechkno.com/blog/why-choose-reactjs-for-your-next-project-features-and-benefits/
https://dzone.com/articles/10-famous-apps-using-reactjs-nowadays
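To tie together the component, props, and state concepts from the "Fundamentals of React" section above, here is a minimal sketch of a functional component written in TypeScript (TSX). The component and prop names are illustrative only, not taken from any application mentioned in this article.

```tsx
import React, { useState } from "react";

// Props: data passed in from a parent component (read-only here,
// consistent with one-way data flow).
interface CounterProps {
  label: string;
  step?: number;
}

// A functional component that owns a piece of state via the useState hook.
// When `count` changes, React re-renders this component by diffing the
// virtual DOM against its previous version and patching only what changed.
function Counter({ label, step = 1 }: CounterProps) {
  const [count, setCount] = useState(0);

  return (
    <div>
      <p>{label}: {count}</p>
      <button onClick={() => setCount(count + step)}>Increment</button>
    </div>
  );
}

// Parent component composing two reusable instances with different props.
export default function App() {
  return (
    <main>
      <Counter label="Clicks" />
      <Counter label="Fast clicks" step={5} />
    </main>
  );
}
```

Note how the same Counter component is reused with different props, while each instance keeps its own independent state, which is the combination of reusability and local state that the article describes.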
- Deciphering the PCT Application Process

While there is no such thing as a world-wide patent, there is something that approximates a world-wide patent application and that can result in a patent being obtained in most countries around the world. This patent application is known as an International Patent Application, or simply an International Application. The international treaty that authorizes the filing of a single patent application to be treated as a patent application in countries around the world is the Patent Cooperation Treaty, most commonly referred to as the PCT.

What is the Patent Cooperation Treaty (PCT)?

The PCT is an international treaty with more than 150 Contracting States. It makes it possible to seek patent protection for an invention simultaneously in a large number of countries by filing a single "international" patent application instead of filing several separate national or regional patent applications. The granting of patents remains under the control of the national or regional patent offices in what is called the "national phase". The Patent Cooperation Treaty came into existence in 1970 and is open to States party to the Paris Convention for the Protection of Industrial Property (1883). Like any other treaty, it is a legal agreement entered into between various countries. The purpose of the PCT is to streamline the initial filing process, making it easier and initially cheaper to file a patent application in a large number of countries. By filing through the PCT process, patent protection for an invention can be pursued in every country that is a member of the Treaty.

PCT Patent Application Procedure

1. Filing

An international patent application may be filed by anyone who is a national or resident of a Member Country. Member Countries, also sometimes referred to as Contracting States, are simply those countries that are party to the international treaty. The appeal of the PCT process is that it enables patent applicants to file a single patent application and have that single, uniform application treated as an initial patent application in any Member Country. This single, uniform patent application is what is referred to as the international application. The international application must be filed with an authorized Receiving Office, which functions as the filing and formalities review organization for international applications. The patent offices of the countries that are members of the PCT act as Receiving Offices (e.g., the USPTO in the US).

2. International Phase

An International Searching Authority (ISA) or International Preliminary Examining Authority (IPEA) (one of the world's major patent offices) identifies the published patent documents and technical literature ("prior art") which may have an influence on whether the invention is patentable, and establishes a written opinion on the invention's potential patentability. The purpose of the international search is to discover relevant prior art.
"Prior art" consists of everything which has been made available to the public anywhere in the world by means of written disclosure (including drawings and other illustrations); it is "relevant" in respect of the international application if it can help determine whether or not the claimed invention is new, whether or not it involves an inventive step (in other words, whether it is or is not obvious), and whether the making available to the public occurred prior to the international filing date. The international search is made on the basis of the claims, with due regard to the description and the drawings (if any) contained in the international application. The results of the international search are set out in the international search report.

International Searching Authority may refuse to search certain subject matter

The International Searching Authority is not required to perform an international search on claims which relate to any of the following subject matter: (i) scientific and mathematical theories, (ii) plant or animal varieties or essentially biological processes for the production of plants and animals, other than microbiological processes and the products of such processes, (iii) schemes, rules or methods of doing business, performing purely mental acts or playing games, (iv) methods for treatment of the human or animal body by surgery or therapy, as well as diagnostic methods, (v) mere presentation of information, and (vi) computer programs to the extent that the Authority is not equipped to search prior art concerning such programs. However, certain International Searching Authorities do, in practice, search these fields to varying extents; for example, several International Searching Authorities search subject matter which is normally searched under the national (or regional) procedure.

International Search Report

The international search report must be established within three months from the receipt of the search copy by the International Searching Authority or nine months from the priority date, whichever time limit expires later. The international search report contains, among other things, the citation of the documents considered relevant, the classification of the subject matter of the invention (according to the International Patent Classification) and an indication of the fields searched (those fields being identified by a reference to their classification) as well as any electronic data base searched (including, where practicable, the search terms used). Citations of particular relevance must be indicated specially. Citations which are not relevant to all the claims must be indicated in relation to the claim or claims to which they are relevant. If only certain passages of the document cited are relevant or particularly relevant, they must be identified, for example by an indication of the page on which, or the column or lines in which, the passage appears. It is important to note that an international search report must not contain any expression of opinion, reasoning, argument or explanation of any kind whatsoever.
Supplementary International Search (Optional)

Supplementary international search permits the applicant to request, in addition to the international search carried out (the "main international search"), one or more supplementary international searches, each to be carried out by an International Authority (the "Authority specified for supplementary search") other than the International Searching Authority that carries out the main international search. Requesting supplementary international search reduces the risk of new prior art being cited in the national phase. The increasing diversity of prior art in different languages and different technical fields means that the Authority carrying out the main international search is not always capable of discovering all of the relevant prior art. Requesting one or more supplementary international searches during this early phase of patent prosecution expands both the linguistic and technical scope of the search. In addition, it may also be possible to have the supplementary search carried out in a State where the applicant is likely to enter the national phase later on. A supplementary search request must be filed with the International Bureau and not with the Authority specified for supplementary search. The International Bureau will transmit the request to the Authority specified for supplementary search once it has verified that all formal requirements have been complied with.

International Publication

As soon as possible after the expiration of 18 months from the earliest filing date, the content of the international application is disclosed to the world. International applications are published by the International Bureau, and publication of international applications filed under the PCT takes place wholly in electronic form. The published international application will include any declaration filed under Rule 4.17 and, if available at the time of publication, the international search report or a declaration by the International Searching Authority to the effect that no international search report will be established, and also any amendment, including any statement, under Article 19. Each published international application is assigned an international publication number consisting of the code WO followed by an indication of the year and a serial number (for example, WO 2004/123456). The published international application in electronic form is available on PATENTSCOPE.

International Preliminary Report on Patentability (IPRP) (Optional)

International preliminary examination is a second evaluation of the potential patentability of the invention. If the applicant wishes to make amendments to the international application in order to overcome documents identified in the international search report and conclusions made in the written opinion of the ISA, international preliminary examination provides the only possibility to actively participate in the examination process and potentially influence the findings of the examiner before entering the national phase: the applicant can submit amendments and arguments and is entitled to an interview with the examiner. At the end of the procedure, an international preliminary report on patentability (IPRP Chapter II) will be issued.

3. National Phase

The national phase is the second of the two main phases of the PCT procedure.
It follows the international phase and consists of the processing of the international application before each office of, or acting for, a Contracting State that has been designated in the international application. In each designated State the international application has the effect of a national (or regional) application as from the international filing date, and the decision to grant protection for the invention is the task of the office of, or acting for, that State (the "designated office").

Designated Office: The national office of a Contracting State is a "designated office" if the State is "designated" in the international application for national protection. The filing of a request constitutes the designation of all Contracting States that are bound by the Treaty on the international filing date.

Elected Office: Where a demand for international preliminary examination is filed, the term "elected office" is used, instead of "designated office", to denote the office of or acting for a State in which the applicant intends to use the results of the international preliminary examination. Since only designated States can be elected, all elected offices are necessarily also designated offices.

Time Limit: The time limit for entering the national phase before a designated office is 30 months from the priority date. For certain designated offices the applicable time limit is 20 months, not 30 months, because of the incompatibility, for the time being, of the modified PCT provision (PCT Article 22(1)) with the relevant national law; those offices made a declaration of incompatibility which remains in effect until withdrawn. The time limit for entering the national phase before an elected office is normally also 30 months from the priority date, the same as for a designated office which has not been elected. For the designated offices to which the 20-month time limit applies, the time limit becomes 30 months from the priority date if the applicant files a demand for international preliminary examination prior to the expiration of 19 months from the priority date. The national law applied by each elected office may fix a time limit which expires later than 30 months from the priority date. (A minimal sketch computing the headline deadlines appears at the end of this article.)

What Must be Done by the Applicant Before the Start of the National Phase?
- Payment of the national fee.
- Furnishing of a translation, if prescribed. A translation of the international application must be furnished if the language in which it was filed or published is not a language accepted by the designated office.
- In exceptional cases (if a copy of the international application has not been communicated to the designated office), furnishing of a copy of the international application, except where not required by that office.
- In exceptional cases (if the name and address of the inventor were not given in the request when the international application was filed, but the designated office allows them to be given later than at the filing of a national application), furnishing of the name and address of the inventor.

Gagandeep advises clients on infringement investigations related to electronics, telecommunications and software. He has a Master's degree in Electrical, Electronics and Communications Engineering and a Bachelor's degree in Electronics Engineering.
His interest areas are Internet of things (IoT), Semiconductor, Operating Systems (Android/iOS/Windows/Linux), Embedded Software and Sensor Networks. Keywords: pct application search, pct application process, patent cooperation treaty, pct procedure, pct patent search, pct countries, patent pct, wipo pct search, pct application patent #patents #wipo #pctapplications #intellectualproperty 
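As a quick illustration of the headline PCT time limits discussed above, here is a minimal, hypothetical sketch that computes the usual milestone dates from a priority date. The function names and the example date are illustrative; the 30-month figure is the general rule only, and, as noted above, some offices apply different limits, so actual deadlines must always be checked against each office's requirements.

```typescript
// Minimal sketch of the headline PCT time limits described above,
// counted from the priority date. Assumes the general 30-month rule
// for national-phase entry; a few offices differ.

function addMonths(date: Date, months: number): Date {
  const d = new Date(date.getTime());
  d.setMonth(d.getMonth() + months);
  return d;
}

function pctTimeline(priorityDate: Date) {
  return {
    // ISR: 3 months from receipt of the search copy or 9 months from the
    // priority date, whichever is later (only the 9-month leg shown here).
    searchReportLatest: addMonths(priorityDate, 9),
    internationalPublication: addMonths(priorityDate, 18),
    nationalPhaseEntry: addMonths(priorityDate, 30), // general rule
  };
}

// Hypothetical example: a priority application filed on 1 March 2021.
const t = pctTimeline(new Date("2021-03-01"));
console.log(t.internationalPublication.toDateString()); // around September 2022
console.log(t.nationalPhaseEntry.toDateString());       // around September 2023
```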
- State of Software Patents Around the World

Technology is the cornerstone of the digital world, and much of its practicality rests in software. As a matter of fact, all economic sectors are becoming dependent on software to leverage growth, and this has important implications for IP laws. For many years now there has been a continuous debate over software patents: which software is patentable and which is not. Until the late 20th century, most functionality, particularly in systems relying on semiconductors, was implemented predominantly in hardware modules; lately, the focus has been migrating from hardware products to software products. Moreover, every country has its own set of rules to define and analyze the patentability of software.

United States

The US Supreme Court laid out a two-step test for software patentability: first, determine whether the computer-implemented claims are directed to an "abstract idea"; if they are, the claims must recite additional elements that "transform" the abstract idea into a patent-eligible invention (Alice v. CLS Bank, 134 S. Ct. 2347 (2014)). The United States Patent and Trademark Office (USPTO) stated in 1996 that "a practical application of a computer related invention is statutory subject matter. This requirement can be discerned from the variously framed prohibitions against the patenting of abstract ideas, laws of nature or natural phenomenon." There have been several landmark rulings on the rights of patent owners issued by the US Supreme Court and the Court of Appeals for the Federal Circuit.

DDR Holdings, LLC v. Hotels.com, L.P. (Fed. Cir. 2014)

DDR Holdings accused Hotels.com and other defendants in the United States District Court for the Eastern District of Texas of infringing US Patent Nos. 6,993,572 and 7,818,399. The 2014 decision of the Federal Circuit substantially bolstered software-related patents. One of the challenged patents related to techniques for website development; its claims addressed the problem of retaining website visitors who could otherwise be drawn away from a website by clicking on an advertisement. According to the Federal Circuit, because the invention was "necessarily rooted in computer technology in order to overcome a problem specifically arising in the realm of computer networks", it satisfied the requirements of 35 USC 101. "Although the claims address a business challenge, it is a challenge particular to the internet", the court held.

United Kingdom

UK patent law holds that if a computer-related innovation or program makes a contribution that is technical in nature, the development may be patentable. Several signposts act as guidelines when assessing software patents:
- Whether the claimed technical effect has a technical effect on a process which is carried on outside the computer.
- Whether the claimed technical effect operates at the level of the architecture of the computer, that is, whether the effect is produced irrespective of the data being processed or the applications being run.
- Whether the claimed technical effect results in the computer being made to operate in a new way.
- Whether the program makes the computer a better computer in the sense of running more efficiently and effectively.
- Whether the perceived problem is overcome by the claimed invention as opposed to merely being circumvented.

If the claimed innovation fails all of these guidelines, the software is unlikely to be patentable.
Germany

German patent law regards a computer-implemented invention as one which involves the use of a computer, computer network, or other programmable apparatus, the invention having one or more features which are realized wholly or partly by means of a computer program. Computer programs without a "technical contribution" are treated as purely verbal creations and are protected by copyright; however, a computer-implemented invention is patentable if it has technical character.

People's Republic of China

Chinese patent law provides that "no patent shall be granted to rules and methods for mental activities." Since a computer program as such falls into the bracket of "rules and methods for mental activities", it is not patentable; but if the subject matter "adopts technical means, resolves a technical problem and creates a technical effect", it can qualify as patentable subject matter under Chinese patent law.

India

Indian patent law states that for software to qualify for patentability, it should satisfy three major requirements: novelty, inventive step, and industrial applicability. In addition, the invention should be patentable subject matter and its specification must meet the standards for a patent application. Moreover, the law says "a mathematical or business method or a computer program per se or algorithms are not inventions and therefore not patentable" (Section 3, Indian Patents Act, 1970).

South Korea

Section 2.2 of the examination guidelines looks for "concrete means" to justify the patentability of software, meaning that the invention should involve both hardware and software. Computer-related inventions can be claimed in the following forms: an apparatus (device), a process (method), a computer-readable medium, or a computer program stored on a medium.

It is hard to believe that the debate on the patentability of software will die down within at least the next decade. Considering the astonishing speed at which technology is growing, excluding software from patent protection might hamper technical build-outs and choices, thereby hampering technology alliances. As digitization makes its way into various strata of our lives, it is increasingly important for governments to reconsider the present playing field and to weigh the importance of patent protection for technical applications that incorporate software-implemented innovations and inventions. Patent laws offer the most powerful framework for protecting an invention and increasing the range of safer and more efficient everyday products. Patent protection for software can therefore create conditions that encourage innovators and engineers to devote more resources to software development and improve the state of the art.

Copperpod provides patentability search services that give a clear picture of the state of the art and help navigate away from potential prior art. We have delivered 200+ prior art search reports across multiple technology domains such as telecommunications, cloud computing, life sciences, and cryptography and security. As part of patentability searches, Copperpod's technical team also provides suggestions to widen the scope and maximize future licensing opportunities for the patent, while navigating away from potential rejections due to software patentability laws around the world.

Chandan provides procedural advice and assistance to attorneys and corporations in connection with matters related to patent infringement and IP litigation.
Chandan has worked in the patent search and analytics domain for 6 years and has worked extensively on providing patent strategy solutions to Fortune 50 corporations.
- Hydroxychloroquine for COVID-19: What is it and who makes it?
The story of hydroxychloroquine began in 1638, when the wife of the Viceroy of Peru contracted malaria. She was treated by an Incan herbalist with the bark of a tree and recovered dramatically. When the Viceroy returned to Spain, he brought with him large supplies of the bark powder for general use; its distribution at the time was controlled by the Church, and it was thus called "Jesuit's Powder". It took nearly two centuries for the active substance, quinine, to be isolated from the bark. Over the next century, quinine became a common component of folk medicines and patent remedies for the treatment of malaria in the southern United States. By the 1940s, quinine and its derivative chloroquine (C18H26ClN3) were recognized for their anti-malarial properties and found use among troops fighting in the Pacific during World War II. However, the compound was noted to have significant toxicities. In 1945, a modification of this compound via hydroxylation led to the development of hydroxychloroquine (C18H26ClN3O), which was found to be less toxic and remains in use, essentially unchanged, to this day. Hydroxychloroquine was approved for medical use in the United States in 1955.

Today, chloroquine is used to prevent and treat malaria and amebiasis, while hydroxychloroquine is used to treat malaria as well as rheumatic diseases such as systemic lupus erythematosus, rheumatoid arthritis, juvenile idiopathic arthritis and Sjogren's syndrome. Sold under the brand name Plaquenil, it is on the World Health Organization's List of Essential Medicines. In 2017, hydroxychloroquine was the 128th-most-prescribed medication in the United States, with more than five million prescriptions.

Recently, hydroxychloroquine has been floated as a potential candidate for treating COVID-19. Although the results of studies on its effectiveness have not been unanimous, some studies have suggested that it blocks viruses from binding to human cells and entering them to replicate, potentially improving treatment success rates and shortening hospital stays. Combining hydroxychloroquine with the antibiotic azithromycin has also been reported to produce positive patient outcomes. Further, US President Donald Trump's promotion of hydroxychloroquine and the US FDA's approval of the drug for emergency use have led to a surge in global demand for the inexpensive drug.

India is the world's largest producer of hydroxychloroquine and exported $51 million worth of the drug in FY19. Nearly half the supply of hydroxychloroquine to the U.S. comes from makers in India (Dr. Reddy's Laboratories and Zydus). The top U.S. supplier of hydroxychloroquine, Zydus Pharmaceuticals Inc., is a subsidiary of India-based Cadila Healthcare Ltd.; it sold over 167 million units of the anti-malarial in 2019 and has supplied 28 million units to retail and institutional channels in the U.S. so far this year. Prasco Labs, based in Cincinnati, is the largest producer of hydroxychloroquine in the US.

Seminal Patents
1. US2546658A: 7-chloro-4-[5-(n-ethyl-n-2-hydroxyethylamino)-2-pentyl] aminoquinoline, its acid addition salts, and method of preparation
Application Date: July 23, 1949
Grant Date: March 27, 1951
Current Assignee: STWB Inc
This patent proposes a method to prepare 7-chloro-4-(5-(N-ethyl-N-2-hydroxyethylamino)-2-pentyl) aminoquinoline. The compound is useful as an antimalarial agent and can be used either in the free base form or in the form of its acid-addition salts.
2. US5314894A: (S)-(+)-hydroxychloroquine
Application Date: Sep 15, 1992
Grant Date: May 24, 1994
Current Assignee: Sanofi SA
The invention proposes compositions of hydroxychloroquine that are useful in the treatment of acute attacks and suppression of malaria due to Plasmodium, susceptible strains of Plasmodium falciparum, systemic and discoid lupus erythematosus, and rheumatoid arthritis.

3. US6572858B1: Uses for anti-malarial therapeutic agents
Application Date: May 1, 2000
Grant Date: June 3, 2003
Current Assignee: APT Pharmaceuticals, LLC
The patent provides a method to treat inflammatory diseases via local delivery to the patient of a composition containing an anti-malarial agent such as hydroxychloroquine.

4. US20040167162A1: Uses for anti-malarial therapeutic agents
Application Date: May 1, 2000
Current Assignee: APT Pharmaceuticals, LLC
The patent application proposes a method for treating a viral infection in a mammal by targeted delivery of an aminoquinoline or hydroxyquinoline, where the virus is an adenovirus, rhinovirus, human coronavirus or influenza virus.

5. US7553844B2: Methods for treatment of HIV or malaria using combinations of chloroquine and protease inhibitors
Application Date: Feb 20, 2004
Grant Date: June 30, 2009
Current Assignee: Jarrow Formulas Inc
The invention relates to a drug combination capable of conferring therapeutic benefits in the treatment of both AIDS and malaria. In particular, it relates to a drug combination including at least one quinolinic antimalarial compound, such as chloroquine or hydroxychloroquine, and at least one inhibitor of the Human Immunodeficiency Virus (HIV) protease enzyme. This drug combination is capable of inhibiting the replication of both HIV and Plasmodium sp. It also relates to the direct antimalarial effects of the HIV protease inhibitors.
- Active Noise Cancellation: Innovations and Applications
HTC recently unveiled its latest addition to the ever-growing market of virtual reality headsets, the Vive Pro HMD, most notably the first VR headset on the market equipped with Active Noise Cancellation (ANC). Though one tends to associate ANC only with high-end headphones, there is scope for the technology in a much larger number of applications. With noise levels around us constantly increasing, one cannot overlook the health hazards of continuous exposure to high levels of noise in environments such as boardrooms, sophisticated industrial spaces, emergency vehicles, hospitals and military vehicles. Noise cancellation technology has reached a high level of refinement in headphones and smartphone microphones, which do not have to deal with spatial considerations, but the real barrier appears when dealing with large areas. Increasing the effective zone of silence over a wide area of space has been a major challenge, and researchers are constantly developing new methods (algorithms) and improving existing ones to address noise over a wider area efficiently. The biggest problem in implementing noise cancellation outside of a headset is the rapid growth of the target area: the more points at which noise must be eliminated, the more processing power is needed and the more complex the cancelling algorithm becomes.

Noise in a given area can be reduced in two ways:

1. Passive Noise Control: In this method, the noise source is physically separated from the subject (listener) so that sound waves travelling from the source to the subject are attenuated as much as possible. The isolation can be achieved by building soundproof chambers, using sound-absorbing materials such as Styrofoam and jute, carpeting floors, and other measures. This method is not well suited to low-frequency sound (for example the bass in music): low-frequency sound is poorly absorbed by typical barriers and passes through with little attenuation. To control and reduce sounds of lower frequencies, we need active noise control.

2. Active Noise Control: Active noise control (or active noise cancellation) is a newer technique that works on the principle of cancelling noise by superimposing on it a phase-reversed replica of itself. For example, if we add a sine wave to an exact replica of the same sine wave shifted 180° in phase, the two interfere destructively and the noise level drops.

Fig. 1, Superimposition of noise and anti-noise (Wikipedia)

One of the major concerns in cancelling noise in 3-D space is the number of error microphones, cancellation speakers and adaptive filters needed to achieve accuracy, all of which add to system complexity. Another barrier is accounting for the different phase shifts that each frequency acquires as it is reflected around the room. A speaker cancelling sound that originates from a different location in the room may impose a different phase shift on some frequencies than the source does, so the cancellation will be poor. Because of constructive interference, this can substantially increase the effective noise level received in some zones instead of decreasing it (the short sketch below illustrates both the ideal case and the effect of a phase error).
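The sketch is a minimal Python illustration of this superposition principle and its sensitivity to phase errors; the tone frequency and the phase-error values are arbitrary choices made for the example and are not taken from any particular product.

```python
import numpy as np

fs = 48_000                                  # sample rate in Hz (arbitrary)
t = np.arange(0, 0.01, 1 / fs)               # 10 ms of signal
noise = np.sin(2 * np.pi * 440 * t)          # a 440 Hz "noise" tone

# Ideal anti-noise: same amplitude, exactly 180 degrees out of phase
anti_ideal = np.sin(2 * np.pi * 440 * t + np.pi)

# Imperfect anti-noise: phase shift off by 30 and 90 degrees, as might
# happen when room reflections alter the path to the listener
anti_30 = np.sin(2 * np.pi * 440 * t + np.pi + np.deg2rad(30))
anti_90 = np.sin(2 * np.pi * 440 * t + np.pi + np.deg2rad(90))

def rms(x):
    return np.sqrt(np.mean(x ** 2))

print(f"noise alone          : {rms(noise):.3f}")
print(f"exact 180° inversion : {rms(noise + anti_ideal):.3f}")  # essentially zero
print(f"30° phase error      : {rms(noise + anti_30):.3f}")     # residual noise remains
print(f"90° phase error      : {rms(noise + anti_90):.3f}")     # louder than the noise alone
```

With an exact inversion the residual is essentially zero; a 30° error leaves an audible residual (roughly half the original amplitude), and a 90° error actually makes the tone louder than the uncancelled noise, which is precisely the amplification risk described above.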
The likelihood of such amplification increases considerably when the source of the sound is to the side of the listener rather than directly in front or behind, which generally leads to noise being cancelled at one ear but amplified, through constructive interference, at the other. Commercial applications of spatial noise cancellation, including noise cancellation in aircraft cabins and car interiors, are primarily based on FFT analysis owing to the cyclic nature of engine vibrations. The real challenge is to eliminate non-periodic ambient noise containing a multitude of frequencies and phase shifts so as to create a zone of silence. As appealing as the concept is in application, the science behind it is correspondingly complex.

The most widely used algorithm for achieving active noise cancellation is the Least Mean Squares (LMS) algorithm, one of the earliest adaptive algorithms and still in wide use. It employs an adaptive FIR filter whose coefficients are estimated so that the mean square of the error signal is minimized; the coefficients are adjusted using the steepest descent method[i].

Fig. 2, Block diagram showing implementation of the LMS algorithm (Indian Journal of Science and Technology)

The algorithm is implemented so as to reduce the residual noise received by the subject (e(n) in the figure). The phase-reversed anti-noise signal (y(n)) is generated in real time by a controller unit, which may be a digital signal processor or any other microprocessor/microcontroller programmed for the purpose. The error signal (residual noise) is often fed back to the system to help the algorithm adapt to the present output and adjust the running parameters (filter coefficients) to further decrease the residual noise level.

The LMS algorithm has been widely implemented in vehicular active noise cancellation systems. These systems employ ANC to cancel the noise emanating from outside the vehicle and create a zone of silence for one or more persons sitting inside it[ii]. The systems may also include a mechanism to differentiate between the following sound signals:
· signals outside the defined voice band (to be eliminated) and inside the voice band (which may or may not need to be eliminated)
· signals emanating from outside the vehicle (to be eliminated) and from inside the vehicle (not to be eliminated)
The decisions are made by a weighting process carried out by a discriminator, which takes input from the various sensors and microphones installed throughout the vehicle and decides whether signal components lying in the voice band should be cancelled. For example, a person's voice coming from outside the vehicle is cancelled, but a person's voice coming from inside the vehicle remains unaffected. Noise reduction inside a vehicle cabin can also be achieved by reading the road and travel conditions and using a pre-determined, stored set of filter coefficients for the LMS algorithm[iii]. The road conditions are constantly monitored using a variety of sensors that capture data from the vehicle suspension, acceleration, engine speed and so on; whenever a change of state is detected, the filter coefficients are switched dynamically by picking up a new set from memory. The sketch below illustrates the core LMS update loop for the single-channel case.
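The sketch is illustrative only: the reference signal, the acoustic path and the parameter values (filter length, step size) are invented for the example, and the anti-noise is assumed to reach the listener unchanged, an idealization that the FxLMS variant discussed further below removes. The signal names follow the notation above: x(n) is the reference, d(n) the noise at the listener, y(n) the anti-noise and e(n) the residual.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 8_000
n = fs * 2                                    # two seconds of samples
t = np.arange(n) / fs

# x(n): reference signal picked up near the noise source
x = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(n)

# d(n): noise as heard at the listener, i.e. the reference after an
# unknown acoustic path (modelled here as a short made-up FIR filter)
primary_path = np.array([0.6, 0.25, -0.1, 0.05])
d = np.convolve(x, primary_path)[:n]

L = 16                  # adaptive FIR filter length
mu = 0.005              # step size for the steepest-descent update
w = np.zeros(L)         # adaptive filter coefficients
e = np.zeros(n)         # residual noise at the listener

for i in range(L, n):
    x_buf = x[i - L + 1:i + 1][::-1]   # most recent L reference samples
    y = w @ x_buf                      # anti-noise estimate y(n)
    e[i] = d[i] - y                    # residual heard by the listener
    w += mu * e[i] * x_buf             # LMS coefficient update

print("noise power without ANC      :", np.mean(d ** 2))
print("residual power, first 500 pts:", np.mean(e[L:500] ** 2))
print("residual power, last 500 pts :", np.mean(e[-500:] ** 2))
```

In a real cabin system there would be several reference sensors and error microphones, and the coefficient update would typically be gated by the discriminator logic described above.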
In another implementation, a modified LMS algorithm known as the DXHS (Delayed-X Harmonic Synthesis) algorithm has been used. It is specifically useful in applications where the noise source consists only of fixed harmonics, such as ambulance sirens, and has been used to shield the paramedic crew from siren noise through ANC-enabled headsets[iv].

A widely used enhancement of LMS is the Filtered-X LMS (FxLMS) algorithm. The cancellation path (from the anti-noise speaker to the error microphone) plays a critical role in ANC systems, and FxLMS takes it into account by filtering the reference signal with an estimate of the cancellation-path transfer function; this estimate is often modelled online or refreshed at regular intervals to keep the system stable[v] (a short sketch of this reference-filtering step appears just before the references below). FxLMS is already implemented in a wide range of ambient-noise-reduction headphones. Broadly, two different arrangements are used:
· Feedback arrangement (FB): uses a noise-capturing microphone inside the ear cup; generally implemented in headphones with large ear cups.
· Feedforward arrangement (FF): uses a noise-capturing microphone outside the ear cup; generally implemented in ear-bud-style earphones.
With the advent of wireless headphones based on Bluetooth and Wi-Fi, additional battery-powered circuitry is needed for wireless reception of audio data, which tends to make the system bulkier; the ANC functionality can be incorporated into the wireless communication controller to streamline the system. One such implementation[vi] uses an ANC controller combining a fixed feedforward controller fed by an external microphone, a fixed feedback controller fed by an internal (error) microphone, and an adaptive feedforward controller, together forming a hybrid feedforward-feedback controller for attenuating broadband noise. The coefficients of the adaptive feedforward controller are determined in accordance with the FxLMS algorithm. Another implementation is an electronic pillow that abates snoring and other environmental noises[vii]. It uses a multichannel feedforward ANC system with adaptive FIR filters based on the FxLMS algorithm: it creates a quiet zone centred on the user by detecting noise such as snoring and generating a cancelling signal, using multiple embedded error microphones and speakers placed at predetermined positions together with a controller unit coupled to them.

Advancements in technology have enabled us to see what we want to see and, increasingly, to hear what we want to hear, gaining control over the otherwise involuntary sense of hearing. Algorithms that improve on the commercially used LMS algorithm (FxLMS and DXHS being just two) have been widely successful in research environments and are waiting their turn to make a commercial impact. Given how rapidly the consumer electronics market has grown over the last decade, and the ever-increasing consumer demand for new technologies, ANC remains prime real estate for new innovation and new applications not only in consumer electronics but also in industry, transportation and healthcare.
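To make the FxLMS reference-filtering step described earlier concrete, here is a minimal single-channel sketch in Python. It is a toy model under stated assumptions: the primary path, the secondary (cancellation) path and its estimate are short made-up FIR filters, the path estimate is assumed to be exact, and all parameter values are arbitrary; none of this is taken from the patents or papers cited in this article.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 20_000
x = rng.standard_normal(n)                 # broadband reference noise

primary = np.array([0.5, 0.3, -0.2, 0.1])  # noise source -> error microphone (made up)
secondary = np.array([0.8, 0.2, -0.05])    # anti-noise speaker -> error microphone (made up)
sec_hat = secondary.copy()                 # estimate of the secondary path (assumed exact here)

d = np.convolve(x, primary)[:n]            # noise arriving at the error microphone
xf = np.convolve(x, sec_hat)[:n]           # "filtered-x": reference passed through the path estimate

L, mu = 16, 0.005                          # controller length and step size (arbitrary)
w = np.zeros(L)                            # adaptive controller coefficients
y = np.zeros(n)                            # controller output driven to the speaker
e = np.zeros(n)                            # residual noise at the error microphone

for i in range(L, n):
    x_buf = x[i - L + 1:i + 1][::-1]       # latest L reference samples
    y[i] = w @ x_buf                       # anti-noise command
    # anti-noise as it actually arrives at the error microphone, through the secondary path
    y_at_mic = secondary @ y[i - len(secondary) + 1:i + 1][::-1]
    e[i] = d[i] - y_at_mic                 # residual = noise minus delivered anti-noise
    xf_buf = xf[i - L + 1:i + 1][::-1]
    w += mu * e[i] * xf_buf                # FxLMS update uses the *filtered* reference

print("noise power without ANC         :", np.mean(d ** 2))
print("residual power, first 1,000 pts :", np.mean(e[L:1_000] ** 2))
print("residual power, last 1,000 pts  :", np.mean(e[-1_000:] ** 2))
```

Commercial systems typically add online secondary-path modelling (for example by injecting a low-level auxiliary signal to keep the path estimate current) and run one such update loop per error microphone.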
References:
[i] M. Sharma and R. Vig, "Ambulance Siren Noise Reduction using LMS and FxLMS Algorithms", Indian Journal of Science and Technology
[ii] "Noise Reduction Apparatus", US Patent US7020288B1
[iii] "Vibration/Noise Active Control System for Vehicles", US Patent US5758311A
[iv] Y. Shimada, T. Fujikawa, Y. Nishimura, T. Usagawa and M. Ebata, "An Active Control Headset for Crew Members of Ambulance", IEEE TENCON 1999
[v] Sakshi Gaur and V. K. Gupta, "A Review on Filtered-X LMS Algorithm", International Journal of Signal Processing Systems
[vi] "ANC for BT Headphones", US Patent Application US2012/0170766A1
[vii] "Electronic Pillow for Abating Snoring/Environmental Noises, Hands Free Communications", US Patent US8325934B2
#electronics #emergingtech
- NVIDIA + Arm - The Anatomy of A Semiconductor Beast
Two of the biggest players in the semiconductor industry are set to merge in a $40 billion cash-plus-stock deal. NVIDIA's acquisition of Arm would be the biggest semiconductor deal of the century and, in the companies' words, create the world's premier computing company for the age of Artificial Intelligence (AI). Our data-intensive applications could soon be running on Arm-designed chips with NVIDIA support, driving innovation in high-growth markets. Arm powers the world's largest computing ecosystem and is wholly owned by Japan's SoftBank Group Corp. Under the deal, SoftBank will retain an interest in Arm's future through a stake of just under 10 percent in NVIDIA.

The semiconductor industry has been modernizing by shifting its focus to specialized chips that deliver greater performance and efficiency. In a competitive field that includes Intel and AMD, NVIDIA has maintained its position with GPU technology buoyed by the AI market. NVIDIA has spent the past year rounding out its portfolio to build its computing ecosystem: it completed the $7 billion purchase of Mellanox Technologies in April and signed acquisition deals with SwiftStack and Cumulus. The acquisition of Arm will undoubtedly be the focal point of NVIDIA's strategy over the next decade.

Technology Domain
The data show the patent strength of Arm and NVIDIA across technology domains, with the computer technology domain alone accounting for over 3,000 patents. This suggests that, over time, more computing is likely to move to the cloud, especially as more companies adopt AI across a wide range of applications and create new opportunities for data sharing.

Patent Filing Trend
The graphs show NVIDIA's and Arm's patent filing trends over the past decade. With 4,724 patents to NVIDIA's name and 5,410 to Arm's, both companies have clearly focused on strengthening their IP. NVIDIA's filings peaked in 2013 and Arm's in 2016, reflecting Arm's highly lucrative position in the market and NVIDIA's own licensing model. Arm is ranked the world's leading provider of silicon IP for chips that sit at the core of billions of devices, and its IoT products would complete NVIDIA's portfolio. Over 95 percent of the smartphone market is powered by Arm designs, and Arm IP is likewise essential for data centers, PCs and IoT devices. Arm's primary business, however, is licensing that IP; NVIDIA itself is an Arm licensee, having licensed Arm designs for its Tegra CPUs, which have accounted for roughly 13% of its revenue. The chart below shows the number of patents held by Arm and NVIDIA in each geography.

Geographical Patent Strength
As seen in the chart, NVIDIA holds 600 patents in Taiwan while Arm holds 1,361; in the United States, Arm has 2,955 patents and NVIDIA has 3,849. With their combined patent counts, the two companies would gain a far stronger position in IP licensing.

"This combination has tremendous benefits for both companies, our customers, and the industry. For Arm's ecosystem, the combination will turbocharge Arm's R&D capacity and expand its IP portfolio with NVIDIA's world-leading GPU and AI technology," said Jensen Huang, founder and CEO of NVIDIA.
"Arm and NVIDIA share a vision and passion that ubiquitous, energy-efficient computing will help address the world's most pressing issues from climate change to healthcare, from agriculture to education," said Simon Segars, CEO of Arm.

Conclusion
This landmark deal is set to unite NVIDIA's AI expertise with Arm's vast computing ecosystem. AI is a huge part of NVIDIA's hardware business, and the acquisition would bring that research and technology into the Arm ecosystem, while Arm's IP licensing portfolio would be expanded with NVIDIA technology. If the deal closes as planned, it should be a significant power boost, and we may see innovation continue at an exponential pace.

Gagandeep advises clients on infringement investigations related to electronics, telecommunications and software. He has a Master's degree in Electrical, Electronics and Communications Engineering and a Bachelor's degree in Electronics Engineering. His interest areas are the Internet of Things (IoT), semiconductors, operating systems (Android/iOS/Windows/Linux), embedded software and sensor networks. He loves to play soccer, badminton and cricket; he follows soccer religiously and is a fan of FC Barcelona (and a "Messi" follower).