3D printing helps PEO Soldier reduce 120 lb marching load of US Army Infantry

PEO Soldier’s ‘Warrior Integration Site,’ a laboratory where personnel are developing ways to reduce the weight carried by US Army infantry, is using 3D printing to prototype new products. Lightweight body armor, new fabrics, and other items will each contribute to the weight reduction.

There are a thousand reasons why being an infantry soldier is difficult: you’re stationed away from home, you have to maintain an incredibly high level of physical fitness, and you’re often putting your life at risk, and those are just the obvious ones. But besides those matters, there’s also the small issue of having to carry around 120 lb of equipment on extended missions. Although foot soldiers need all of the items that they carry, this huge burden reduces their mobility and effectiveness in combat. Because of this, specialists are doing all they can to reduce that total weight, and have turned to 3D printing and other emerging technologies to help them.

Program Executive Office Soldier (PEO Soldier) is an American governmental organization set up to develop, acquire, and field state-of-the-art equipment for the Army in order to improve military performance. The organization has a number of subdivisions and locations, but one facility in particular is focusing its efforts on reducing the weight, cost, and power consumption of infantry uniforms and equipment. The Warrior Integration Site (WinSite) bills itself as a “collaborative design environment,” and is using 3D printing for the fast prototyping of new, lightweight tools.

With a soldier’s “marching load” sometimes reaching around 132 lb, WinSite has been tasked specifically with developing more integrated and lightweight uniform and equipment solutions, eliminating any unnecessary products and materials while maintaining or even increasing the effectiveness of the soldier. “What we are attempting to do is get a more integrated and operational system. We are considering the Soldier as an operational system,” said Maj. Daniel Rowell, Associate Product Manager of Integration at Program Executive Office Soldier.

To test a newly designed piece of equipment, prototypes can be created on the 3D printer, then fitted to a uniformed mannequin as part of a general assessment. But as well as rapid prototyping with a 3D printer, staff at WinSite are also using 3D scanning technology to “digitalize” existing Army equipment, enabling designers to integrate and modify that equipment in new CAD projects, and are developing integrated biometric sensors for uniforms which can detect heart rate, breathing, and blood pressure. These sensors could be used to alert Army medics to wounded soldiers, potentially saving many lives.

In its effort to fully harness the power of additive manufacturing, the US Army has recently embarked upon a number of other 3D printing projects. Earlier this year, it teased the possibility of military 3D-printed drones, and last year it announced a partnership with 3D Systems to develop a 3D printing lab. While PEO Soldier’s WinSite isn’t using 3D printing for particularly headline-grabbing purposes, its work could significantly enhance the effectiveness of infantry soldiers.

Engineers Program Human Cells to Record Analog Memories

MIT biological engineers have devised a memory storage system, illustrated here as a DNA-embedded meter recording the activity of a signaling pathway in a human cell. (Image courtesy of MIT.)

A group of biological engineers has devised a method to record complex histories in the DNA of human cells, allowing them to retrieve “memories” of prior events, such as inflammation, by sequencing the DNA.

This analog memory storage system – the first that can record the duration and/or intensity of events in human cells – could also help scientists study how cells differentiate into various tissues during embryonic development, how cells experience environmental conditions, and how they undergo genetic changes that lead to disease.

“To enable a deeper understanding of biology, we engineered human cells that are able to report on their own history based on genetically encoded recorders,” said Timothy Lu, an MIT associate professor of electrical engineering and computer science, and of biological engineering. This technology should offer insights into how gene regulation and other events within cells contribute to disease and development, he added.

Analog Memory:

Many scientists, including Lu, have devised ways to record digital information in living cells. Using enzymes known as recombinases, they program cells to flip sections of their DNA when a particular event occurs, such as exposure to a specific chemical. However, that technique reveals only whether the event occurred, not how much exposure there was or how long it lasted.

Lu and other researchers had previously devised methods to record that sort of analog information in bacteria, but until now, no one had achieved it in human cells.

The new MIT approach is based on the genome-editing system known as CRISPR, which consists of a DNA-cutting enzyme called Cas9 and a short RNA strand that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut.

CRISPR is widely used for gene editing, but the MIT team decided to adapt it for memory storage. In bacteria, where CRISPR originally evolved, the system records past viral infections so that cells can recognize and fight off invading viruses.

“We wanted to adapt the CRISPR system to store information in the human genome,” said Perli.

When using CRISPR to edit genes, scientists create RNA guide strands that match a target sequence in the host organism’s genome. To encode memories, the MIT team took a different approach: they designed guide strands that recognize the DNA that encodes the guide strand itself, creating what they call “self-targeting guide RNA.”

Guided by this self-targeting guide RNA strand, Cas9 cuts the DNA encoding the guide strand, generating a mutation that becomes a permanent record of the event. Once mutated, that DNA sequence generates a new guide RNA strand that directs Cas9 to the newly mutated DNA, allowing further mutations to accumulate for as long as Cas9 is active or the self-targeting guide RNA is expressed.

By using sensors for specific biological events to regulate Cas9 or self-targeting guide RNA activity, this system enables progressive mutations that accumulate as a function of those biological inputs, thus providing genomically encoded memory.

For example, the researchers engineered a gene circuit that only expresses Cas9 in the presence of a target molecule, such as TNF-alpha, which is produced by immune cells during inflammation. Whenever TNF-alpha is present, Cas9 cuts the DNA encoding the guide sequence, generating mutations. The longer the exposure to TNF-alpha, or the higher its concentration, the more mutations accumulate in the DNA sequence.

By sequencing the DNA afterward, researchers can determine how much exposure there was.

“This is the rich analog behavior that we are looking for, where, as you increase the duration or amount of TNF-alpha, you get increases in the number of mutations,” said Perli.
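The dose- and duration-dependent accumulation Perli describes can be illustrated with a toy simulation. All rates and units below are invented for illustration; they are not figures from the MIT study:

```python
import random

def record_exposure(concentration, hours, cut_rate=0.01, seed=0):
    """Toy analog memory: each hour, Cas9 mutates the guide-encoding
    locus with probability proportional to signal concentration.
    The accumulated mutation count is the stored 'memory'."""
    rng = random.Random(seed)
    mutations = 0
    for _ in range(hours):
        if rng.random() < min(1.0, cut_rate * concentration):
            mutations += 1
    return mutations

# Longer or stronger exposure leaves more mutations behind:
print(record_exposure(concentration=10, hours=24))
print(record_exposure(concentration=10, hours=240))  # 10x the duration
print(record_exposure(concentration=50, hours=24))   # 5x the concentration
```

Sequencing-based readout then corresponds to counting those mutations after the fact, recovering an analog estimate of the exposure.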

“Moreover, we wanted to test our system in living animals. Being able to record and extract information from live tissues in mice can help answer meaningful biological questions,” Cui said. The researchers showed that the system is capable of recording inflammation in mice.

Most of the mutations result in deletion of part of the DNA sequence, so the researchers designed their guide RNA strands to be longer than the usual 20 nucleotides, so that they won’t become too short to function. Sequences of 40 nucleotides are more than long enough to record for a full month, and the researchers have also designed 70-nucleotide sequences that could be used to record biological signals for even longer.

Tracking Development and Disease:

The researchers also showed they could engineer cells to detect and record more than one input by producing multiple self-targeting guide RNA strands in the same cell. Each guide RNA is linked to a specific input and is only produced when that input is present. In this study, the researchers showed that they could record the presence of both the antibiotic doxycycline and a molecule known as IPTG.

Currently this method is most likely to be used for studies of human cells, tissues, or engineered organs, the researchers say. By programming cells to record multiple events, scientists could use this system to monitor inflammation or infection, or to monitor cancer progression. It could also be useful for tracing how cells specialize into different tissues during development from embryo to adult.

“With this technology you could have different memory registers recording exposures to different signals, and you could see that each of those signals was received by the cell for a certain period of time or at a certain intensity,” Perli said. “That way you could get closer to understanding what’s happening in development.”


NASA Competition to Develop Dexterous Humanoid Robots for Mars

NASA’s Robonaut, R5. (Image courtesy of NASA.)

NASA and global consultancy firm NineSigma have announced the start of a competition to “develop humanoid robots to help astronauts on Mars.”

The million-dollar competition, aptly named the Space Robotics Challenge, aims to create a framework for a humanoid robot that is flexible and dexterous and can withstand the brutal Martian conditions.

To take home the $1M prize, teams will be required to program a virtual robot modeled after NASA’s Robonaut R5. The computer programs written by participants will have to guide the R5 through a series of tasks, and do so with a forced latency imposed on communication between program and robot.

NASA says this latency represents the time it would take for instructions to be sent from Earth to Mars: approximately 20 minutes on average, depending on the distance between the two planets.
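That delay is simple to sanity-check: a command signal travels at the speed of light, so the one-way latency is just the Earth–Mars distance divided by c. A quick sketch, using rough approximations for the distances:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def one_way_delay_minutes(distance_km):
    """One-way light-speed signal delay over a given distance."""
    return distance_km / C_KM_S / 60.0

closest_km = 54.6e6   # Earth-Mars near closest approach (approx.)
farthest_km = 401e6   # near conjunction (approx.)

print(f"closest:  {one_way_delay_minutes(closest_km):.1f} min")   # ~3 min
print(f"farthest: {one_way_delay_minutes(farthest_km):.1f} min")  # ~22 min
```

The roughly 20-minute figure NASA quotes sits toward the far end of that range, and a round trip doubles it, which is why the robot must act autonomously between instructions.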

While NASA’s clever latency trap shouldn’t be a huge obstacle for programmers, the obstacles they’ll need to face might be a bit more of a challenge. NASA’s vision for the task is a horrific one.

Each participant will be asked to steer their virtual R5 through a Martian hellscape where a dust storm has just damaged a habitat (no word on whether astronauts were inside, or whether any of them survived). Surveying the damage, the R5 will need to align an off-kilter communications dish, repair a damaged solar array and fix the habitat’s breached hull.

“Precise and dexterous robotics, able to work with a communications delay, could be used in spaceflight and ground missions to Mars and elsewhere for hazardous and complicated jobs, which will be essential to support our astronauts,” said Monsi Roman, program manager of NASA’s Centennial Challenges.

According to NASA, the development of flexible, dexterous robotic technologies will be critical for sustaining human life off-world. In fact, engineers at the agency are already planning ways to deploy these bots, including sending them to the Red Planet to select landing sites, create habitats, construct life-support systems and perhaps conduct scientific missions.


Flexible Concrete Won’t Crack Under Pressure

(Image courtesy of Nanyang Technological University.)

The ancient building material concrete is getting a performance boost thanks to a clever reformulation.

Sand, water, cement and gravel. Those are the ingredients of concrete, one of the most ubiquitous building materials on Earth. Since its invention millennia ago, concrete has served as the foundation for structures, roadways and all types of infrastructure. Although it’s a good material, it does have its flaws. Specifically, concrete is brittle and can crack under pressure.

For two thousand years, that’s been concrete’s Achilles’ heel. But things may be changing.

According to Nanyang Technological University (NTU) professor Chi Jian, “We have created a new type of concrete that can help reduce the thickness and weight of precast pavement slabs, thus enabling speedy plug-and-play installation, where new concrete slabs prepared off-site can quickly replace worn-out ones.”

Named ConFlexPave, this reformulated concrete holds true to the age-old recipe but adds a twist by including polymer microfibers in the cocktail. The introduction of these polymers means that loads which would traditionally cause concrete to crack can be distributed across a larger area of the material, giving ConFlexPave greater resiliency.

(Image courtesy of Nanyang Technological University.)

“The microfibers, which are thinner than the width of a human hair, distribute the load across the whole slab,” said Associate Professor Yang En-Hua. “[Thus] producing a concrete that’s tough as steel and at least twice as strong as regular concrete under bending.”

While table-sized slabs of ConFlexPave have proven reliable in laboratory settings, NTU researchers will continue to scale up the amount of ConFlexPave they pour in order to confirm that the material will work as expected once it’s released into the real world.

Though flexible concrete might seem like a mundane technical advance, the impact it could have on worldwide infrastructure can’t be overstated. If flexible concrete can be poured far and wide, billions, if not trillions, of dollars in infrastructure maintenance could be saved.

What’s more, because flexible concrete can be poured in thinner layers, less material will be needed to repave roadways, thus saving money and energy. Concrete buildings might also be made more resistant to cracking under the stress of earthquakes. The list of benefits goes on and on.


Volvo and Uber Team Up for Self-Driving Cars

(Image courtesy of Volvo.)

Volvo Cars and Uber have announced that they will join forces to develop next-generation autonomous cars.

The two companies have signed an agreement to establish a joint project that will develop new base vehicles able to incorporate the latest developments in autonomous driving technologies, up to fully autonomous driverless cars.

The base vehicles will be manufactured by Volvo and then purchased from Volvo by Uber. Volvo and Uber are contributing a combined USD $300 million to the project.

Both Uber and Volvo will use the same base vehicle for the next stage of their own autonomous car strategies. For Uber, this will involve adding its self-developed autonomous driving systems to the Volvo base vehicle. Volvo will use the same base vehicle for the next stage of its own autonomous car strategy, which will involve fully autonomous driving.

The Volvo-Uber project marks a significant step in the automotive business with a car manufacturer joining forces with a new Silicon Valley-based entrant to the car industry, underlining the way in which the global automotive industry is evolving in response to the advent of new technologies. The alliance marks the beginning of what both companies view as a longer term industrial partnership.

The new base vehicle will be developed on Volvo’s fully modular Scalable Product Architecture (SPA). SPA is currently used on Volvo’s XC90 SUV, as well as the S90 premium sedan and V90 premium estate.

SPA has been developed as part of Volvo’s $11-billion global industrial transformation program, which began in 2010 and has been prepared from the outset for the latest autonomous drive technologies, as well as next-generation electrification and connectivity developments.

The development work will be conducted by Volvo engineers and Uber engineers in close collaboration. This project will enhance the scalability of the SPA platform to include all needed safety, redundancy and new features required to put autonomous vehicles on the road.

Travis Kalanick, Uber’s chief executive, said: “Over one million people die in car accidents every year. These are tragedies that self-driving technology can help solve, but we can’t do this alone. That’s why our partnership with a great manufacturer like Volvo is so important. By combining the capabilities of Volvo and Uber, we will get to the future faster, together.”


New Phononics Research Aims to Change How Sound Waves Behave

This experimental laser ultrasonic setup in collaborator Nick Boechler’s lab will create phonons with nature-defying characteristics. (Image courtesy of Nicholas Boechler.)

For decades, advances in electronics and optics have driven progress in information technology, energy and biomedicine. Now researchers are pioneering a new field — phononics, the science of sound — with repercussions potentially just as profound.

“If engineers can get acoustic waves to travel in unnatural ways, as they are starting to do with light waves, the world could look and sound radically different,” said Pierre Deymier, University of Arizona (UA) professor and head of materials science and engineering.

Imagine a wall that lets you whisper to a person on the other side but does not let you hear that person. Or a Band-Aid that images tissue through the vibrations it emits. Or a personal computer that uses phonons, a type of particle that carries sound and heat, to store, transport and process information in ways unimaginable with conventional electronics.

“It may sound like weird science, but I believe it is the wave of the future,” Deymier said.

Breaking the Laws of Waves:

The principle of reciprocity says that waves, such as electromagnetic, light and acoustic waves, behave the same regardless of their direction of travel. It is a symmetrical process, unless there is a material barrier that breaks that symmetry.

There often is. Sound and light waves lose power when encountering a wall, for example, and may reverse course. The nine NewLAW projects aim to break this symmetry of light and sound waves by making them travel in only one direction. Thus, when encountering a wall, a sound wave might continue around it, or even be completely absorbed by it.

Other researchers have created sophisticated materials that bend light in unnatural ways to render parts of an object invisible. Similarly, Deymier’s research could lead to walls that allow sound to pass more easily in one direction, or objects that remain silent when approaching from one direction.

The Power of Phonons:

Most modern technologies are based on the manipulation of electrons and photons. Deymier is one of the pioneers in the emerging discipline of phononics, which encompasses numerous disciplines, including quantum physics and mechanics, materials science and engineering, and applied mathematics.

He has developed specialized phononic crystals: artificial elastic structures with unusual acoustic wave propagation features, such as the ability to increase the resolution of ultrasound imaging with superlenses, or to process information with sound-based circuits.

For the new NSF-funded study, he is using an advanced material, chalcogenide glass, whose mechanical properties can be dynamically modulated in space and time to break reciprocity and transmit sound in a single direction.

This type of investigation could ultimately yield a vast selection of products with unusual features that could improve noise abatement, ultrasonic imaging and information processing technologies, Deymier said.

“Imagine a computer whose operation depends on processing information transported by sound through nonreciprocal phononic elements instead of electrical diodes, or a medical ultrasonic imaging device with extraordinary resolution.”

“Working with phonons is incredibly fascinating,” Deymier said. “We’re going to change the way people think about sound and are opening an entire new world.”

Deymier has received $1.9 million from the National Science Foundation’s Emerging Frontiers in Research and Innovation (EFRI) program to lead a four-year study on manipulating how sound waves behave. His collaborators are UA professor of materials science and engineering Pierre Lucas and Nicholas Boechler, assistant professor of mechanical engineering at the University of Washington.


Revisiting Technology to Keep Astronauts on Their Feet

If you’ve never watched astronauts tripping over rocks on the moon, you should take the time to do so.

Then consider the danger of a suit puncture occurring while an astronaut trips over rocks on the moon, and it becomes a bit less entertaining and considerably more concerning.

In an effort to help these clumsy walkers and others here on terra firma, experts at MIT are developing special shoes that could be integrated into a navigation system to help the wearer avoid obstacles to mobility.

Avoid obstacles by listening to the sole – of your shoes, that is. (Image courtesy Jose-Luis Olivares/MIT.)

This is far from a new concept. Haptic feedback in shoes has been around for years, but the team at MIT has taken a different approach, going back to the drawing board to determine the best way to implement this sort of technology.

By researching the areas of the foot that are most sensitive to the feedback motors, Leia Stirling, an assistant professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), whose group led the work, took the technology back to basics.

“A lot of students in my lab are looking at this question of how you map wearable sensor information to a visual display, or a tactile display, or an auditory display, in a way that can be understood by a nonexpert in sensor technologies,” said Stirling. “This preliminary pilot study allowed Alison [Gibson, a graduate student in AeroAstro and first author on the paper] to learn about how she could create a language for that mapping.”

The research shows not only that certain areas of the foot are less receptive to the feedback, but also that people had difficulty attending to the stimuli or identifying differences in feedback intensity while distracted.

“Trying to provide people with more information about the environment, especially when not only vision but other sensory information (auditory as well as proprioception) is compromised, is a really good idea,” said Shirley Rietdyk, a professor of Health and Kinesiology at Purdue University who studies the neurology and biomechanics of falls.

“From my perspective, [this work could be useful] not only for astronauts but for firemen, who have well-documented issues interacting with their environment, and for people with compromised sensory systems, such as older adults and people with diseases and disorders.”

The work could apply directly to other navigation systems for the differently abled, such as MIT’s virtual “guide dog” 3D camera system. Such integration, with a variety of output methods, would allow people at any ability level to navigate as easily as anyone else.


Detangling the Complexity of Waves with Acoustic Voxels

Columbia Engineering researchers were able to control the acoustic response of an object when it is tapped and thereby tag the object acoustically. Given three objects with identical designs, a smartphone can read the acoustic tags in real time by recording and analyzing the tapping sound and thereby identify each object. (Image courtesy of Changxi Zheng/Columbia Engineering.)

A novel way to simplify the design of acoustic filters through simulation has been developed in a collaborative effort among engineering researchers.

The engineering research team behind this advance chose a fairly simple shape (a hollow cube with holes on some of its six faces) as their base module so that it could be 3D printed. This new technique is capable of determining optimal filter designs, which then enables the selective reduction of sounds at specific frequencies.

This approach has been named “Acoustic Voxels” by its creators. Acoustic Voxels helps designers veer away from trial-and-error iterations in the design of acoustic filters. Instead, the program precomputes the acoustic properties of an object. It also enables the user to simulate the filter with varying properties.

Additionally, the engineering research team behind Acoustic Voxels created a technique for computationally optimizing attachments between filters in order to achieve a desired effect. Acoustic Voxels operates 70,000 times faster than current algorithms used to predict acoustic properties.

An interesting outcome of Acoustic Voxels was that the team could design acoustic tags into objects that appeared identical to one another. However, when tapped, each object would produce a distinctive sound. Although the affected frequencies often depend significantly on the shape of the cavity, the exact influence of the shape is complex and difficult to understand.
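The smartphone readout described above boils down to frequency analysis of the recorded tap. A minimal sketch of the idea follows; the sample rate and tag frequencies are invented for illustration and are not taken from the paper:

```python
import math

def dominant_freq(samples, rate):
    """Return the frequency (Hz) of the strongest bin in a naive DFT."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * rate / n

# Two identical-looking objects "ring" at different tag frequencies:
rate = 8000
tap_a = [math.sin(2 * math.pi * 440 * i / rate) for i in range(400)]
tap_b = [math.sin(2 * math.pi * 880 * i / rate) for i in range(400)]
print(dominant_freq(tap_a, rate))  # 440.0
print(dominant_freq(tap_b, rate))  # 880.0
```

A real implementation would use an FFT and match against a table of registered tag spectra, but the principle is the same: the cavity shape fixes the resonant frequencies, and the analyzer reads them back.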

Acoustic Voxels not only sped up and computationally optimized the design process, it also enabled the design of more complex geometries. Current computational tools are limited to simpler shapes.

When waves are transmitted through a cavity, some of them are reflected back and forth. These reflected waves either result in a constructive superposition, which amplifies the sound, or destructive superposition, which muffles the sound. This is how acoustic filters operate.
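That superposition principle is easy to demonstrate numerically: an in-phase reflected copy doubles the peak amplitude, while a half-wavelength shift cancels it. A minimal sketch, not the researchers' simulation code:

```python
import math

def peak_amplitude(phase_shift, n=1000):
    """Peak of an incident sine wave summed with a reflected copy
    shifted by phase_shift radians, sampled over one period."""
    return max(abs(math.sin(x) + math.sin(x + phase_shift))
               for x in (2 * math.pi * i / n for i in range(n)))

print(peak_amplitude(0.0))      # constructive: ~2.0 (amplified)
print(peak_amplitude(math.pi))  # destructive: ~0.0 (muffled)
```

A filter design, in effect, chooses cavity geometry so that the reflections arrive with the right phase shifts at the frequencies to be suppressed.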

Wojciech Matusik, associate professor of electrical engineering and computer science at the MIT Computer Science and Artificial Intelligence Laboratory, explained the current state of the study: thus far, the method is mostly suitable for controlling impedance and transmission loss at discrete frequencies, as in traditional muffler design.

However, the scope of the study covered only one form of a single material. “Extending our method to additional materials and shapes can offer a larger palette for better acoustic filtering control,” said Matusik.

The engineering research team behind Acoustic Voxels was a collaborative group, made up of members from Disney Research, the Massachusetts Institute of Technology and Columbia University. This development was supported by the National Science Foundation.


Transparent Wood Windows are Cooler than Glass

(Image courtesy of the University of Maryland.)

Engineers have demonstrated that windows made of transparent wood could provide more even and consistent natural light and better energy efficiency than glass.

In a paper just published in the journal Advanced Energy Materials, the team, headed by Liangbing Hu of the University of Maryland’s department of materials science and engineering, lays out research showing that their transparent wood provides better thermal insulation and lets in nearly as much light as glass, while eliminating glare and providing uniform and consistent indoor lighting. The results advance earlier published work on their development of transparent wood.

“The transparent wood lets through just a little bit less light than glass, but a lot less heat,” said Tian Li, the lead author of the new study. “It is very transparent, but still allows for a small amount of privacy because it is not completely see-through. We also learned that the channels in the wood transmit light with wavelengths around the range of the wavelengths of visible light, but that it mostly blocks the wavelengths that carry heat,” said Li.

The team’s findings were derived, in part, from tests on a small model house the team built with a transparent wood panel in the ceiling. The tests showed that the light was more evenly distributed in a space with a transparent wood roof than with a glass roof.

The channels in the wood direct visible light straight through the material, but its cell structure bounces the light around just a little bit, a property called haze. This means the light does not shine directly into your eyes, making it more comfortable to look at. The team photographed the transparent wood’s cell structure in the University of Maryland’s Advanced Imaging and Microscopy (AIM) Lab.

Transparent wood still has all the cell structures of the original piece of wood. The wood is cut against the grain, so that the channels that drew water and nutrients up from the roots lie along the shortest dimension of the window. The new transparent wood uses these natural channels to guide sunlight through the wood.

As the sun passes over a house with glass windows, the angle at which light shines through the glass changes as the sun moves. With windows or panels made of transparent wood rather than glass, as the sun moves across the sky, the channels in the wood direct the sunlight the same way every time.

“This means your cat would not have to get up out of its good patch of sunlight every few minutes and move over,” Li said. “The sunlight would stay in the same place. Also, the room would be more evenly lit at all times.”

Working with transparent wood is similar to working with natural wood, the researchers said. However, their transparent wood is waterproof due to its polymer component. It also is much less breakable than glass because the cell structure inside resists shattering.

The research team has patented their process for making transparent wood. The process begins with bleaching all of the lignin from the wood; lignin is a component that makes the wood both strong and brown. The wood is then soaked in epoxy, which adds strength back and makes the wood clearer. The team has used small squares of linden wood about 2 cm x 2 cm, but they have noted that the wood can be any size.


New Audi Shock Absorber System Generates Electricity from Kinetic Energy

(Image courtesy of Audi.)

The recuperation of energy plays an increasingly important role in transportation, including in a car’s suspension. Audi is currently working on a prototype known as “eROT,” in which electromechanical rotary dampers replace the hydraulic dampers in use today.

The principle behind eROT is easily described: “Every pothole, every bump, every curve induces kinetic energy in the car. Today’s dampers absorb this energy, which is lost in the form of heat,” said Dr.-Ing. Stefan Knirsch, board member for technical development at AUDI AG. “With the new electromechanical damper system in the 48-volt electrical system, we put this energy to use.”

The eROT system is designed to respond quickly and with minimal inertia. As an actively controlled suspension, it adapts to irregularities in the road surface and to the driver’s style. A damper characteristic that is virtually freely definable via software increases the functional scope.

It eliminates the mutual dependence of the rebound and compression strokes that limits conventional hydraulic dampers. With eROT, Audi configures the compression stroke to be soft without compromising the taut damping of the rebound stroke.

The eROT system enables a second function besides the freely programmable damper characteristic: it can convert the kinetic energy of compression and rebound into electricity. To do this, a lever arm absorbs the motion of the wheel carrier. The lever arm transmits this motion via a series of gears to an electric motor, which converts it into electricity.

The recuperation output averages 100 to 150 watts during testing on German roads – from 3 watts on a freshly paved freeway to 613 watts on a rough secondary road. Under customer driving conditions, this corresponds to a CO2 savings of up to three grams per kilometer (4.8 g/mi).
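The arithmetic behind those figures is easy to check. Here is a minimal sketch; the 100 km/h cruising speed used in the second calculation is our assumption, not Audi’s:

```python
# Back-of-envelope checks on the eROT figures quoted above.

KM_PER_MILE = 1.609344

def g_per_km_to_g_per_mile(g_km):
    """Convert an emissions figure from g/km to g/mile."""
    return g_km * KM_PER_MILE

def wh_per_km(avg_power_w, speed_kmh):
    """Energy recuperated per kilometre at a given average power and speed."""
    return avg_power_w / speed_kmh

print(round(g_per_km_to_g_per_mile(3), 1))  # 3 g/km is ~4.8 g/mile
print(wh_per_km(150, 100))                  # 150 W at 100 km/h is 1.5 Wh/km
```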

The new eROT technology is based on a high-output 48-volt electrical system. As currently configured, its lithium-ion battery offers an energy capacity of 0.5 kilowatt-hours and a peak output of 13 kilowatts. A DC converter connects the 48-volt electrical subsystem, which includes a high-efficiency, enhanced-output generator, to the 12-volt primary electrical system.

Audi reports that preliminary test results for the eROT technology are promising, which means its use in future Audi production models is certainly plausible. A prerequisite for this is the 48-volt electrical system, a central component of Audi’s electrification strategy.

In the next version, planned for 2017, the 48-volt system will serve as the primary electrical system in a new Audi model and feed a high-performance mild hybrid drive. According to the company, it will offer potential fuel savings of up to 0.7 liters per 100 kilometers.


SCUBAJET – A Watersports Jet Engine

When Patrizia Giovanniello became a parent, she scaled back her water activities on Lake Constance in Switzerland. With her boyfriend and daughter she enjoyed time on a stand-up paddleboard (SUP), but she worried about unpredictable weather and currents stranding her family far from shore. Armin, her boyfriend, and his father found a solution to this problem by developing Scubajet, a flexible jet engine for water sports.

Scubajet can connect to stand-up paddleboards, small dinghies, canoes, kayaks or divers. The engine can reach a speed of up to six knots, runs for 1.5 hours on a single battery charge, and the campaign page says that the device is completely free of emissions. The motor is rated at up to 1.5 kilowatts.
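As a sanity check on those specs, here is a rough conversion sketch. It assumes, optimistically, that the full battery charge is spent at top speed; real-world range would likely be lower:

```python
# Rough range and power conversions from the quoted Scubajet specs.

NM_TO_KM = 1.852   # one nautical mile in kilometres
W_PER_HP = 745.7   # one mechanical horsepower in watts

top_speed_kn = 6   # knots = nautical miles per hour
runtime_h = 1.5
power_w = 1500

range_nm = top_speed_kn * runtime_h   # 9.0 nautical miles
range_km = range_nm * NM_TO_KM        # ~16.7 km
power_hp = power_w / W_PER_HP         # ~2.0 hp

print(range_nm, round(range_km, 1), round(power_hp, 1))
```

Two horsepower puts the unit well below even a small outboard motor, which fits the campaign’s positioning as an assist rather than a replacement.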


Several adapters are available for connecting the Scubajet to boards, dinghies, or kayaks. Starboard, Simmer Style, SIC, JP-Australia, Sevylor, Mistral, RRD, Fanatic, Hobie, Red Paddle Co and Naish have all partnered with the company to verify that their current equipment can be fitted with a Scubajet. The campaign page says that testing is being done on diving equipment to develop adapters that will give divers some additional propulsion power.

The unit itself is 25 centimeters long, 80 centimeters wide and weighs 2.4 kilograms. The system was designed to fit into a backpack when not in use, but videos on the campaign page show the unit strapped to the side of a user’s backpack. An auto-shutoff stops the motor immediately if the user falls into the water. The Scubajet’s remote gives the ability to start, stop, and change the unit’s speed.

I’m viewing Scubajet with a healthy skepticism. There’s a bit of a culture difference in the specifications, but I’m more comfortable knowing an engine’s horsepower than a wattage or max velocity callout. The idea of a propulsion system that’s much more compact and practical than an outboard motor is great, and the adapter system looks elegant and seamless in all of the demonstration GIFs on the campaign page. The campaign will be funded on September 1 if its €150,000 goal is met, and units will then ship in December 2016.



NASA Asteroid-Capture Technology Passes Major Test

A demonstration of the ARM setup. Satellite, robotics and solar propulsion engine sold separately. (Image courtesy of NASA.)

NASA’s Asteroid Redirect Mission (ARM) has passed a major program review (Key Decision Point-B), paving the way for one of NASA’s most ambitious missions in recent history.

In the last five years, the idea of space mining and asteroid collection has transformed from science fiction into an emerging reality. Although there are a number of private enterprises on the hunt for the untold riches hidden among the stars, NASA has also shown an interest in developing the technology required to capture asteroids and securely and accurately maneuver them through space.

That’s precisely the aim of ARM.

According to NASA, ARM is a robotic mission that will “visit a near-Earth asteroid, collect a multi-ton boulder from its surface, and redirect it into a stable orbit around the moon.” Once the boulder is secured in its orbit around the moon, astronauts will explore the captured rock and return samples of the alien soil to Earth for study.

(Where these astronauts might come from hasn’t been made clear by NASA. Presumably, they’d be shuttled to lunar orbit from Earth or the ISS, but wouldn’t it be more fascinating if they were living in a Moon colony?)

Though ARM is still in the earliest stages of development (NASA hasn’t even selected which asteroid it will pluck off its path), the agency has also stated that a number of companion technologies will be tested during the ARM project. Among these technologies are a high-power, high-throughput solar electric propulsion system, sophisticated robotics for capturing an asteroid and “advanced autonomous high-speed proximity operations at a low-gravity planetary body” (translation: NASA is going to demonstrate that a tractor beam is really a thing).

As of this writing, NASA expects the robotic portion of the ARM project to launch in December 2021. Five years later, astronauts are slated to inspect the asteroid as it orbits the Moon.


China Launches the First Quantum Communication Satellite

(Image courtesy of Xinhua.)

China has launched the world’s first quantum communications satellite in a bid to create an impenetrable wall around its communications.

Over the last decade, the Chinese National Space Agency has made great strides in modernizing the country’s use of space, and this latest launch is further evidence that the world’s most populous nation has the technological capacity to create and deploy powerful communication technology.

Named Quantum Experiments at Space Scale (QUESS), the satellite will be used to secure communications between Beijing and Urumqi, the capital of Xinjiang province.

To accomplish secure communication, QUESS will use the spooky interaction known as quantum entanglement to secure each bit pinged off the satellite. The way that works is pretty mind bending.

How Quantum Entanglement Works:

Quantum entanglement dictates that two or more particles can be brought together by entangling their quantum states. Once entanglement occurs, none of the particles inside that entangled state can be distinguished from one another. The kicker is that if an outside observer wanted to verify or observe the entangled state, the very act of observation would cause the state to collapse.

In quantum communication, the fact that entangled particles collapse when observed makes them an ideal fit for an encryption key. If a would-be snoop wanted to try to crack a quantum encryption key, it would be theoretically impossible to do so undetected. Therefore, if a satellite possessed a quantum key communicator, a quantum entanglement emitter and a quantum entanglement source, and another station had the same, the two locations would be able to communicate without fear of interception.
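To make the collapse-on-observation idea concrete, here is a toy simulation in the spirit of the BB84 quantum key distribution protocol (a standard textbook scheme, not necessarily the exact protocol QUESS implements). An eavesdropper who measures each photon in a randomly chosen basis and resends it corrupts roughly a quarter of the bits the two stations keep, which is how the intrusion reveals itself:

```python
import random

def measure(bit, prep_basis, meas_basis, rng):
    """If bases match, the bit is read faithfully; otherwise the
    state 'collapses' and the outcome is random."""
    return bit if prep_basis == meas_basis else rng.randint(0, 1)

def bb84(n, eavesdrop, rng):
    """Return the error rate on the sifted key after n photons."""
    errors = kept = 0
    for _ in range(n):
        bit = rng.randint(0, 1)          # Alice's random bit...
        a_basis = rng.randint(0, 1)      # ...and random basis
        if eavesdrop:
            e_basis = rng.randint(0, 1)  # Eve measures in a random basis
            bit_in_flight = measure(bit, a_basis, e_basis, rng)
            send_basis = e_basis         # ...and resends in her own basis
        else:
            bit_in_flight, send_basis = bit, a_basis
        b_basis = rng.randint(0, 1)      # Bob measures in a random basis
        b_bit = measure(bit_in_flight, send_basis, b_basis, rng)
        if a_basis == b_basis:           # bases compared over public channel
            kept += 1
            errors += (b_bit != bit)
    return errors / kept

rng = random.Random(1)
print(bb84(20000, eavesdrop=False, rng=rng))  # 0.0: clean channel
print(bb84(20000, eavesdrop=True, rng=rng))   # ~0.25: Eve is detected
```

Half the time Eve guesses the wrong basis, and half of those disturbed bits flip, so about 25 percent of the sifted key disagrees whenever someone is listening.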

But is this technology really ready for prime time? Well, not just yet, but with the launch of QUESS, secure quantum communications may have taken a big step forward.

Securing Satellite Communications:

According to the Xinhua news agency, the official press agency of the People’s Republic of China, “In its two-year mission, QUESS is designed to establish ‘hack-proof’ quantum communications by transmitting uncrackable keys from space to the ground.”

Xinhua continued, “Quantum communication boasts ultra-high security as a quantum photon can neither be separated nor duplicated… It is hence impossible to wiretap, intercept or crack the information transmitted through it.”

With the growing intensity and threat of cyberwarfare ratcheting up year after year, it makes sense that China, and likely other nations, have taken steps to protect their most secure communications. While China has stated that QUESS is an entirely peaceful piece of technology, there is one unequivocal fact that can be gleaned from the launch: China has become a major player in the quantum communications and space game.


Understanding Augmented Reality Headsets

Augmented reality can be a bit more promising than virtual reality for commercial engineering applications given its essential difference: it lets you layer digital information directly on top of physical reality, the “data.” It’s important to remember that the nascent augmented reality market has not proven itself to be a dependable commodity for engineers. For media and entertainment, it’s impossible not to notice the success and recognition of Pokémon Go, the augmented reality game from Nintendo. Engineering applications are in short supply, but they do exist.

In this post, we’ll cover a cross-section of augmented reality headsets and focus on the ones that have the most promise for engineering applications, such as training, maintenance, visualization and collaboration.

Differentiating Augmented Reality Products:

Augmented reality can be experienced on mobile devices like tablets and smartphones. There are also augmented reality headsets known as head-mounted displays (HMDs), eyeglasses, visors, helmets and even a set of augmented reality contact lenses.

Truly Immersive Augmented Reality Takes a Big Headset:

One of the most interesting problems with making immersive augmented reality is the amount of physical real estate it requires from the user. The amount of optics required increases directly as the desired display size and field of view increase. With a compact wearable, the widest field of view (FoV) you can achieve is around 20 to 30 degrees: Google Glass is something like 13 degrees, and the Epson Moverio BT-2000 gets around 23 degrees.

That is basically why bigger headsets yield a far more immersive experience.
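The geometry behind that trade-off can be sketched in a few lines: the apparent horizontal field of view of a flat (virtual) display of width w at distance d is 2·atan(w/2d). The example dimensions below are illustrative, not vendor specifications:

```python
import math

def fov_deg(width_m, distance_m):
    """Horizontal field of view, in degrees, of a flat display
    of the given width viewed at the given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# A small virtual screen perceived roughly at arm's length...
print(round(fov_deg(0.07, 0.3), 1))   # ~13 degrees: Google Glass territory
# ...versus a large virtual screen perceived a metre away.
print(round(fov_deg(1.7, 1.0), 1))    # ~80 degrees: headset territory
```

Doubling the apparent screen size at the same distance roughly doubles the optics the device must carry, which is why wide-FoV devices end up helmet-sized.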

Augmented Reality Terminology:

Many of the terms, such as FoV, latency, frame rate and refresh rate, are similar to those you need to familiarize yourself with in order to understand virtual reality, which you can read about in a previous post I wrote called “Understanding Virtual Reality Headsets.”

Virtual retinal display (VRD) technology, which is particular to AR, beams a raster projection directly onto a user’s retinas. The result is similar to seeing a display directly before your eyes, much like a computer or television screen. The effectiveness of VRDs has greatly increased with the advancement of LED technology, allowing users to see them even during daylight hours.

Summary of Augmented Reality Applications:

Augmented reality is being used in a number of novel ways, across a variety of fields and disciplines, including archaeology, construction, medicine, emergency management, industrial design and the military.

The first three headsets featured here have the most potential uses for engineers. Afterwards, I’ll briefly explore a cross-section of augmented reality headsets and glasses with actual and possible industrial and enterprise applications.

1) DAQRI: The Smart and Safe Helmet:

The DAQRI Smart Helmet (DSH) is a combination safety helmet and augmented reality headset that overlays virtual instructions, safety information, training and visual mapping over specific real-world information. Workers in the oil and gas, automation and manufacturing sectors who have to understand or follow complicated instructions to perform complex processes can look through the DSH and see digital information overlaid on a variety of different contexts, whether it is a Siemens controller, a scanner or quality control equipment for metrology purposes.

The DSH overlays digital instructions over equipment in real time and adjusts to the movements of the worker. (Image courtesy of DAQRI.)

The helmet comes with its own battery and docking station and weighs about as much as a typical industrial hardhat. The DSH varies widely in price, fetching anywhere from $5,000 to $15,000, because its features need to be custom built.

Powered by a sixth-generation Intel Core m7 processor plus RealSense scanning technology, the DSH may be the first functional and useful HMD that uses augmented reality to help human workers perform difficult tasks.

The DSH’s face shield and injection-molded plastic helmet component are ANSI-compliant. The inner portion of the helmet’s shell is a mix of cast aluminum and carbon fiber composite.

Thermal PoV through the DSH. (Image courtesy of DAQRI.)

DAQRI’s multiple cameras work together to make this the first fully industrial augmented reality headset. It features a 13-megapixel HD camera to capture photos and videos, track objects and recognize 2D targets and colors. Intel’s RealSense technology has two infrared cameras built in, and DAQRI integrates them with an infrared laser projector that can sense depth by measuring deflected infrared light. A low-resolution video camera is integrated with an industrial-grade inertial measurement unit (IMU), which allows the helmet to compute its relative position in space in real time via a combination of gyroscopes and accelerometers. Another high-quality IMU is available for additional applications. For sound, there are four microphones, power and volume buttons and an output jack for headphones.

Workers wearing the DSH can see augmented instructions that change in accordance with their actual environment. A worker can look at a machine with 100 readouts, and the DSH will draw their attention to, for example, a pressure gauge that is reading too high or too low. The DSH’s infrared cameras can constantly monitor devices by overlaying normal thermal information and current thermal information to make distinctions and judgments on the fly. Workers equipped with the DSH can visually scan for out-of-tolerance thermal anomalies that could put an operation in peril.

The DSH’s face shield and the hard helmet itself are ANSI compliant. The outer shell is injection-molded plastic. (Image courtesy of DAQRI.)

The DSH was used in a case study with Hyperloop that illustrates its collaborative power when used between workers of a large and widely dispersed manufacturing operation. A novice operator was using a robotic welder for precise spot welding. A more experienced operator could tune in to the networked DSH of their less-experienced counterpart, assess what they were doing and immediately relay correct instructions.

This means that a company could purchase a custom-built series of DSHs, scale up operations with less-experienced (less-expensive) workers and have a few experts remotely monitor and guide all of them through production.

According to DAQRI, the DSH will be available for purchase by its top-tier customers in Q1 2016.

2) Metavision’s Meta 2:

The Meta 2 by Metavision has a 2560×1440 display. (Image courtesy of Metavision.)

The Meta 2 is an augmented reality headset from Metavision with several features that are promising for potential industrial uses, such as a wide FoV. A small FoV is never desirable, but in an augmented reality headset it is not prone to the same distraction as in virtual reality. In virtual reality, whatever isn’t in the FoV (which contains 3D models of varying polygonal complexity) is surrounded by pitch-black darkness. In augmented reality, a low FoV equals a little translucent digital window with 3D content surrounded by the real world of physical data that one would see without a headset.

The FoV on the original Meta was 25 to 35 degrees, which is small compared to the average virtual reality FoV. The Meta 2 has a 90-degree FoV, which is a tremendous breakthrough, especially when considering industrial applications like training, maintenance or manufacturing. There is a tradeoff that allows this wide FoV: like its predecessor, the Meta 2 is tethered. Connection to a workstation limits all sorts of training applications and restricts use on a factory floor for assembly or maintenance. If you compare the Meta 2 to an augmented reality headset like the Microsoft HoloLens, which is untethered, you realize immediately that the Meta 2 is at a disadvantage for practical uses. But this has to be interpreted as a long-term design strategy on the part of both Metavision and Microsoft. Microsoft believes it can advance its AR platform by putting an untethered hardware device in developers’ hands now, while Metavision is likely to refine the technology and untether it before a consumer version becomes popular. It is important to remember that both the HoloLens and the Meta 2 are basically developer kits rather than full-fledged consumer products.

Meta takes full advantage of the continuing miniaturization and democratization of inexpensive sensors, paired with a high-definition camera, to track your hands in the context of the digital and physical environment seen through the headset. The hand tracking of the Meta 2 is not as sophisticated as Leap Motion’s Orion controllers, but the notion of separate hardware for hand tracking may be going the way of the dodo in favor of eye-tracking technology, though that is debatable. Preorders of the Meta 2 developer kit are available today for $949, and Metavision states the devices will ship in Q3 2016.

It’s understood at this point that the possible killer engineering or industrial app for augmented reality headsets like the powerful Meta 2 is still to come.

3) Microsoft HoloLens:

Microsoft HoloLens is an augmented reality headset that was developed under the code name Project Baraboo. It is also known as a “mixed reality” headset, or holographic computer. “Mixed reality” is a term that is gaining momentum in the press and is sometimes used to describe headsets that can switch from virtual reality mode to augmented reality mode. Magic Leap, the mysterious startup with no products but major investments led by Alibaba and Google, has specifically pushed for this linguistic distinction.

Microsoft HoloLens costs $3,000 and is primarily for developers at this time. The advantage it has over the Meta 2 is that it is untethered, allowing for a relatively large degree of freedom. (Image courtesy of Microsoft.)

Semantics aside, the HoloLens descends from the motion detection and scanning technology of the Microsoft Kinect, which was released in 2010. Microsoft uses the term hologram to describe the digital information that is overlaid on the physical world (which you can see through the visor). The hope is that headset holographic computing will eventually replace the screens (laptop, PC, mobile devices) we use night and day today.

The HoloLens features an accelerometer, magnetometer, gyroscope, four depth-sensing cameras, a light sensor, four microphones and a 2-megapixel camera. Aside from the typical GPU and CPU found in nearly all computing devices, the HoloLens also has something called a Holographic Processing Unit, or HPU. The HPU is a sort of “grand central terminal” for all of the input from the various sensors.

Microsoft is also building the Windows 95 of augmented reality operating systems, called Windows Holographic, enabling manufacturers to focus on developing the hardware rather than worrying about the software. In theory, this will help the development of augmented reality devices reach a tipping point with consumers and help augmented reality go mainstream.

4) A cross-section of alternative augmented reality headsets: There are dozens of augmented reality headsets available today, and this random cross-section is meant to highlight a few similarities and differences.

Google Glass: First we have Google Glass, which was discontinued after barely a year on the market. Google Glass 2.0 is currently in development, and Google is now showcasing enterprise and industrial applications for the headset. It has a heads-up display, a microphone, a CPU, a battery, a GPS, speakers and a projector that overlays digital information onto a user’s view by beaming it through a visible prism that focuses the digital information right onto the retina.

Google is concentrating on enterprise use cases, like Boeing using the headsets for wire harness assembly. The headsets use voice commands and a side panel a la Geordi La Forge from Star Trek, but they won’t help you with your vision, unfortunately.

R-7 Smart Glasses: The form factor of these glasses from Osterhout Design Group separates them from the pack of giant, boxy augmented reality headsets like the Microsoft HoloLens and the Meta 2. They just kind of look like awkward, oversized sunglasses.

To control your virtual environment on the R-7 smart glasses, you can use a trackpad on the glasses themselves or a paired controller. (Image courtesy of Osterhout Design Group.)
They run a custom version of Android KitKat called ReticleOS, so you can run Android apps and load movies.

The R-7s are light as well, weighing about 2.5 lbs, which is about a pound less than the HoloLens.

Vuzix M300 Smart Glasses: This headset seems like a carbon copy of Google Glass, except that it has slightly better resolution and is also compatible with iOS. The 64 GB of internal storage isn’t all that fascinating, but its partnership with Ubimax and use in the logistics industry are worth mentioning. DHL utilizes xPick on the earlier version of the smart glasses (the Vuzix M100 Smart Glasses).

Ubimax produces the Enterprise Wearable Computing Suite, a group of industrial augmented reality applications that, like xPick, target specific workflows: xMake for manufacturing, xAssist for remote assistance and xInspect for maintenance.

Moverio Pro BT-2000: Epson’s first edition of this augmented reality headset, the BT-100, premiered before Google Glass. This latest edition specifically targets enterprise customers for remote viewing, with its 5-megapixel camera, 3D mapping and gesture recognition capabilities.

Name            Google Glass    R-7 Smart Glasses        Vuzix M300     Moverio Pro BT-2000
Company         Google          Osterhout Design Group   Vuzix          Epson
Shipping        Discontinued    Yes                      Q3 2016        Yes
FoV (degrees)   15              30                       20             23
Resolution      640 x 360       1280 x 720               960 x 540      960 x 540
Platform        Android         Android                  Android/iOS    Android
Cost            USD$1,500       USD$2,750                USD$1,499      USD$2,999

5) Magic Leap: News of this unicorn startup comes wrapped in mysterious claims of “light-field displays” and “photonic chips” that threaten to upend everything we know about consumer-oriented headset kits like the Meta 2. The startup is named Magic Leap, and it has raised about $1.5 billion in funding. The funding was led by Google (which some speculate was a response to Facebook’s $2 billion purchase of Oculus) on the strength of supposedly ground-breaking technology in which a special light apparatus beams holographic images right onto your eyes.

Light-field displays could reduce the bulky and goofy industrial design that characterizes nearly all augmented reality headsets available. (Image courtesy of Magic Leap.)

Magic Leap comes last in this overview because it represents the promise, potential and global interest in the future of augmented reality as a new computing platform. The company has not released a product yet but claims it will revolutionize the field of “mixed reality,” or augmented reality, or whatever you prefer to call it.

The potential uses for engineers are there, particularly in the DSH, but a standardized platform that would be the equivalent of the iPhone for augmented reality remains elusive.


Is NVIDIA’s Latest Graphics Board Too Good for You?

The new Quadro P6000 is the quickest NVIDIA graphics board ever. (Image courtesy of NVIDIA.)

NVIDIA is moving fast. The new Pascal architecture has begun shipping: last week, the new Titan X based on the Pascal architecture was released; this week, the top-of-the-line Quadro boards were launched.

This is a faster introduction across product lines than NVIDIA managed for its last two major GPU releases, the Maxwell and Kepler architectures.

Alongside the Quadro P6000, NVIDIA announced the Quadro P5000. Here are the highlights:

The Quadro P6000 and P5000 are based on NVIDIA’s GP102 graphics processor

The Quadro P6000 has 24 GB of memory and 3840 compute unified device architecture (CUDA) cores, nearly 300 more cores than the Titan X

For virtual reality (VR) and 3D stereo software, simultaneous multi-projection allows left- and right-eye projections to be created in a single geometry pass

Loaded with GDDR5X memory, which delivers twice the bandwidth of the GDDR5 found on the previous-generation board (the Quadro M6000) and is critical for GPU computing

Unified virtual memory, on Linux, will accelerate GPU-computing issues with very large data sets

Dynamic load balancing of visuals and computing applications delivers better GPU-computing and graphics mixed-mode operations

Designed to accelerate GPU-based ray tracing, video rendering and high-end color grading

Boasts 8K display resolution with support for DisplayPort 1.4
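The “twice the bandwidth” claim follows from the usual formula: per-pin data rate times bus width. The per-pin rates below are illustrative design targets for GDDR5 and GDDR5X, not NVIDIA’s published board specifications:

```python
# Peak memory bandwidth = per-pin data rate (Gbit/s) x bus width (bits) / 8.

def bandwidth_gb_s(rate_gbps_per_pin, bus_width_bits):
    """Peak memory bandwidth in GB/s for the given memory bus."""
    return rate_gbps_per_pin * bus_width_bits / 8

gddr5  = bandwidth_gb_s(5.0, 384)    # 240 GB/s on a 384-bit bus
gddr5x = bandwidth_gb_s(10.0, 384)   # 480 GB/s: double the per-pin rate
print(gddr5, gddr5x, gddr5x / gddr5)
```

At the same 384-bit bus width, doubling the per-pin rate doubles the bandwidth, which is what matters for bandwidth-bound GPU-compute workloads.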

The basic characteristics of the two new Quadros are laid out below.

Feature           Quadro P5000             Quadro P6000
GPU               Pascal, GP102            Pascal, GP102
CUDA Cores        2560                     3840
Memory            16 GB GDDR5X             24 GB GDDR5X
Display Outputs   4x DP 1.4 & 1x DVI       4x DP 1.4 & 1x DVI
Display Support   4 x 4K at 120 Hz         4 x 4K at 120 Hz
                  4 x 5K at 60 Hz          4 x 5K at 60 Hz
Available         October 2016             October 2016
Pricing           Not Available            Not Available

It is worth pointing out that the Quadro P6000 is absolutely the fastest, most powerful graphics board in the NVIDIA family. Not only does it have 24 GB of GDDR5X memory, twice the memory of the Titan X, it also has 3840 CUDA cores compared to the Titan X’s 3584 CUDA cores. There is no quicker GPU in the NVIDIA lineup.

Another key point for professionals is that the unified memory architecture of the Quadro P6000 and P5000 is ideal for compute applications with huge data sets running on Linux. The unified memory architecture allows tasks of unlimited size to be rendered and calculated. Note that unified memory is available only on Linux; it does not exist under Windows.

Virtual reality is extremely popular in the consumer space. Professionals, however, have been wrestling with VR, stereoscopic 3D displays and the relevant technical complications for over two decades. The new Quadro products use a technique called single-pass multi-projection, which allows the Quadro P5000 and P6000 to process the 3D scene once and generate two perspective views: one for the right eye and one for the left eye. This doubles the performance for stereo projections which, in turn, doubles your budget for complexity and fidelity in the VR or 3D stereo image.

The Pascal architecture can dynamically balance graphics and computing work on the GPU. This enhances the Quadro’s ability to work interactively with realistic, GPU-computed ray tracing in the application viewport. Imagine Iray’s realistic rendering in a Maya viewport for a quicker lighting workflow in visual effects (VFX) scenes.

The professional world is moving beyond 4K. The new Quadro GPUs support DisplayPort 1.4 and 8K resolutions. They also support four simultaneous 5K displays. Today that’s clearly a boon to professionals in film and special effects, and it will be for engineers, too. It isn’t hard to imagine 5K displays, presently running USD$1,600, replacing 4K displays at the sub-USD$800 price point in the future.

NVIDIA’s recommendations for Quadro models give applications a substantial amount of headroom. (Image courtesy of NVIDIA.)

Users, especially those with budgets to consider, may elect to look at graphics hardware a level below NVIDIA’s recommendations. NVIDIA seems to have taken care to spec a card for the power user of each software application, not wanting such a user to be hardware constrained. But let’s consider a typical user, say for 3ds Max, who does 3D modeling but rarely, if ever, uses the Iray plugin for GPU ray tracing and rendering. For that user, the Quadro K1200 might work well and be preferable to the recommended Quadro M4000.

Users should also determine whether their particular application can function with, or even take advantage of, the NVIDIA hardware being recommended. For instance, ANSYS Fluent’s results are more accurate using double-precision floating point operations, but the GP102 GPU in the Quadro P6000 recommended above is optimized for single-precision floating point operations. Heavy users of simulation solvers should be steered toward the new Tesla P100 GPUs, which are optimized for double-precision floating point computing and meant for graphics cards in HPC nodes in data centers rather than in desktop workstations.
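The single- versus double-precision distinction is easy to demonstrate. A float32 value carries a 24-bit significand, so it cannot even distinguish 2^24 from 2^24 + 1, while a double (53-bit significand) can. The sketch below emulates single precision with the standard `ctypes` module:

```python
import ctypes

def f32(x):
    """Round a Python float (a C double) to single precision."""
    return ctypes.c_float(x).value

# A 24-bit significand means float32 runs out of exact integers at 2**24:
big = 2.0 ** 24                # 16,777,216
print(big + 1.0 == big)        # False: a double (53-bit) still resolves the +1
print(f32(big + 1.0) == big)   # True: single precision has lost the +1
```

Solvers that accumulate millions of such small contributions into large running sums are exactly where this loss of resolution turns into visible error, which is why double-precision hardware matters for them.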

A Final Word:

NVIDIA is upgrading the Quadro family with the Pascal architecture faster than any architecture change that I can remember.

The Pascal architecture is faster than any previous architecture.

The Quadro P6000 surpasses the raw performance of its consumer/gamer counterpart, the USD$1,200 Titan X, with nearly 300 extra CUDA cores and nearly double the graphics memory. In addition, the new architecture, coupled with NVIDIA’s excellent GPU-computing support for ray tracing and its power to accelerate VR, makes the Quadro P6000 and P5000 worth considering as professional workstation graphics for those doing rendering or creating/viewing VR content.

NVIDIA does not expect to ship the Quadro P6000 and P5000 until October and has not released any pricing. Keep in mind that the NVIDIA Quadro M6000, which the P6000 presumably replaces, was selling for as low as USD$4,000.


Engineers Target Cancerous Tumors with Nanobots

(Image courtesy of the Institute of Biomedical Engineering, Polytechnique Montréal.)

Engineering researchers have developed new nanorobotic agents capable of navigating through the bloodstream to administer a drug with precision by specifically targeting the active cancerous cells of tumours.

This way of injecting medication ensures optimal targeting of a tumour and avoids jeopardizing the integrity of organs and surrounding healthy tissue. As a result, drug dosages that are highly toxic for the human organism could be significantly reduced.

This breakthrough is the result of research done on mice, which were successfully administered nanorobotic agents into colorectal tumours.

“These legions of nanorobotic agents were actually composed of more than 100 million flagellated bacteria – and therefore self-propelled – and loaded with drugs that moved by taking the most direct path between the drug’s injection point and the area of the body to cure,” explains Sylvain Martel, director of the Polytechnique Montréal Nanorobotics Laboratory, who heads the research team’s work. “The drug’s propelling force was enough to travel efficiently and enter deep inside the tumours.”

When they enter a tumour, the nanorobotic agents can autonomously detect the oxygen-depleted tumour areas, known as hypoxic zones, and deliver the drug to them.

This hypoxic zone is created by the substantial consumption of oxygen by rapidly proliferative tumour cells. Hypoxic zones are known to be resistant to most therapies, including radiotherapy.

Gaining access to tumours by taking paths as minute as a red blood cell and crossing complex physiological micro-environments does not come without challenges. So Martel and his team used nanotechnology to do it.

Bacteria with a Compass:

To move around, the bacteria used by Martel’s team rely on two natural systems. A kind of compass, created by the synthesis of a chain of magnetic nanoparticles, allows them to move in the direction of a magnetic field, while a sensor measuring oxygen concentration enables them to reach and remain in the tumour’s active regions.

By harnessing these two transportation systems and by exposing the bacteria to a computer-controlled magnetic field, researchers showed that these bacteria could perfectly replicate artificial nanorobots of the future designed for this kind of task.
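That dual guidance scheme can be caricatured in a few lines of code. In this toy model (every number and the oxygen profile below are invented for illustration, not drawn from the study), an agent steps along the magnetic field direction and halts once its local oxygen reading falls to a hypoxic threshold:

```python
# Toy model of magneto-aerotactic guidance (illustrative values only):
# the agent moves along a magnetic field vector and halts when the
# local oxygen level drops below a "hypoxic" threshold.

def oxygen_level(x: float) -> float:
    """Invented oxygen profile: falls off toward the tumour at x = 10."""
    return max(0.0, 1.0 - x / 10.0)

def navigate(start: float, field_dir: float, hypoxia_threshold: float = 0.2,
             step: float = 0.5, max_steps: int = 100) -> float:
    """Step along the field until oxygen falls below the threshold."""
    x = start
    for _ in range(max_steps):
        if oxygen_level(x) <= hypoxia_threshold:
            break  # hypoxic zone reached: stay and release the drug
        x += field_dir * step
    return x

final_pos = navigate(start=0.0, field_dir=1.0)
print(final_pos, oxygen_level(final_pos))
```

The real bacteria, of course, do both things in parallel and in three dimensions; the point is only that a directional field plus a local oxygen sensor is enough to steer an agent into a hypoxic region.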

“This innovative use of nanotransporters will have an impact not only on creating more advanced engineering concepts and original intervention methods, but it also throws the door wide open to the synthesis of new vehicles for therapeutic, imaging and diagnostic agents, ” Martel added.

“Chemotherapy, which is so toxic for the entire human body, could make use of these natural nanorobots to move drugs directly to the targeted area, eliminating the harmful side effects while also boosting therapeutic performance,” Martel concluded.

The research is published under the title “Magneto-aerotactic bacteria deliver drug-containing nanoliposomes to tumour hypoxic regions” in the journal Nature Nanotechnology.

Current Electric Vehicles Could Replace 90 Percent of Vehicles on the Road Today

Nighttime image of New York City, with red showing high population density. (Image courtesy of Doc Searls/MIT.)

A recent study has found that the wholesale replacement of conventional vehicles with electric vehicles (EVs) is possible today and could play a significant role in meeting climate change mitigation goals.

“Approximately 90 percent of the personal vehicles on the road on a daily basis could be replaced by a low-cost electric vehicle on the market today, even if the cars can only charge overnight,” said Jessika Trancik, a professor in energy studies at MIT and the lead researcher. “[This] would more than satisfy near-term U.S. climate targets for personal vehicle travel.”

Overall, when accounting for the emissions from the power plants that supply the electricity today, this would result in approximately a 30 percent decrease in emissions from transportation. Deeper emissions cuts would be realized if power plants decarbonize over time.

Combining Two Huge Datasets:

The complete project took four years, including developing a method of integrating two massive datasets: one highly detailed set of second-by-second driving behavior based on GPS data, and a broader, more comprehensive set of national data based on travel surveys. Together, the two datasets encompass millions of trips made by drivers all over the country.

The detailed GPS data were collected by state agencies in Texas, Georgia, and California, using data loggers installed in cars to assess statewide driving patterns. The more comprehensive, but less detailed, national data came from a national household transportation survey, which studied households across the country to learn about how and where people actually do their driving.

The researchers needed to understand “the distances and timing of trips, the different driving behaviors, and the ambient weather conditions, ” said Zachary Needell, a graduate student who collaborated on the research.

By working out formulas to integrate the different sets of information and thereby track one-second-resolution drive cycles, the researchers were able to demonstrate that the daily energy requirements of some 90 percent of personal vehicles on the road in the U.S. could be met by today’s EVs, with their current ranges.

The overall cost to vehicle owners – including both purchase and operating costs – would be no higher than that of conventional internal-combustion vehicles. The team considered once-daily charging, at home or at work, in order to study the adoption potential given today’s charging infrastructure.
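The study’s core comparison, daily vehicle energy needs against battery capacity under once-daily charging, can be sketched in a few lines. The daily kWh figures and the 24 kWh battery below are invented placeholders, not the study’s data:

```python
# Fraction of vehicle-days coverable by a single overnight charge.
# Daily energy figures here are invented placeholders, not study data.

def coverable_fraction(daily_kwh: list[float], battery_kwh: float) -> float:
    """Share of days whose energy need fits within one full charge."""
    covered = sum(1 for need in daily_kwh if need <= battery_kwh)
    return covered / len(daily_kwh)

# In the study, one-second drive cycles were integrated into daily
# totals; here we jump straight to hypothetical per-day kWh figures.
sample_days = [4.2, 7.9, 11.5, 6.3, 30.0, 9.1, 5.5, 18.0, 8.8, 45.0]
print(coverable_fraction(sample_days, battery_kwh=24.0))  # → 0.8
```

The actual analysis ran this comparison over millions of vehicle-days, with energy needs derived from second-by-second driving behavior and ambient weather.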

What’s more, such a large-scale replacement would be sufficient to meet the nation’s stated near-term emissions-reduction targets for personal vehicles’ share of the transportation sector – a sector that accounts for about a third of the nation’s overall greenhouse gas emissions, with a majority of those emissions coming from privately owned, light-duty vehicles.

Settling the EV Debate:

While EVs have numerous devotees, they also have a lot of critics, who cite range anxiety as a barrier to transportation electrification. “This is an issue where common sense can lead to strongly opposing views,” Trancik said. “Many seem to strongly believe that the potential is small, and the rest believe that it is large.”

“Developing the concepts and mathematical models necessary for a testable, quantitative evaluation is helpful in these circumstances, where so much is at stake,” she added.

Those who believe the potential is small cite the premium prices of many EVs on the market today, such as the highly rated but expensive Tesla models, and the still-limited distance that lower-cost EVs can travel on a single charge, compared to the range of a gasoline car on one tank of gas.

The lack of available charging infrastructure in many places, and the much higher amount of time required to recharge a car compared to simply filling a gas tank are also cited as drawbacks.

(Image courtesy of MIT.)

Nevertheless, the team found that the vast majority of cars on the road consume no more energy in a day than the battery energy capacity of affordable EVs available today. These figures represent a situation in which people would do most of their recharging overnight at home, or during the day at work, so for such trips the lack of charging infrastructure was not really a concern.

Vehicles such as the Ford Focus Electric or the Nissan Leaf would be adequate to meet the needs of the vast majority of U. S. drivers. Although their sticker prices are still higher than those of conventional cars, their overall lifetime costs end up being comparable due to lower maintenance and operating costs.

The Electric Vehicle Range Barrier:

The study cautions that for EV ownership to rise to high levels, the needs of drivers have to be met on all days. For days on which energy consumption is higher, such as vacations, or days when an intensive need for heating or cooling would sharply curb the EV’s range, driving needs could be met by using a different car (in a two-car household), by renting, or by using a car-sharing service.

The study highlights the important role that car sharing of internal-combustion vehicles could play in driving electrification. Car sharing would have to be very convenient for this to work, Trancik said, and requires further business-model innovation.

Additionally, the days on which alternatives are needed should be known to drivers in advance – information that the team’s model, “TripEnergy,” is able to provide.

Even as batteries improve, there will continue to be a small number of high-energy days that exceed the range provided by electric vehicles. For these days, other powertrain technologies will undoubtedly be needed.

The study helps policy-makers to quantify the “returns” on improving batteries by investing in research, for instance, and the gap that may have to be filled by other types of cars, such as those fueled by low-emissions hydrogen or biofuels, to reach very low emissions levels for the transportation sector.

Another important finding from the study was that the potential for shifting to EVs is fairly uniform across different parts of the country. “The adoption potential of electric vehicles is remarkably similar across cities,” Trancik said, “from dense urban areas like New York, to sprawling cities like Houston. This goes against the view that electric vehicles – at least affordable ones, which have limited range – only really work in dense urban centers.”

British Technology Initiative Aims to Develop New Spy Gadgets

(Image courtesy of MoD/Animal Dynamics.)

I’ve never been the biggest James Bond fan. Sure, I’ve seen the Connery classics, watched the Brosnan era flicks and even seen a few of the newest movies, but Bond as a whole has never grabbed me.

Before you go off believing that I think the Bond films are pish posh, I can say that I’ve usually loved the scenes with Bond and Q, where a series of technological McGuffins are introduced as Bond’s new arsenal.

Well, in a recent statement, Britain’s Ministry of Defense (MoD) has announced a new £800m technological initiative that seems ripped straight from a James Bond film reel.

According to the MoD proposal, this new initiative will be led by an Innovation and Research Insights Unit (IRIS) that will forecast emerging technological trends and assess what effects those developments could have on Britain’s security.

With a general notion of the future mapped out, the IRIS team will engage “the best and brightest individuals and companies,” asking them to pitch technological answers to IRIS’s forecasts in a “Dragon’s Den-style panel.”

IRIS, Dragon’s Den. Does it get any more preposterously cloak-and-dagger than that?

If a project is accepted, the IRIS team will shepherd the project through to completion at a separate security and defense accelerator.

“This new approach will help keep Britain safe while supporting our economy, with our brightest brains keeping us ahead of our adversaries,” said Defence Secretary Michael Fallon.

(Image courtesy of MoD/University of Birmingham.)

Even as the IRIS-led initiative gets off the ground, the MoD has designated some of its newest technologies in development as harbingers of what may come from the IRIS initiative.

First off, the MoD is developing a drone named Skeeter. Unlike other drones, Skeeter won’t mimic a plane or a helicopter. Instead, the micro-machine will take its flight cues from the dragonfly. Equipped with four wings, Skeeter will be nimble and small, making it ideal for stealthy intelligence gathering.

Second on the MoD’s upcoming tech list is a quantum gravimeter. Developed in cooperation with the University of Birmingham, the portable machine will use quantum technology and a pair of gravimeters to accurately map tunnel networks and underground bunkers from the surface.

Not only would this technology make it easier for the military to detect hidden enemy lairs (a seriously James Bond problem), it could also come in handy during natural disasters, where it could be deployed to find survivors trapped amidst the rubble.
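For a rough sense of the signal such an instrument must detect, here is a back-of-the-envelope sketch. The cavity size, depth, and rock density are invented, and treating the missing rock as a negative point mass is a deliberate simplification:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def cavity_anomaly_ugal(radius_m: float, depth_m: float,
                        rock_density: float = 2500.0) -> float:
    """Approximate surface gravity deficit (in microGal) directly above
    a spherical cavity, modeled as a negative point mass."""
    missing_mass = rock_density * (4.0 / 3.0) * math.pi * radius_m ** 3
    delta_g = G * missing_mass / depth_m ** 2  # m/s^2
    return delta_g / 1e-8                      # 1 microGal = 1e-8 m/s^2

# Invented example: a 2 m radius void 10 m below the surface.
print(round(cavity_anomaly_ugal(2.0, 10.0), 1))  # → 5.6 (microGal)
```

A signal of a few microGal is roughly a billionth of Earth’s surface gravity, which is why quantum (atom-interferometry) gravimeters, rather than conventional spring-based ones, are attractive for this job.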

The MoD has also ominously suggested that it is building “laser weapons to target and defeat aerial threats.”

It really doesn’t get more Bondian than that.

Fighting Fire with AI

(Image courtesy of NASA.)

Firefighting, arguably one of the world’s most dangerous professions, could become much safer next year thanks to a newly developed AI program.

When firefighters enter a building, they rely on their senses and training to find trapped civilians and deliver them from danger. While drilled behaviors and instinct are essential tools for every firefighter, they can’t compare to the insight that can be gleaned from big data.

Over the last nine months, the US Department of Homeland Security and NASA’s Jet Propulsion Laboratory have been hard at work developing an artificial intelligence program that can leverage big data to keep firefighters safe.

Named the Assistant for Understanding Data through Reasoning, Extraction, and sYnthesis (AUDREY), this algorithm can track firefighters as they move through a structure using sensors embedded in the first responders’ uniforms.

“As a firefighter moves through an environment, AUDREY could send alerts through a mobile device or head-mounted display, ” said Mark James of JPL, lead scientist for the AUDREY project.

Armed with a suite of sensors that can detect heat in adjacent rooms and concentrations of dangerous gases, plus detailed maps of a structure, firefighters would be able to move through a building in the safest, most efficient manner, making it possible to save more lives while protecting their own.
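To make the sensor-alert idea concrete, here is a minimal rule-based sketch. The sensor names and thresholds are invented for illustration; AUDREY’s actual cloud-based reasoning goes far beyond fixed thresholds:

```python
# Minimal sketch of sensor-threshold alerting (invented thresholds;
# AUDREY itself uses cloud-based machine learning, not fixed rules).

THRESHOLDS = {
    "adjacent_room_temp_c": 300.0,  # dangerous heat in the next room
    "co_ppm": 1200.0,               # carbon monoxide concentration
}

def alerts_for(readings: dict[str, float]) -> list[str]:
    """Return warning messages for any reading above its threshold."""
    return [
        f"WARNING: {sensor} at {value} exceeds {THRESHOLDS[sensor]}"
        for sensor, value in readings.items()
        if sensor in THRESHOLDS and value > THRESHOLDS[sensor]
    ]

sample = {"adjacent_room_temp_c": 450.0, "co_ppm": 800.0}
for message in alerts_for(sample):
    print(message)  # only the temperature reading trips an alert
```

In the real system such messages would be pushed to a mobile device or head-mounted display, as Mark James describes below.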

But sensors alone aren’t enough to make AUDREY work. The brains of the AUDREY system run in the cloud, leveraging its computing power and the system’s ability to learn to make predictions about what first responders will need in the immediate future.

Though it’s only a few months old, the AUDREY system has already been tested in a virtual demonstration at the Public Safety Broadband Stakeholder Meeting held in San Diego. During the test, AUDREY was given data from a number of different sensors and was expected to give recommendations to several phantom first responders via mobile device. While JPL didn’t explicitly state that the test went well, Edward Chow, manager of JPL’s Civil Program Office, did say that within a year AUDREY will begin field demonstrations.

Berkeley Roboticist Learns Lessons About Humanity from Robots

Ken Goldberg starts his talk with a big idea: robots can inspire us to be better human beings. In his TED Talk, 4 lessons from robots about being human, Goldberg examines what happens to humanity as robots become more woven into modern society. Four different projects are discussed, along with the life lessons that Goldberg has pulled from the robots.

In 1993, Goldberg was introduced to the new internet by his students and struck with the idea that anyone on the planet could use the technology to control the robots in his lab. The Telegarden was a robot with a camera attached that could be controlled by remote users to take a tour of a large garden table. Users could help water the garden, and eventually be given seeds to plant in the garden.


A random question from a student about whether or not the robot was real led Goldberg down a path of philosophical discovery. He coined a new term, telepistemology – the study of knowledge at a distance. This project and the questioning of the project’s reality taught Goldberg to always question assumptions, both society’s and his own.

The second project discussed was born out of the robot garden project and ideas about the robot interacting with people. The group created a tele-actor, a person outfitted with wires, microphones and cameras who could act as a robot. The tele-actor would go into remote environments, and people watching online could experience what the actor was seeing and hearing, and determine what actions the tele-actor would take. When the online group couldn’t decide what to do, the tele-actor would act on gut instinct. This taught Goldberg another lesson: when in doubt, improvise.

The third lesson came when Goldberg’s father was in the hospital undergoing chemotherapy. Brachytherapy was also being performed at the hospital, and Goldberg worked with his students to develop a robot that would target tumors with radiation while avoiding the body’s organs. The project taught him the lesson that when your path is blocked, you pivot.

Finally, Goldberg discussed the da Vinci surgical robot, which gives a surgeon freedom to concentrate on the complicated parts of surgery while automating the non-essential tasks. Using several human motion captures, dynamic time warping, iterative learning and Kalman filtering, Goldberg was able to teach the movement sequences to a robot that could, over time, work at ten times the speed of a human. This project taught the lesson that there’s no substitute for practice, practice, practice.
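Of the techniques Goldberg mentions, dynamic time warping is the easiest to sketch: it aligns two traces of the same motion recorded at different speeds. A minimal illustration with toy data (not actual surgical trajectories):

```python
# Minimal dynamic time warping (DTW): aligns two 1-D motion traces
# recorded at different speeds. Toy data, not real surgical motions.

def dtw_distance(a: list[float], b: list[float]) -> float:
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(a[i - 1] - b[j - 1])
            cost[i][j] = step + min(cost[i - 1][j],      # stretch a
                                    cost[i][j - 1],      # stretch b
                                    cost[i - 1][j - 1])  # match
    return cost[n][m]

fast = [0.0, 1.0, 2.0, 1.0, 0.0]                          # quick pass
slow = [0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 0.0, 0.0]  # same motion, slower
print(dtw_distance(fast, slow))  # → 0.0: DTW sees them as the same motion
```

A plain sample-by-sample comparison couldn’t even be taken here, since the traces have different lengths; DTW’s warped alignment is what lets multiple human demonstrations be averaged into one clean trajectory.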

Ken Goldberg is a compelling speaker and does a great job of framing his projects and ideas in basic and easily understandable terms. This TED Talk is a few years old but filled with incredibly interesting ideas about human-robot interactions.

