MUHAMMAD RIFQI ZULHELMI
11 min read · Sep 8, 2019

Future Trends of Technology

Humans have always stored, retrieved, manipulated, and communicated information. We are familiar with technology because we use it every day, for example through our smartphones. Technology keeps developing as human thinking advances and new innovations emerge that are more useful and more efficient. The technologies created must also be environmentally friendly to reduce the negative impact of globalization; these are the kinds of technologies we will see in the future.

Machine Learning:

This is a field of computer science that gives computer systems the ability to “learn” (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed.

The name machine learning was coined in 1959 by Arthur Samuel. Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms overcome strictly static program instructions by making data-driven predictions or decisions, building a model from sample inputs.
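
As a minimal sketch of what “learning from data” can mean, the snippet below fits a straight line to a handful of sample points with gradient descent; the data, learning rate and iteration count are illustrative assumptions rather than a reference implementation.

```python
# Minimal sketch: fit y = w*x + b to sample data by gradient descent.
# The data and hyperparameters below are illustrative assumptions.

samples = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 9.0)]  # noisy y ~ 2x + 1

w, b = 0.0, 0.0          # model parameters, learned from the data
learning_rate = 0.01

for step in range(5000):
    grad_w = grad_b = 0.0
    for x, y in samples:
        error = (w * x + b) - y          # prediction error on one sample
        grad_w += 2 * error * x          # gradient of squared error w.r.t. w
        grad_b += 2 * error              # gradient of squared error w.r.t. b
    w -= learning_rate * grad_w / len(samples)
    b -= learning_rate * grad_b / len(samples)

print(f"learned model: y = {w:.2f} * x + {b:.2f}")   # close to y = 2x + 1
```

The program is never told that the answer is roughly y = 2x + 1; it infers the parameters from the samples, which is the essence of the definition above.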

Internet of Things:

The Internet of Things (IoT) is the network of physical devices, vehicles, home appliances and other items embedded with electronics, software, sensors, actuators, and connectivity which enables these objects to connect and exchange data.

Each thing is uniquely identifiable through its embedded computing system but is able to inter-operate within the existing Internet infrastructure.
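
As an illustration of a “thing” exchanging data over the existing Internet infrastructure, here is a hypothetical sketch of a sensor node posting a JSON reading; the endpoint URL, device ID and payload fields are made up for the example.

```python
# Hypothetical sketch: an IoT sensor node posting a JSON reading to a
# collection endpoint. The URL and payload fields are illustrative only.
import json
import urllib.request

reading = {
    "device_id": "thermostat-42",   # unique identifier of the "thing"
    "temperature_c": 21.7,
    "humidity_pct": 48,
}

request = urllib.request.Request(
    "https://example.com/iot/readings",          # placeholder endpoint
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print("server replied with status", response.status)
```

In practice many deployments use lightweight messaging protocols such as MQTT or CoAP instead of plain HTTP, but the idea of small devices pushing data to a collector is the same.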

The number of online-capable devices increased 31% from 2016 to 8.4 billion in 2017. Experts estimate that the IoT will consist of about 30 billion objects by 2020. It is also estimated that the global market value of IoT will reach $7.1 trillion by 2020.

The term “the Internet of things” was coined by Kevin Ashton of Procter & Gamble, later MIT’s Auto-ID Center, in 1999.

Blockchain:

A blockchain, originally block chain, is a continuously growing list of records, called blocks, which are linked and secured using cryptography.

Each block typically contains a cryptographic hash of the previous block, a timestamp and transaction data. By design, a blockchain is inherently resistant to modification of the data.

It is “an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way”. For use as a distributed ledger, a blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for inter-node communication and validating new blocks. Once recorded, the data in any given block cannot be altered retroactively without the alteration of all subsequent blocks, which requires collusion of the network majority.
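
The hash-linking described above can be sketched in a few lines of Python. The example below is only a toy (no peer-to-peer network, consensus or proof-of-work); it simply shows why altering one block invalidates every block after it.

```python
# Minimal sketch of a hash-linked chain of blocks (not a full blockchain:
# no consensus, networking, or proof-of-work).
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's contents (excluding its own hash) deterministically.
    payload = json.dumps(
        {k: block[k] for k in ("index", "timestamp", "data", "prev_hash")},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, data):
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "data": data,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # link to the previous block is broken
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))        # True

chain[0]["data"] = "Alice pays Bob 500"   # retroactive tampering
print(is_valid(chain))        # False: every later block would need rewriting
```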

Blockchain was invented by Satoshi Nakamoto in 2008 for use in the cryptocurrency bitcoin, as its public transaction ledger.

Quantum Computing

Quantum computers are incredibly powerful machines that take a new approach to processing information.

Built on the principles of quantum mechanics, they exploit complex and fascinating laws of nature that are always there, but usually remain hidden from view.

By harnessing this natural behavior, quantum computers can run new types of algorithms that process information more holistically. They may one day lead to revolutionary breakthroughs in materials and drug discovery, the optimization of complex man-made systems, and artificial intelligence. We expect them to open doors that we once thought would remain locked indefinitely.
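
A classical computer can only simulate the underlying mathematics, but a small state-vector sketch still shows the basic idea of superposition. The example below (assuming NumPy) puts a single simulated qubit through a Hadamard gate, after which a measurement yields 0 or 1 with equal probability.

```python
# Sketch: simulating a single qubit with a state vector (NumPy assumed).
# A real quantum computer manipulates physical qubits; this only mimics the math.
import numpy as np

ket0 = np.array([1.0, 0.0])                  # the |0> state
hadamard = np.array([[1.0, 1.0],
                     [1.0, -1.0]]) / np.sqrt(2)

state = hadamard @ ket0                      # put the qubit into superposition
probabilities = np.abs(state) ** 2           # Born rule: measurement probabilities

print(state)          # approximately [0.707 0.707]
print(probabilities)  # [0.5 0.5] -> equal chance of measuring 0 or 1
```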

Acquaint yourself with the strange and exciting world of quantum computing.

3D Printing

3D printing refers to processes in which material is joined or solidified under computer control to create a three-dimensional object, with material being added together (such as liquid molecules or powder grains being fused together). 3D printing is used in both rapid prototyping and additive manufacturing (AM). Objects can be of almost any shape or geometry and are typically produced, usually in sequential layers, from digital model data such as a 3D model or another electronic data source such as an Additive Manufacturing File (AMF). There are many different technologies, such as stereolithography (SLA) and fused deposition modeling (FDM). Thus, unlike conventional machining, where material is removed from a stock piece, 3D printing or AM builds a three-dimensional object from a computer-aided design (CAD) model or AMF file, usually by successively adding material layer by layer.

3D printing or additive manufacturing is a process of making three dimensional solid objects from a digital file. The creation of a 3D printed object is achieved using additive processes. In an additive process an object is created by laying down successive layers of material until the object is created. Each of these layers can be seen as a thinly sliced horizontal cross-section of the eventual object.
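
The layer-by-layer idea can be illustrated with a toy slicing calculation. Real slicers work on triangle meshes (for example STL files) and generate toolpaths; the sketch below merely computes the circular cross-section of a sphere at each layer height, with the dimensions chosen arbitrarily.

```python
# Illustrative sketch of the "slicing" idea: compute the horizontal
# cross-section of a simple shape (a sphere) at each layer height.
import math

radius = 10.0        # sphere radius in mm (illustrative)
layer_height = 2.0   # thickness of each printed layer in mm

z = -radius
layer = 0
while z <= radius:
    # The cross-section of a sphere at height z is a circle of this radius.
    section_radius = math.sqrt(max(radius**2 - z**2, 0.0))
    print(f"layer {layer:2d} at z={z:+6.1f} mm -> circle radius {section_radius:5.2f} mm")
    z += layer_height
    layer += 1
```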

3D printing is the opposite of subtractive manufacturing, which cuts away or hollows out a piece of metal or plastic with, for instance, a milling machine. 3D printing enables you to produce complex (functional) shapes using less material than traditional manufacturing methods.

The term “3D printing” originally referred to a process that deposits a binder material onto a powder bed with inkjet printer heads, layer by layer. More recently, the term has entered the popular vernacular to encompass a wider variety of additive manufacturing techniques. United States and global technical standards use the official term additive manufacturing for this broader sense, since the ultimate goal of additive manufacturing is mass production, which differs greatly from 3D printing for rapid prototyping.

Robotics

Robotics is an interdisciplinary branch of engineering and science that includes mechanical engineering, electrical engineering, computer science, and others. Robotics deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing.

These technologies are used to develop machines that can substitute for humans and replicate human actions. Robots can be used in any situation and for any purpose, but today many are used in dangerous environments (including bomb detection and de-activation), manufacturing processes, or where humans cannot survive. Robots can take on any form but some are made to resemble humans in appearance. This is said to help in the acceptance of a robot in certain replicative behaviors usually performed by people. Such robots attempt to replicate walking, lifting, speech, cognition, and basically anything a human can do. Many of today’s robots are inspired by nature, contributing to the field of bio-inspired robotics.

The concept of creating machines that can operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century. Throughout history, it has been frequently assumed that robots will one day be able to mimic human behavior and manage tasks in a human-like fashion. Today, robotics is a rapidly growing field, as technological advances continue; researching, designing, and building new robots serve various practical purposes, whether domestically, commercially, or militarily. Many robots are built to do jobs that are hazardous to people such as defusing bombs, finding survivors in unstable ruins, and exploring mines and shipwrecks. Robotics is also used in STEM (science, technology, engineering, and mathematics) as a teaching aid.

Robotics is a branch of engineering that involves the conception, design, manufacture, and operation of robots. This field overlaps with electronics, computer science, artificial intelligence, mechatronics, nanotechnology and bio-engineering.
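
The control and sensory-feedback side of robotics often comes down to a sense-think-act loop. The following hypothetical sketch steers a simulated robot toward a target heading with a simple proportional controller; the gain, timing and headings are arbitrary assumptions.

```python
# Hypothetical sense-think-act loop: a simulated robot turns toward a target
# heading using proportional feedback. All numbers are illustrative.
import math

target_heading = math.radians(90.0)   # where we want to point
heading = math.radians(0.0)           # current heading (the "sensor" reading)
gain = 2.0                            # proportional controller gain (assumed)
dt = 0.1                              # control-loop period in seconds

for step in range(100):
    error = target_heading - heading          # sense: how far off are we?
    turn_rate = gain * error                  # think: proportional correction
    heading += turn_rate * dt                 # act: command the turn
    if abs(error) < math.radians(1.0):
        break

print(f"settled at {math.degrees(heading):.1f} degrees after {step + 1} steps")
```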

Biometrics

Biometrics is the measurement and statistical analysis of people’s unique physical and behavioral characteristics. The technology is mainly used for identification and access control, or for identifying individuals who are under surveillance. The basic premise of biometric authentication is that every person can be accurately identified by his or her intrinsic physical or behavioral traits.

The term biometrics is derived from the Greek words bio meaning life and metric meaning to measure.

Types of biometrics

The two main types of biometric identifiers depend on either physiological characteristics or behavioral characteristics.

Physiological identifiers relate to the composition of the user being authenticated and include facial recognition, fingerprints, finger geometry (the size and position of fingers), iris recognition, vein recognition, retina scanning, voice recognition and DNA matching.

Behavioral identifiers include the unique ways in which individuals act, including recognition of typing patterns, walking gait and other gestures. Some of these behavioral identifiers can be used to provide continuous authentication instead of a single one-off authentication check.
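
A behavioral identifier such as typing rhythm can be reduced to a feature vector and compared against an enrolled template. The sketch below uses a plain Euclidean distance and a made-up acceptance threshold; real systems use far richer features and statistical models.

```python
# Hypothetical sketch of behavioral biometric matching: compare a stored
# keystroke-timing template with a fresh sample. Values and the threshold
# are illustrative, not from any real system.
import math

enrolled_template = [0.21, 0.35, 0.18, 0.42, 0.27]   # seconds between keystrokes
login_sample      = [0.23, 0.33, 0.20, 0.45, 0.26]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

THRESHOLD = 0.10   # maximum distance accepted as "same person" (assumed)

d = distance(enrolled_template, login_sample)
print(f"distance = {d:.3f}")
print("accepted" if d <= THRESHOLD else "rejected")
```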

Augmented Reality

Augmented reality is an enhanced version of reality in which live direct or indirect views of physical, real-world environments are augmented with computer-generated images superimposed over the user’s view of the real world, enhancing one’s current perception of reality.

The origin of the word augmented is augment, which means to add or enhance something. In the case of Augmented Reality (also called AR), graphics, sounds, and touch feedback are added into our natural world to create an enhanced user experience.
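
At its simplest, the “superimposing graphics on a live view” part can be demonstrated with a webcam and OpenCV (assuming the opencv-python package is installed). The sketch below only draws a fixed overlay on each camera frame; a real AR system would also track the environment so the graphics stay registered to the world.

```python
# Toy illustration of superimposing graphics on a live camera view with OpenCV.
# Assumes the opencv-python package and a webcam; not a full AR system.
import cv2

capture = cv2.VideoCapture(0)                 # open the default camera

while True:
    ok, frame = capture.read()
    if not ok:
        break

    # Draw computer-generated content on top of the camera image.
    cv2.rectangle(frame, (50, 50), (300, 150), (0, 255, 0), 2)
    cv2.putText(frame, "Hello, augmented world", (60, 120),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

    cv2.imshow("AR sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break

capture.release()
cv2.destroyAllWindows()
```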

Artificial Intelligence (AI)

Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include:

  • Speech recognition
  • Learning
  • Planning
  • Problem solving

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.

Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as:

  • Knowledge
  • Reasoning
  • Problem solving
  • Perception
  • Learning
  • Planning
  • Ability to manipulate and move objects

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information about the world. To implement knowledge engineering, an artificial intelligence must have access to objects, categories, properties and the relations between all of them. Instilling common sense, reasoning and problem-solving ability in machines is a difficult and tedious task.

Machine learning is another core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regression. Classification determines the category an object belongs to, while regression learns from a set of numerical input and output examples to discover functions that generate suitable outputs from the corresponding inputs. The mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.
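
To make the distinction concrete, the toy example below (assuming NumPy) classifies a new point by its nearest labeled neighbor and fits a least-squares line to numerical input/output examples; both data sets are invented for illustration.

```python
# Toy contrast between classification and regression (NumPy assumed).
import numpy as np

# --- Classification: which category does a new point belong to? ---
points = np.array([[1.0, 1.2], [0.8, 1.0], [4.0, 4.2], [4.3, 3.9]])
labels = np.array(["small", "small", "large", "large"])

new_point = np.array([3.8, 4.0])
nearest = np.argmin(np.linalg.norm(points - new_point, axis=1))
print("classified as:", labels[nearest])          # -> "large"

# --- Regression: what numeric output fits these input/output examples? ---
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.9, 5.1, 7.0, 9.2])                # roughly y = 2x + 1
slope, intercept = np.polyfit(x, y, 1)            # least-squares line fit
print(f"regression: y = {slope:.2f}x + {intercept:.2f}")
```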

Machine perception deals with the capability to use sensory inputs to deduce the different aspects of the world, while computer vision is the power to analyze visual inputs with a few sub-problems such as facial, object and gesture recognition.

Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.
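
Motion planning on a known map can be illustrated with a breadth-first search over a small occupancy grid. The grid, start and goal below are arbitrary; practical planners use algorithms such as A* or sampling-based methods on much richer maps.

```python
# Minimal motion-planning sketch: breadth-first search for the shortest path
# on a small occupancy grid (1 = obstacle). Purely illustrative.
from collections import deque

grid = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
start, goal = (0, 0), (3, 3)

def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            break
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    if goal not in came_from:
        return None                      # no route through the obstacles
    path, node = [], goal
    while node is not None:              # walk backwards from goal to start
        path.append(node)
        node = came_from[node]
    return path[::-1]

print(shortest_path(grid, start, goal))
```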

Virtual Intelligence

In science fiction (most notably the Mass Effect series), a Virtual Intelligence (VI) is a sophisticated program designed to make modern computer systems easier to use. VIs are not to be confused with artificial intelligences like the geth, as they are only used to assist the user and process data (although, like AIs, they can still get out of hand). Though they appear to be intelligent, they are not actually self-aware, just built with clever programming.

Some VIs have ‘personality imprints’, with their behavior parameters, speech patterns and appearance based on specific individuals, although it is illegal to make VIs based on currently living people. The quarians were even attempting to make their ‘ancestor VIs’ (virtual intelligences preserving the wisdom and personality of their ancestors) truly intelligent, to create a kind of virtual immortality, when the geth rebellion put an end to most of their research into synthetic intelligence.

Virtual Intelligence: Function

VIs vary greatly depending on how they are deployed. They can handle search queries on the extranet, act as tour guides — as in the case of the Citadel VI, Avina — or manage sophisticated lab and database work, like Mira on Noveria. Both the humans and the elcor make extensive use of VIs in their military endeavors to process status reports, react faster than organics can or — in the case of the elcor — choose instantly between millions of gambits designed for any combat situation.

A lot of armor upgrades use VI enhancements, so the onboard computer can optimize the armor’s combat performance or dispense medi-gel to heal the user. The newest biotic implants, the L4 iteration, use VI technology to constantly monitor the biotic’s brain waves and adapt the implant’s performance to maximise biotic potential.

Drones

A gizmo you might call a “drone” could actually fall into a couple of broad categories. One is a fully autonomous vehicle that flies without any human intervention at all. The other is more like a remote-control flier: A pilot is still in charge, but they’re on the ground watching the drone, or in a room somewhere watching on a computer screen or through a pair of goggles. The two types involve different tech with different potentials, but they both count as drones. So we’ll consider them, for the purposes of this guide, one and the same.

The general idea of drones has been around for more than a century. It’s not a terribly novel concept, really: We’ve invented all these cool ways to fly around, but many of them are dangerous, so wouldn’t it be great if humans didn’t need to be sitting inside? You could point to Nikola Tesla’s 1898 demonstration of “teleautomation,” in which he remotely controlled a small boat over radio frequencies. Or to Charles Kettering, who built the “Kettering Bug,” a World War I–era automated missile. Maybe it was the Queen Bee, the first reusable unmanned aerial vehicle, which the British military used in the 1930s for target practice.

Autonomous Vehicles

An autonomous vehicle is one that can drive itself from a starting point to a predetermined destination in “autopilot” mode using various in-vehicle technologies and sensors, including adaptive cruise control, active steering (steer by wire), anti-lock braking systems (brake by wire), GPS navigation technology, lasers and radar.
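
As a rough illustration of one of those building blocks, the sketch below simulates a very simplified adaptive cruise controller that adjusts speed to hold a two-second gap behind a lead vehicle; the gains, speeds and distances are assumptions, not values from any real system.

```python
# Highly simplified sketch of adaptive cruise control: adjust our speed to
# hold a two-second gap behind a lead vehicle. All numbers are assumptions.
time_gap_s = 2.0            # desired time gap behind the lead vehicle
k_gap, k_speed = 0.2, 0.5   # feedback gains (assumed)
dt = 0.5                    # control-loop period in seconds

ego_speed = 30.0            # our speed in m/s
lead_speed = 25.0           # lead vehicle speed in m/s (from radar/lidar)
gap = 40.0                  # measured distance to the lead vehicle in metres

for _ in range(60):         # simulate 30 seconds of driving
    desired_gap = time_gap_s * ego_speed
    accel = k_gap * (gap - desired_gap) + k_speed * (lead_speed - ego_speed)
    ego_speed = max(ego_speed + accel * dt, 0.0)
    gap += (lead_speed - ego_speed) * dt

print(f"speed {ego_speed:.1f} m/s, gap {gap:.1f} m "
      f"(= {gap / ego_speed:.1f} s behind the lead vehicle)")
```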

Autonomous Car

An autonomous car is a vehicle that can guide itself without a human driver. This kind of vehicle has become a concrete reality and may pave the way for future systems where computers take over the art of driving.

An autonomous car is also known as a driverless car, robot car, self-driving car or autonomous vehicle. Driverless cars, including Google’s autonomous car design, have logged thousands of hours on American roads, but they are not yet commercially available on a large scale.

Autonomous cars use various kinds of technologies. They can be built with GPS technology to help with navigation, and they may use sensors and other equipment to avoid collisions. They also have the ability to use a range of technology known as augmented reality, where the vehicle displays information to drivers in new and innovative ways.

Some suggest that significant autonomous car production could cause problems with existing auto insurance and traffic controls used for human-controlled cars. Significant research on autonomous vehicles is underway, not only in the U.S., but also in Europe and other parts of the world. According to some in the industry, it is only a matter of time before these kinds of advances allow us to outsource our daily commute to a computer.

At the same time, mass transit theories like Elon Musk’s “hyperloop” design contemplate a future world where more guided transport takes place in public transit systems, rather than with individual car-like vehicles.