
Seven steps into the Google vision (2)

December 8, 2015

We mentioned in the first part of this article how Google X, now a direct subsidiary of Alphabet, is the company’s R&D facility. Located in Mountain View, CA, this is the place where almost all of Google’s “moonshot” projects are developed.

We also briefly introduced Project Loon, Project Wing and the Google Self-Driving Car Project – three projects migrating from the “moonshot” category into reality, with all that this entails. From collaborations and agreements with other tech entities to adapted legislation, Google’s visionary projects are changing the world as we know it.

Let’s look at other Google innovations that have created an online buzz:

  4. Google Project Tango

Google Project Tango is a 3D-mapping initiative that engaged various collaborators from across the globe. Bosch, Bsquare, CompalComm, ETH Zurich, Flyby Media, George Washington University, MMSolutions, Movidius, the University of Minnesota MARS Lab, the JPL Computer Vision Group, Ologic, OmniVision and the Open Source Robotics Foundation are just some of the research and development entities involved. The project was run by Google’s Advanced Technology and Projects group (ATAP), a former Motorola division.

In concrete terms, the project resulted in an Android device, the Tango tablet, that scans and maps the 3D environment around it in order to create a computerized 3D model of that environment. The device appeared on the Google Store in May 2015. Later in 2015, Google and Intel announced a Project Tango developer kit that pairs Intel’s RealSense 3D camera with Tango’s software, giving developers a better starting point for depth-sensing research.

Basically, as the Tango page for developers puts it, computer vision is used to give devices the ability to sense space and orient themselves. Motion tracking, area learning and depth perception are the core technologies involved, and such research clearly benefits future AI and robotics projects; it might also improve how the Google self-driving car interacts with its environment.
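
To make the idea more concrete, here is a minimal illustrative sketch, in plain Python rather than the actual Tango SDK, of how a pose estimate from motion tracking can be combined with depth-camera points to build a 3D model in a fixed world frame. The function names and sample values are hypothetical stand-ins for what the real SDK provides.

```python
# Illustrative sketch (not the actual Tango API): how a 6-DoF device pose from
# motion tracking can be combined with a depth-camera point cloud to place the
# measured points in a fixed world frame, which is the essence of 3D mapping.
import numpy as np

def pose_to_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def transform_point_cloud(points_device, device_pose):
    """Map Nx3 depth points from the device frame into the world frame."""
    homogeneous = np.hstack([points_device, np.ones((points_device.shape[0], 1))])
    return (device_pose @ homogeneous.T).T[:, :3]

# Hypothetical example: a device held 1.5 m above the origin, no rotation.
pose = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 1.5]))
depth_points = np.array([[0.1, 0.0, 2.0],   # a point roughly 2 m in front of the sensor
                         [0.0, 0.2, 2.5]])
world_points = transform_point_cloud(depth_points, pose)
print(world_points)
```

As the device moves, repeating this transform for every frame accumulates depth measurements into a single, consistent 3D model of the surroundings.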

Currently, any interested developer can purchase a developer kit and advance their own research with this reality-to-machine bridging device.

Billed as “a mobile device that can see how you see”, Tango technology benefits augmented reality by translating space and motion into digital form. The device is available for purchase in the U.S. and features “a wide-angle camera, a depth sensing camera, accurate sensor timestamping, and a software stack that enables application developers to use motion tracking, area learning and depth sensing,” while the Tango SDK files are available for download.

The dev kits are also available in Canada, Denmark, Finland, France, Germany, Ireland, Italy, Korea, Norway, Sweden, Switzerland, and the United Kingdom.

Anyone interested can therefore build on Google’s research and results in 3D mapping via these kits and downloadable files, and take them further in related projects of their own.

  5. TensorFlow

Part of the Google Brain project, TensorFlow is now an open-source software library for machine learning. Released in November 2015, it is the second-generation system used by Google in various products, such as Google Search, Google Photos and Gmail. The first generation was DistBelief, a large-scale software system started under Stanford University professor Andrew Ng and built to take advantage of Google’s computing infrastructure.

TensorFlow implements a very accurate and powerful form of AI known as “deep learning”, which can recognize spoken words, facial traits and expressions, and perform various other tasks. Some of TensorFlow’s features, as its Google page explains, are deep flexibility, portability, auto-differentiation and maximized performance. By open-sourcing the (still incomplete) library, the company is looking to “create an open standard for exchanging research ideas and putting machine learning in products”.
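
For a sense of what this looks like in practice, here is a minimal sketch of a TensorFlow program written against the graph-and-session API the library shipped with in 2015 (it would need updating for current TensorFlow versions). The toy softmax classifier and the random training batch are illustrative only, not one of Google’s production models.

```python
# A minimal sketch of the TensorFlow programming model at its 2015 release:
# a computation is described as a dataflow graph of tensors, then run in a session.
import tensorflow as tf
import numpy as np

# Inputs: 784-dimensional vectors (e.g. flattened 28x28 images), 10 output classes.
x = tf.placeholder(tf.float32, [None, 784])
y_true = tf.placeholder(tf.float32, [None, 10])

# A single softmax layer; real "deep learning" models stack many such layers.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y_pred = tf.nn.softmax(tf.matmul(x, W) + b)

# Cross-entropy loss; TensorFlow's auto-differentiation derives the gradients.
cross_entropy = -tf.reduce_sum(y_true * tf.log(y_pred + 1e-10))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # Dummy batch standing in for real training data.
    batch_x = np.random.rand(32, 784).astype(np.float32)
    batch_y = np.eye(10)[np.random.randint(0, 10, 32)].astype(np.float32)
    for _ in range(100):
        sess.run(train_step, feed_dict={x: batch_x, y_true: batch_y})
```

The graph is declared once and then executed repeatedly in a session, which is what lets the same model description run on laptops, servers or mobile devices.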

The same kind of neural-network research by Google engineers is behind the notorious images from the DeepDream program. You may check some examples here of how Google’s AI identifies and sorts online images by searching for patterns. Although profoundly interesting, the images produced via DeepDream are nevertheless quite disturbing from a human point of view. Identifying patterns without the layer of human interpretation that would give them their proper meaning leads to rather nightmarish images generated by this form of artificial intelligence. But then again, research on neural networks still has a long way to go.
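
As a rough illustration of the mechanism behind such images, the sketch below shows the gradient-ascent idea: nudge the input picture toward whatever patterns a network layer already responds to. The `layer_activation_and_gradient` function here is a hypothetical stand-in; a real DeepDream setup queries a trained convolutional network instead.

```python
# Simplified sketch of the gradient-ascent idea behind DeepDream-style images.
import numpy as np

def layer_activation_and_gradient(image):
    # Hypothetical placeholder: returns how strongly a chosen layer "sees" its
    # favourite patterns in the image, plus the gradient of that score with
    # respect to the pixels. A real implementation would query a neural network.
    score = np.sum(np.sin(image * 3.0))
    grad = 3.0 * np.cos(image * 3.0)
    return score, grad

def dream(image, steps=50, step_size=0.01):
    """Gradient ascent on the pixels: amplify whatever the layer already detects,
    until faint patterns dominate the picture."""
    img = image.copy()
    for _ in range(steps):
        _, grad = layer_activation_and_gradient(img)
        img += step_size * grad / (np.abs(grad).mean() + 1e-8)
    return img

hallucinated = dream(np.random.rand(64, 64, 3))
```

Because the network, not a human, decides which patterns to amplify, eyes, dog faces and pagodas surface everywhere, which is exactly what makes the results feel so unsettling.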

  6. Google Glass

The Google optical head-mounted display (OHMD), which delivers information to users via a small device, is by now well known to any tech enthusiast. Another product of the Google X labs, Google Glass first appeared in mid-2011, when the prototype weighed 8 pounds. By 2013 the headgear was considerably lighter: just another pair of glasses, one that could deliver information to its wearer.

Despite the viral campaign, it seems the wearable device did not make the expected market impact. The device gradually gained new Google features, like voice-control activation, facial recognition, translation and photo manipulation. Even so, individual adoption was not as high as expected. The company reoriented its OHMD toward more professional applications, such as healthcare and mass media. The latest in Google Glass is Glass at Work, which has its own dedicated webpage. Enterprise solutions seem to be the next step in promoting this particular product; it remains to be seen whether companies will consider it essential to improving their operations.

The new Google Glass is also described as “the enterprise edition”: foldable, tweaked and featuring a more rugged appearance. As far as the market is concerned, Google will have to compete with more experienced companies that have been providing business OHMDs for years.

Google Glass has also been rebranded as Project Aura. The most recent development linked to Project Aura is a screenless headset offered alongside the more traditional concept that includes a display. Businesses will just have to choose which of the two concepts better suits their needs.

  7. Google Fiber

Since the August 2015 restructuring, Google Fiber services and products belong to Alphabet Inc. Acting as a provider of broadband Internet and cable television through this subsidiary, Google aims to keep the same polished, perfectionist brand image with “super-fast Internet” and “crystal clear HDTV”. Target customers range from residential users to SMBs and property managers. The service is not yet available in all areas, but the company has announced further expansions.

Starting in Kansas City (September 2012), this service has built quite a name for itself, becoming associated with high-quality Internet and faster download and upload speeds.

You may see a CIO article here that covers Google Fiber in detail, providing both the upsides and the downsides, with customer testimonials. Others mention upcoming 5G speeds in relation to Google Fiber and see carrier 5G as a powerful competitor. We may therefore expect Google to come up with new features at some point, if 5G does indeed become the hot new Internet service and real competition for Google Fiber.

Other Google vision projects

Several other Google vision projects are still in the works. Since Google’s publicity machine is very efficient and not a day goes by without a piece of news on the company and its projects, tech enthusiasts will probably learn more about their progress in no time.

In this “other projects” category one might include the Google pill project (a pill that is supposed to diagnose the patient once swallowed) and the modular phone Project Ara (which suffered a few setbacks in the summer of 2015 but is nevertheless still moving forward, although delayed until 2016).

We have seen how the Google vision includes a future of omnipresent Internet, semi-autonomous technology based on enhanced AI capabilities, and improved connectivity overall. Both software and hardware need greater capacity to act as a viable infrastructure for all this, and the company is fast-tracking its research in that direction.