Google has finally launched its new TensorFlow object detection API. This release gives researchers and developers access to the same technology Google uses internally for tasks such as image search and street-number identification in Street View.
The company had been planning this release for quite some time, and it is finally available to the open source community. The system won Microsoft's Common Objects in Context (COCO) object detection challenge last year, beating 23 participating teams.
According to Google, the system was released to bring the general public closer to AI, and to invite developers and AI researchers to collaborate with the company and build new, innovative things using Google's technology.
Google is not the first company to offer AI technology to the general public, users and developers. Microsoft, Facebook, and Amazon have also given people access to their respective AI technologies. Moreover, at its recent WWDC, Apple also rolled out an AI technology named Core ML for its users.
One of the main benefits of this release is that the object detection system can be used on mobile phones. The system is based on the MobileNets family of image recognition models, which can handle tasks such as object detection, facial recognition, and landmark recognition.
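For readers who want to see what using one of these models looks like in practice, here is a minimal Python sketch that loads a frozen SSD-with-MobileNet detection graph and runs it on a single image. It assumes a model exported by the TensorFlow object detection API (TensorFlow 1.x era); the file path and tensor names below follow that API's usual conventions but should be treated as assumptions and checked against the model you actually download.

```python
# Minimal sketch: run a frozen SSD-MobileNet detection model on one image.
# Assumes TensorFlow 1.x and a model exported by the object detection API;
# the model path and tensor names are assumptions, not guaranteed.
import numpy as np
import tensorflow as tf
from PIL import Image

MODEL_PATH = "ssd_mobilenet_v1_coco/frozen_inference_graph.pb"  # assumed path

# Load the frozen graph into a new TensorFlow graph.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(MODEL_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# Run detection on a single RGB image and print the top results.
with tf.Session(graph=graph) as sess:
    image = np.expand_dims(np.array(Image.open("street.jpg")), axis=0)
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image},
    )
    for box, score, cls in zip(boxes[0][:5], scores[0][:5], classes[0][:5]):
        print(f"class {int(cls)}  score {score:.2f}  box {box}")
```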
Time is the enemy of first responders, and communication delays can cost lives. Unfortunately, during natural disasters and other crises, communications — both cellular and internet — are often overloaded by friends and family reaching out to those in affected areas. That extra network traffic has in the past impacted the ability of first responders to send and receive data.
Researchers at the Rochester Institute of Technology are testing a new protocol — developed with funding from the National Science Foundation and U.S. Ignite — that will allow first responders and emergency managers to send data-intensive communications over the internet regardless of the amount of other traffic eating up the available bandwidth.
The protocol — dubbed MultiNode Label Routing, or MNLR — runs below existing internet protocols, allowing other traffic to run simultaneously. Rather than relying on traditional routing protocols, it discovers routes based on the routers' labels, which in turn carry structural and connectivity information about the routers.
It also features an immediate failover mechanism so that if a link or node fails, it uses an alternate path as soon as the failure is detected, which also speeds transmission.
According to Nirmala Shenoy, professor in RIT’s Information Sciences and Technologies Department and principal investigator of the project, the protocol is designed to give transmissions over MNLR priority over other traffic so that critical data isn’t lost or delayed. “Because MNLR literally bypasses the internet protocol and other routing protocols, it can put other traffic on the Internet to a lower priority,” she said.
According to the RIT team, the protocol's ability to prioritize transmissions solves problems encountered during recent major hurricanes, when first responders and emergency managers had difficulty transmitting large but critical data files such as LIDAR maps and video chats.
“Sharing data on the internet during an emergency is like trying to drive a jet down the street at rush hour,” Jennifer Schneider, RIT professor and co-principal investigator, told RIT news. “A lot of the critical information is too big and data heavy for the existing internet pipeline.”
Another communications challenge during disasters is damage to network routers. Accordingly, the team built capabilities into the MNLR protocol to get around routing limitations of the major existing internet protocols. Specifically, the team included a faster failover mechanism so that if a router link fails, the transmission will be automatically rerouted more quickly than existing protocols support. In testing the protocol during a link failure, the team found that MNLR recovered in less than 30 seconds, while Border Gateway Protocol required about 150 seconds.
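The actual MNLR implementation has not been published in detail, but the routing-by-label idea can be illustrated with a small, purely hypothetical Python sketch: each router carries a hierarchical label, the next hop is chosen by longest shared label prefix, and a failed link is simply skipped on the very next lookup instead of waiting for a routing protocol to reconverge. Every name and data structure below is an assumption made for illustration only.

```python
# Illustrative sketch of label-based routing with failover (not the actual MNLR code).
# Each router advertises a label encoding its position in the routing hierarchy;
# a sender picks the next hop by label prefix and falls back as soon as a link dies.
from typing import Dict, List, Optional


class LabelRouter:
    def __init__(self, label: str):
        self.label = label                      # e.g. "1.2" encodes tier/branch position
        self.neighbors: Dict[str, bool] = {}    # neighbor label -> link is up?

    def add_link(self, neighbor_label: str) -> None:
        self.neighbors[neighbor_label] = True

    def mark_failed(self, neighbor_label: str) -> None:
        self.neighbors[neighbor_label] = False  # immediate failover: no convergence wait

    def next_hop(self, destination_label: str) -> Optional[str]:
        """Pick the live neighbor whose label shares the longest prefix with the destination."""
        candidates: List[str] = [n for n, up in self.neighbors.items() if up]
        if not candidates:
            return None

        def shared_prefix(a: str, b: str) -> int:
            count = 0
            for x, y in zip(a.split("."), b.split(".")):
                if x != y:
                    break
                count += 1
            return count

        return max(candidates, key=lambda n: shared_prefix(n, destination_label))


# Usage: if the preferred link fails, the very next lookup returns the alternate path.
r = LabelRouter("1.1")
r.add_link("1.2")            # primary path toward destination "1.2.5"
r.add_link("2.1")            # alternate path
print(r.next_hop("1.2.5"))   # -> "1.2"
r.mark_failed("1.2")
print(r.next_hop("1.2.5"))   # -> "2.1"
```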
While the team is continuing to refine the protocol, Shenoy acknowledged that deployment of MNLR will face two hurdles. First, the protocol has to be broadly adopted into current routers. “That depends on equipment vendors,” she said.
Second, heavy use of MNLR during an emergency will affect other internet traffic. While most service providers and even customers are likely to be OK with that, service agreements will likely need to be modified to account for variable service during emergencies.
Amazon Linux Now Available for On-Premises Development/Testing
Finally (well, since November 2016), Amazon Web Services is letting customers download its own flavour of Linux.
Cloud instances have often been suggested as the ideal test and dev environment on cost-avoidance grounds. AWS says it made its Linux available after customer requests to do more development on-premises. Those requests don't represent a bursting of the cloud bubble, but it's nonetheless notable that developers feel the need to do some testing without paying for it by the hour.
More often than not, we need to deploy a local development or testing instance of our app or product. We have all gone through the same routine:
Developer Testing & Code Reviews
Automated Tests with CI
Meticulous testing and validation by QA
Deploy in Staging
And then all hell breaks loose on you!
It could be anything, ranging from a simple charset/locale mismatch to the wrong version of the JVM or of a library. Sometimes it could be something more sinister, like the AWS ALB/ELB handling the request differently from the way your app server intended to handle it!
So, many DevOps, SCM and product owners prayed at the almighty altar of AWS. And their prayers have been answered.
The cloud giant’s chief evangelist Jeff Barr made the announcement in this blog post.
The company has released its Linux container image so that those planning a move into its cloud can test their software and workloads on-premises. Previously, the image was only accessible in the cloud, to customers running virtual machine instances on AWS.
The image is available from the EC2 Container Registry (read Pulling an Image to learn how to access it). It is built from the same source code and packages as the AMI and will give you a smooth path to container adoption. You can use it as-is or as the basis for your own images.
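If you want to try the image locally, something along the following lines should work once Docker is authenticated against the EC2 Container Registry. The snippet uses the Docker SDK for Python rather than the raw docker CLI, and the registry URI is a placeholder; the exact URI and login steps are in the "Pulling an Image" documentation referenced above.

```python
# Minimal sketch: pull the Amazon Linux container image and run a command inside it.
# Assumes the Docker SDK for Python (`pip install docker`) and that Docker is already
# logged in to the EC2 Container Registry; the registry URI below is a placeholder.
import docker

IMAGE = "<your-ecr-registry>/amazonlinux"   # placeholder; see "Pulling an Image"

client = docker.from_env()
client.images.pull(IMAGE, tag="latest")

# Start a throwaway container and check which release we got.
output = client.containers.run(f"{IMAGE}:latest", "cat /etc/system-release", remove=True)
print(output.decode().strip())
```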
And Here you Go!!
Image Courtesy: Amazon Inc. (c)
Ubuntu Retires Unity, Desktop Switching Back To GNOME
Approximately six years ago, Canonical made #Unity the default shell in Ubuntu. This was touted as a step toward bringing Ubuntu to hitherto unavailable devices like tablets and mobiles. I personally was not amused. The Unity shell had been available in one form or another for some time before that, namely in the Ubuntu Netbook Remix.
Last week, Mark Shuttleworth, founder of Ubuntu and Canonical, confirmed in a post on the company's official blog that the company is giving up on Unity and that the default Ubuntu desktop will shift back to GNOME for Ubuntu 18.04 LTS.
Shuttleworth reiterated the company's commitment to the Ubuntu desktop that millions of users across the globe rely on. He said that Canonical will continue to produce this open source desktop, maintain existing LTS releases, work with commercial partners to distribute Ubuntu, and provide support to corporate customers. Nothing is changing on that front.
He points out that the community viewed the Unity effort as fragmentation rather than innovation, even though the aim was to deliver it as free software, an alternative to the closed options currently available to device manufacturers.
It is out of respect for the market’s wishes (or mounting pressure from the community) that Canonical has decided to shelve this project and shift the desktop back to #GNOME starting next year.
Bezos sells $1 bn in Amazon stock yearly to pay for Blue Origin
Yesterday, billionaire entrepreneur Jeff Bezos introduced the Blue Origin capsule to the press corps.
Speaking Wednesday at the 33rd Space Symposium in Colorado Springs, Colorado, Bezos vowed to lower the cost of space travel and start taking customers to space by next year. He said he is selling $1 billion in stock of his retail giant Amazon each year to finance his rocket company, Blue Origin, which aims to carry tourists to space by 2018.
The entrepreneur did not say how much a ticket would cost, as he showed off the New Shepard rocket and a mock-up of the large-windowed capsule that tourists will one day ride to suborbital space — just past the Karman Line some 62 miles (100 kilometers) above Earth — and back.
Bezos did say that the next-generation New Glenn rocket, which would be powerful enough to reach orbit and is expected to start flying satellites by 2020, is expected to cost $2.5 billion to develop.
“My business model right now for Blue Origin is that I sell about $1 billion a year of Amazon stock and I use it to invest in Blue Origin,” he said.
“It’s very important that Blue Origin stand on its own feet and be a profitable, sustainable enterprise. That’s how real progress gets made.”
Bezos, a lifelong space enthusiast, founded Blue Origin in 2000.
The Bluetooth Special Interest Group (SIG) gave the green light to Bluetooth 5 this week, a new spec that promises some pretty radical performance enhancements over its predecessor, according to the organization.
The latest version of the ubiquitous wireless technology is said to offer twice the speed, four times the range and eight times the capacity for broadcast messages. All of those bumps are firmly targeted at Bluetooth's increasing importance as a standard for the connected home. The update also includes some fixes designed to limit its interference with other wireless technologies.
Audio looks to be less of a focus this time out – a bit surprising, perhaps, given that smartphone manufacturers are rapidly pushing wireless headphone adoption by accelerating the death of the standard jack on devices like the iPhone 7. This spec, however, is all about the internet of things. According to the official release, “Bluetooth continues to embrace technological advancements and push the unlimited potential of the IoT.”
This week’s adoption means we can expect to start seeing the first Bluetooth 5 devices within the next two to six months, according to the organization.
Open Source vs. Open Governance: The State and Future of the Open Source Movement
Last week, DataStax announced that it was jettisoning its role in maintaining the Planet Cassandra community resource site, even as the project lead, Jonathan Ellis, made it known that DataStax would be doubling down on its commercial product, rather than Cassandra. Though the DataStax team put a brave face on the changes, the real question is why DataStax had to change at all.
Similarly, when Sun and then Oracle bought MySQL AB, the company behind the original development, governance of the open source MySQL database gradually closed. Now, only Oracle writes updates, patches and features. Updates from other sources — individuals or other companies — are ignored.
These are two opposite extremes in the open source movement. When an open source project reaches a critical threshold and following, it grows bigger than its chief contributor, whether that is a company or a group of people. Even if that contributor can no longer commit resources, the community will take care of everything: features, development, support, documentation and everything in between. This is the real essence of open source development.
However, an initial sponsor is absolutely necessary for a project to thrive in its infancy. MySQL is still open source, but it has closed governance.
In the case of MySQL, the community forked the source code, and the MariaDB project started from there. Nowadays, when somebody says they are “using MySQL”, they are in fact probably using MariaDB, which has evolved from the point where MySQL stopped.
Take a look at the GitHub page of MySQL for reference.
All core committers are from Oracle!
Cassandra is still open source, but now it has open governance.
The Cassandra question is ultimately about control. As the ASF board noted in the minutes from its meeting with DataStax representatives, “The Board expressed continuing concern that the PMC was not acting independently and that one company had undue influence over the project.” Given that DataStax has been Cassandra’s primary development engine since the day it spun out of Facebook, this “undue influence” is hardly new.
And, according to some of those closest to Cassandra, like former Cassandra MVP Kelly Sommers, that “undue influence” has borne exceptional fruit. Sommers clearly feels this way, insisting that the ASF “is really out of line in their actions with Cassandra,” ultimately concluding that the ASF might be hostile to the very people most responsible for a project's success.
In her view, the ASF’s search for diversity in the Cassandra project should have started with expanding its existing leadership, rather than cutting it out: “The ASF forced DataStax to reduce their role in Cassandra rather than forming a long-term strategy to grow diversity around theirs,” Sommers said.
Though Sommers doesn’t directly comment on the trademark issues, she didn’t pull any punches in her disdain for project process over code results: “Politics is off the rails when focus is lost on success of the thing it runs and all that matters is process. This is how I feel ASF operates.”
So, for companies hoping to monetise open source, the Cassandra blow-up is a not-so-subtle reminder that community can be inimical to commercial interests, however much it can fuel adoption. It may also be a signal to the ASF that less corporate influence on projects could yield less code.
Now, back to our agenda.
Open source vs. open governance
Open source software’s momentum serves as a powerful insurance policy for the investment of time and resources an individual or enterprise user will put into it. This is the true benefit behind Linux as an operating system, Samba as a file server, Apache HTTPD as a web server, Hadoop, Docker, MongoDB, PHP, Python, JQuery, Bootstrap and other hyper-essential open source projects, each on its own level of the stack. Open source momentum is the safe antidote to technology lock-in. Having learned that lesson over the last decade, enterprises are now looking for the new functionalities that are gaining momentum: cloud management software, big data, analytics, integration middleware and application frameworks.
In the open source domain, the only two non-functional things that matter in the long term are whether the software is open source and whether it has attained momentum in the community and industry. Neither of these is related to how the software is being written, but that is exactly what open governance is concerned with: the how.
The value of momentum
Open governance alone does not guarantee that the software will be good, popular or useful (though formal open governance tends to happen only on projects that have already captured some attention from IT industry leaders). A few examples of open source projects with formal open governance are Cloud Foundry, OpenStack, jQuery and all the projects under the Apache Software Foundation umbrella.
For users, the indirect benefit of open governance relates only to how quickly the open source project reaches momentum and high popularity.
In conclusion, balancing open source development and open governance of that development is a very delicate act. Oracle failed to foster diversity, thereby spawning MariaDB, while the ASF ejected DataStax to avoid a repeat of the former!
As an enterprise/solution architect, I have been called upon umpteen times to help make decisions on various technology platforms. Based on the company's product/service road-map and digital strategy (in some cases defining these along the way), I have also helped them select a WCMS.
Personally, though, I am more inclined toward Drupal/Acquia (and previously Zope) among open source options, and Sitecore and CQ5 (now rebranded as AEM) among enterprise-class products. Still, I have found the reports of three research firms very useful, and at times helpful in convincing the client. Prime among them are Gartner, Forrester and IDC.
Naturally, I follow them with interest, and am sharing the latest Magic Quadrant here.
This year, Gartner Research downgraded OpenText, SDL and HP from the leaders' quadrant in its latest industry report on web content management (WCM).
Previous contenders Sitecore, Adobe, Acquia, EPiServer, IBM and Oracle retained their spots on the leaderboard in the Stamford, Conn.-based research firm's Magic Quadrant for Web Content Management, which it released yesterday. According to Gartner report authors Mick MacComascaigh and Jim Murphy, Boston-based Acquia made the biggest positive move, closing the gap on Copenhagen-based Sitecore and San Jose, Calif.-based Adobe.
But Sitecore and Adobe still lead the WCM pack, based on Gartner’s criteria of completeness of vision and ability to execute. It was the same story last year, with Sitecore edging Adobe for execution but Adobe winning in the vision department, Gartner concluded.
Acquia, a Drupal-based, open source content management system, jumped from a visionary to a leader in 2014 and hasn’t left the leaders’ spot since. It’s the lone open source vendor among the leaders.
Microsoft, rated a niche player last year, failed to make the cut this year. MacComascaigh and Murphy said Microsoft has focused its attention more on the digital workplace and less on WCM.
The report places vendors in one of four quadrants:
Leaders: Those who “drive market transformation” and are “prepared for the future with a clear vision and a thorough appreciation of the broader context of digital business.”
Challengers: Those who may have a strong WCM product but have a product strategy that “does not fully reflect market trends.”
Visionaries: Those that are “forward-thinking and technically focused” but need to improve and execute better.
Niche Players: Those who focus on a particular segment of the market, such as size, industry and project complexity. But that, according to Gartner authors, “can affect their ability to outperform their competitors or to be more innovative.”
GitHub will release as open source the GitHub Load Balancer (GLB), its internally developed load balancer.
GLB was originally built to accommodate GitHub's need to serve billions of HTTP, Git, and SSH connections daily. Now the company will release components of GLB as open source, and it will share design details. This is seen as a major step in building scalable infrastructure using commodity hardware. For more details, please refer to the GitHub Engineering post.
GE & Bosch to leverage open source to deliver IoT tools
Partnerships that could shape the internet of things for years are being forged just as enterprises fit IoT into their long-term plans.
A vast majority of organisations have included #IoT in their strategic plans for the next two to three years. No single vendor can meet the diverse #IoT needs of all customers, so vendors are joining forces while also trying to foster broader ecosystems. General Electric and Bosch recently announced their intention to do both.
The two companies, both big players in #IIoT, said they will establish a core IoT software stack based on open-source software. They plan to integrate parts of GE’s #Predix operating system with the #Bosch IoT Suite in ways that will make complementary software services from each available on the other.
The work will take place in several existing open-source projects under the #Eclipse Foundation. These projects are creating code for things like messaging, user authentication, access control and device descriptions. Through the Eclipse projects, other vendors also will be able to create software services that are compatible with Predix and Bosch IoT Suite, said Greg Petroff, executive director of platform evangelism at GE Software.
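To give a flavour of the kind of building block these Eclipse projects provide, here is a minimal Python sketch that publishes a device reading over MQTT using the Eclipse Paho client. The broker host, topic and credentials are placeholders, not actual Predix or Bosch IoT Suite endpoints, so treat it as an illustration of open-source IoT messaging rather than a recipe for either platform.

```python
# Minimal sketch: publish a device reading over MQTT with Eclipse Paho (Python client).
# The broker host, port, topic and credentials below are placeholders, not real endpoints.
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"   # placeholder broker
TOPIC = "plant/line1/temperature"    # placeholder topic

client = mqtt.Client(client_id="sensor-42")
client.username_pw_set("device-user", "device-password")   # placeholder credentials
client.connect(BROKER_HOST, port=1883)
client.loop_start()

# Publish one JSON-encoded reading with at-least-once delivery (QoS 1).
payload = json.dumps({"deviceId": "sensor-42", "tempC": 71.3})
client.publish(TOPIC, payload, qos=1)

client.loop_stop()
client.disconnect()
```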
If enterprises can draw on a broader set of software components that work together, they may look into doing things with IoT that they would not have considered otherwise, he said. These could include linking IoT data to ERP or changing their business model from one-time sales to subscriptions.
GE and Bosch will keep the core parts of Predix and IoT Suite unique and closed, Petroff said. In the case of Predix, for example, that includes security components. The open-source IoT stack will handle fundamental functions like messaging and how to connect to IoT data.
Partnerships and open-source software both are playing important roles in how IoT takes shape amid expectations of rapid growth in demand that vendors want to be able to serve. Recently, IBM joined with Cisco Systems to make elements of its Watson analytics available on Cisco IoT edge computing devices. Many of the common tools and specifications designed to make different IoT devices work together are being developed in an open-source context.