Author: [email protected]

How to Disable an Adblocker-blocker or Create an Anti-Adblock Killer!

History & Theory:

Digital Advertisement:

I get it. Ads are a necessary evil in the content-delivery game. Hell, I have been on the engineering side of content delivery for 10 years myself. Back in the days of the #dotcom #bubble, we endured banner ads. When #BigBrother, oops, #Google came along, they swept the market clean with their (initially, at least) non-intrusive text ads. People even appreciated the contextual advertising: just when you were searching for a suspension for your car, you would see four different ads for OEM-grade replacement suspensions, grease monkeys to install them, and so on.
Fast forward 10 years and Google is the global powerhouse of advertising. Google knows what your mother's cousin once removed likes and runs ads tailored to it on no fewer than 50 websites run by Google and countless other affiliates. The convenience transformed itself into a mild hindrance and then a major nuisance in no time. At their core, Google, Microsoft and Yahoo ads were all based on a relevance engine: based on the content currently served by the publisher (the website you're visiting), they search their database for relevant ads, and the one that matches the content and whose target profile matches yours (this is where privacy advocates go crazy) is the ad they serve. In its simplest form, the process looks something like the diagram below.

Figure: Contextual Advertisement – Process Flow (AdSense/AdWords)

For the inquisitive lot who want to know the technicalities, the real flow is a lot more complex than this, and it is presented below.
Figure: Tentative process flow of how AdWords and AdSense content advertising happens.
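
To make the matching step concrete, here is a minimal sketch of how such a relevance engine might rank ads. The data model and scoring here are hypothetical simplifications for illustration, not how AdSense actually works internally.

```python
# Minimal, hypothetical sketch of the ad-matching step from the flow above.
from dataclasses import dataclass, field

@dataclass
class Ad:
    text: str
    keywords: set                                      # terms the advertiser bid on
    target_profile: set = field(default_factory=set)   # audience interests

def pick_ads(page_content: str, user_profile: set, inventory: list, k: int = 4):
    """Score each ad by overlap with the page's words and the user's profile."""
    page_words = set(page_content.lower().split())
    def score(ad: Ad) -> int:
        relevance = len(ad.keywords & page_words)          # contextual match
        targeting = len(ad.target_profile & user_profile)  # profile match
        return relevance + targeting
    ranked = sorted(inventory, key=score, reverse=True)
    return [ad.text for ad in ranked if score(ad) > 0][:k]

inventory = [
    Ad("OEM-grade replacement suspension", {"suspension", "car"}, {"diy"}),
    Ad("Grease monkeys near you", {"suspension", "install"}, {"cars"}),
    Ad("Discount flights", {"travel", "flights"}),
]
print(pick_ads("best suspension for my car", {"cars", "diy"}, inventory))
```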

Enter ADBlockers:

And soon, people found a way to block the ads. As seen above, all of these adverts are programmed to run using great stores of data in the backend: when a user visits a site, a lot happens behind the scenes, and a script is used to fetch and render the resulting ad. Technically inclined people started writing custom scripts that would stop the script that renders the ads. In no time, all the bells and whistles came in: #blacklist and #whitelist support, #regular-expression matching and more. Once modern browsers shipped with content filtering built in, it was easy to supplement them with custom lists and scripts to block these ads. Adblockers became available for every device, OS and browser, and growing public awareness caused their use to explode around the 2013-2015 period (see graph below). So, all seems rosy from here.
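
To give a flavour of how that filtering works, here is a toy sketch that applies blacklist and whitelist rules to request URLs. The rule syntax is a deliberately simplified, EasyList-like stand-in; real adblockers use a much richer grammar with options and exceptions.

```python
import re

# Toy filter list (illustrative only; not real EasyList syntax).
blacklist = [r"doubleclick\.net", r"/adsense/", r"banner\d*\.(gif|jpg)"]
whitelist = [r"example\.org/sponsored-research"]

def should_block(url: str) -> bool:
    """Block a request that matches a blacklist rule and no whitelist rule."""
    if any(re.search(rule, url) for rule in whitelist):
        return False
    return any(re.search(rule, url) for rule in blacklist)

for url in ["https://ad.doubleclick.net/banner3.gif",
            "https://example.org/sponsored-research/paper.pdf",
            "https://news.example.com/article.html"]:
    print(url, "->", "BLOCK" if should_block(url) else "allow")
```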

ADBlocker-Blocker

Publishers and their representative trade bodies, on the other hand, argue that internet ads provide revenue to website owners, which enables them to create or otherwise purchase content for their sites. Publishers claim that the prevalent use of ad-blocking software and devices could adversely affect website-owner revenue and thus, in turn, lower the availability of free content on websites. So it is no wonder that publishers have begun to block or evict users found to be using #ADBlockers. (A page from my personal experience: I do not remember a time when I did not use an adblocker; before Mozilla, I used MyIE (Maxthon), which had configurable filters.) But of late, publishers have become more aggressive and have rolled out a slew of their own warriors, AKA the ADBlocker-Blocker: nifty little utilities you can embed in your site so that traffic from adblock-enabled users is blocked until they disable the blocker or whitelist you. Some majors like The Economist and Wired have announced a novel approach: either disable your adblocker on their site, or pay a small fee to see the site without the clutter of advertisements. For the sites that do not offer this option, or if you wish to simply override them, read on.

Practice & Implementation

So, enter Anti-AdBlocker Killer — https://github.com/reek/anti-adblock-killer
It's simple, really: it tricks sites that use #anti-adblocker technology into thinking you aren't using an adblocker. The anti-adblock killer lets you keep your adblocker on when you visit a page that would usually disable it, by using a JavaScript file and a filter list. This means you can work around bans on adblockers from common news companies, like Forbes, which lock you out when you're detected.
It works against a number of different technologies used to detect #adblock users, and is likely to be a part of the next #armsrace as publishers work out how to block the #adblockers using #adblocker-blockers. If you're still reading, I will conclude my narration here and give step-by-step instructions on how to install and activate it.

Step-by-step Instructions to Activate Anti-Adblock Killer

  1. Step 1 – Get a Script Manager (pick the one matching your browser):
    1.  Firefox: Greasemonkey or Scriptish
    2.  Chrome: Tampermonkey or Native
    3.  Opera: Tampermonkey or Violentmonkey
    4.  Safari: Tampermonkey or NinjaKit
    5.  Microsoft Edge: Tampermonkey
        • (After installation, depending on your browser, a restart may be required for it to take effect.)
  2. Step 2 – Subscribe to a FilterList
    1. Subscribe from github.com (I prefer this)
    2. Subscribe from reeksite.com 
      • At this point, if you chose the GitHub list, you’ll be prompted with a list of extensions, and you can choose to manually install AAKiller (a representative screenshot is shown below).
  3. Step 3 – Get User Scripts
    1. Install from greasyfork.org
    2. Install from openuserjs.org
    3. Install from github.com
    4. Install from reeksite.com

Once this is done, you’re on your way to enjoying browsing free of anti-adblock pop-ups.

More data to support Planet Nine Hypothesis


Last year, the existence of an unknown planet in our Solar System was announced. However, this hypothesis was subsequently called into question, as biases in the observational data were detected. Now, Spanish astronomers have used a novel technique to analyse the orbits of the so-called extreme trans-Neptunian objects (ETNOs) and, once again, they report that there is something perturbing them: a planet located at a distance of between 300 and 400 times the Earth-Sun distance.
Like the comets that interact with Jupiter.
At the beginning of 2016, researchers from the California Institute of Technology (Caltech, USA) announced that they had evidence of the existence of this object, located at an average distance of 700 AU and with a mass 10 times that of the Earth. Their calculations were motivated by the peculiar distribution of the orbits found for the trans-Neptunian objects (TNOs) in the Kuiper belt, which suggested the presence of a Planet Nine within the Solar System.
Using calculations and data mining, the Spanish astronomers have found that the nodes of the 28 ETNOs analysed (and the 24 extreme Centaurs with average distances from the Sun of more than 150 AU) are clustered in certain ranges of distances from the Sun; furthermore, they have found a correlation, where none should exist, between the positions of the nodes and the inclination, one of the parameters which defines the orientation of the orbits of these icy objects in space.
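
For a flavour of the kind of statistical test involved, the sketch below checks for a correlation between two orbital parameters. The numbers are randomly generated stand-ins, not the real ETNO data.

```python
# Illustrative only: random stand-in values, NOT the real ETNO orbital data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
nodal_distance_au = rng.uniform(30, 80, size=28)  # hypothetical nodal distances
inclination_deg = rng.uniform(0, 40, size=28)     # hypothetical inclinations

r, p = stats.pearsonr(nodal_distance_au, inclination_deg)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
# For independent orbital parameters one expects r near 0; a significant
# correlation, as the study reports, hints at an external perturber.
```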
“Assuming that the ETNOs are dynamically similar to the comets that interact with Jupiter, we interpret these results as signs of the presence of a planet that is actively interacting with them in a range of distances from 300 to 400 AU,” says De la Fuente Marcos, who emphasizes: “We believe that what we are seeing here cannot be attributed to the presence of observational bias”.
Is there also a Planet Ten?
De la Fuente Marcos explains that the hypothetical Planet Nine suggested in this study has nothing to do with another possible planet or planetoid situated much closer to us, and hinted at by other recent findings.
Also applying data mining to the orbits of the TNOs of the Kuiper Belt, astronomers Kathryn Volk and Renu Malhotra from the University of Arizona (USA) have found that the plane on which these objects orbit the Sun is slightly warped, a fact that could be explained if there is a perturber of the size of Mars at 60 AU from the Sun.
“Given the current definition of planet, this other mysterious object may not be a true planet, even if it has a size similar to that of the Earth, as it could be surrounded by huge asteroids or dwarf planets,” explains the Spanish astronomer.
“In any case, we are convinced that Volk and Malhotra’s work has found solid evidence of the presence of a massive body beyond the so-called Kuiper Cliff, the furthest point of the trans-Neptunian belt, at some 50 AU from the Sun, and we hope to be able to present soon a new work which also supports its existence”.

India Launches PSLV-C38 with 31 Satellites

The Indian Space Research Organization (ISRO) successfully launched its PSLV-C38 rocket on Friday, on a mission to send 31 satellites, including India’s Cartosat-2 and NIUSAT satellites along with 29 foreign nano satellites, into orbit, ISRO said in a press release.
“India’s Polar Satellite Launch Vehicle, in its 40th flight (PSLV-C38), launched the 712 kg [0.7 tonnes] Cartosat-2 series satellite for earth observation and 30 co-passenger satellites together weighing about 243 kg [0.2 tonnes] at lift-off into a 505 km [313 mile] polar Sun Synchronous Orbit (SSO),” ISRO said.
According to ISRO, the co-passenger satellites comprise 29 nano satellites from 14 countries, namely Austria, Belgium, Chile, the Czech Republic, Finland, France, Germany, Italy, Japan, Latvia, Lithuania, Slovakia, the United Kingdom and the United States, as well as one nano satellite from India.

Google launches new TensorFlow Object Detection API


Google has finally launched its new TensorFlow Object Detection API. This release gives researchers and developers access to the same technology Google uses for its own operations, like image search and street-number identification in Street View.
The company had been planning to release this feature for quite some time, and it is finally available to the open source community. The system Google has released won Microsoft’s Common Objects in Context (COCO) object detection challenge last year, beating the 23 other participating teams.
According to the company, it released this new system to bring the general public closer to AI, and to invite developers and AI scientists to collaborate with the company and build new and innovative things using Google’s technology.
Google is not the first company to offer AI technology to the general public and developers. Microsoft, Facebook and Amazon have also opened up access to their respective AI technologies. Moreover, Apple, at its recent WWDC, rolled out an AI framework named Core ML for its developers.
One of the main benefits of this release is that the technology can run on mobile phones through its object detection system: the lightweight configurations are based on the MobileNets family of image recognition models, which can handle tasks like object detection, facial recognition and landmark recognition.
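
For those who want to try it, the sketch below shows roughly what inference looked like with the TensorFlow 1.x API at launch. The frozen-graph path is a placeholder for one of the pretrained checkpoints from the project's model zoo, and the image path is hypothetical.

```python
# Sketch: run a pretrained detection model (TensorFlow 1.x era API).
import numpy as np
import tensorflow as tf
from PIL import Image

GRAPH_PB = "ssd_mobilenet_v1_coco/frozen_inference_graph.pb"  # placeholder path

# Load the frozen graph exported by the Object Detection API.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(GRAPH_PB, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# The exported graphs take a uint8 batch of shape [1, height, width, 3].
image = np.expand_dims(np.array(Image.open("street.jpg")), axis=0)

with tf.Session(graph=graph) as sess:
    boxes, scores, classes, num = sess.run(
        [graph.get_tensor_by_name(name + ":0")
         for name in ("detection_boxes", "detection_scores",
                      "detection_classes", "num_detections")],
        feed_dict={graph.get_tensor_by_name("image_tensor:0"): image})

for box, score, cls in zip(boxes[0], scores[0], classes[0]):
    if score > 0.5:  # keep confident detections only
        print(f"class {int(cls)} at {np.round(box, 2)} (score {score:.2f})")
```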

Internet fast lane for first responders

Time is the enemy of first responders, and communication delays can cost lives. Unfortunately, during natural disasters and other crises, communications, both cellular and internet, are often overloaded by friends and family reaching out to those in affected areas. That extra network traffic has in the past impacted the ability of first responders to send and receive data.
Researchers at the Rochester Institute of Technology are testing a new protocol — developed with funding from the National Science Foundation and U.S. Ignite — that will allow first responders and emergency managers to send data-intensive communications over the internet regardless of the amount of other traffic eating up the available bandwidth.
The protocol, dubbed MultiNode Label Routing (MNLR), runs below existing internet protocols, allowing other traffic to run simultaneously. Rather than using traditional routing protocols, it discovers routes based on the routers’ labels, which carry structural and connectivity information about the network’s routers.
It also features an immediate failover mechanism so that if a link or node fails, it uses an alternate path as soon as the failure is detected, which also speeds transmission.
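
The article doesn't spell out MNLR's internals, but the idea of label-based forwarding with a precomputed backup path can be sketched as follows; the labels, topology and policy here are entirely made up.

```python
# Made-up illustration of label-based forwarding with immediate failover.
# MNLR's real design derives labels from routers' structural position.
PRIMARY = {"1.1": "1.1.2", "1.1.2": "1.1.2.3"}  # dest label -> primary next hop
BACKUP  = {"1.1": "1.2",   "1.1.2": "1.1.4"}    # dest label -> backup next hop
link_up = {"1.1.2": True, "1.1.2.3": True, "1.2": True, "1.1.4": True}

def next_hop(label: str) -> str:
    """Forward on the primary path; fail over the moment the link is down."""
    primary = PRIMARY[label]
    if link_up[primary]:
        return primary
    return BACKUP[label]   # immediate failover, no route re-convergence wait

print(next_hop("1.1"))     # -> 1.1.2 (primary path)
link_up["1.1.2"] = False   # simulate a link failure
print(next_hop("1.1"))     # -> 1.2 (backup, used as soon as failure is seen)
```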
According to Nirmala Shenoy, professor in RIT’s Information Sciences and Technologies Department and principal investigator of the project, the protocol is designed to give transmissions over MNLR priority over other traffic so that critical data isn’t lost or delayed.  “Because MNLR literally bypasses the internet protocol and other routing protocols, it can put other traffic on the Internet to a lower priority,” she said.
According to the RIT team, the protocol’s ability to prioritize transmissions solves problems encountered during recent major hurricanes when first-responders and emergency managers had difficulty transmitting large but critical data files such as LIDAR maps and video chats.
“Sharing data on the internet during an emergency is like trying to drive a jet down the street at rush hour,” Jennifer Schneider, RIT professor and co-principal investigator, told RIT news.  “A lot of the critical information is too big and data heavy for the existing internet pipeline.”
Another communications challenge during disasters is damage to network routers.  Accordingly, the team built capabilities into the MNLR protocol to get around routing limitations of the major existing internet protocols.  Specifically, the team included a faster failover mechanism so that if a router link fails, the transmission will be automatically rerouted more quickly than existing protocols support.  In testing the protocol during a link failure, the team found that MNLR recovered in less than 30 seconds, while Border Gateway Protocol required about 150 seconds.
While the team is continuing to refine the protocol, Shenoy acknowledged that deployment of MNLR will face two hurdles.  First, the protocol has to be broadly adopted into current routers.  “That depends on equipment vendors,” she said.
Second, heavy use of MNLR during an emergency will impact other internet traffic.  While most service providers and even customers are likely to be OK with that, service agreements will likely need to be modified to account for variable service during emergencies.

Amazon Linux now available for On Premise Development/Testing

Finally (well, since November 2016), Amazon Web Services is letting customers download its own flavour of Linux.
Cloud instances have often been suggested as the ideal test and dev environment, on cost-avoidance grounds. AWS says it has made its Linux available after customer requests to do more development on-premises. Those requests don’t represent a bursting of the cloud bubble, but it’s nonetheless notable that developers feel the need to do some testing without paying for it by the hour.
More often than not, we feel the need to deploy a local development or testing instance of our app or product. We have all gone through the same routine:

  1. Developer Testing & Code Reviews
  2. Automated Test with CI
  3. Meticulous testing and validation by QA
  4. Deploy in Staging

And all hell breaks loose on you!!!

It could be anything, ranging from a simple charset/locale mismatch to the wrong version of the JVM or of a library. Sometimes it could be something more sinister, like the AWS ALB/ELB handling the request in a way other than the one your app server intended!
So, many DevOps, SCM and product owners have prayed at the almighty altar of AWS. And their prayers have been answered.
The cloud giant’s chief evangelist Jeff Barr made the announcement in this blog post.
The company has released its Linux container image to assist those planning a move into its cloud, so they can test their software and workloads on-premises. Previously, the image was only accessible in the cloud, for customers running virtual machine instances on AWS.
The image is available from the EC2 Container Registry (read Pulling an Image to learn how to access it). It is built from the same source code and packages as the AMI and will give you a smooth path to container adoption. You can use it as-is or as the basis for your own images.
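
As a quick smoke test, here is a sketch using the Docker SDK for Python (pip install docker). The ECR image path is the one given in the launch announcement and may have changed since, and authentication to ECR is assumed to have been done beforehand.

```python
# Pull the Amazon Linux container image and confirm which release we got.
import docker

# Registry path from the launch announcement; may have changed since.
IMAGE = "137112412989.dkr.ecr.us-east-1.amazonaws.com/amazonlinux:latest"

client = docker.from_env()          # talks to the local Docker daemon
client.images.pull(IMAGE)

# Run a throwaway container and print the release string.
output = client.containers.run(IMAGE, "cat /etc/system-release", remove=True)
print(output.decode().strip())      # e.g. "Amazon Linux AMI release 2016.09"
```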
And here you go!!

Image Courtesy: Amazon Inc. (c)

 

Ubuntu Retires Unity, Desktop Switching Back To GNOME

Approximately six years ago, Canonical made #Unity the default shell in Ubuntu. This was acclaimed as a step toward bringing Ubuntu to hitherto unavailable devices like tablets and mobiles. I personally was not amused. The Unity shell had been available in one form or the other for some time before that, namely in the Netbook Remix of Ubuntu.
Last week, Mark Shuttleworth, founder of Ubuntu and Canonical, confirmed in a post on the company’s official blog that the company is giving up on Unity and that the default Ubuntu desktop will be shifted back to GNOME for Ubuntu 18.04 LTS.
Shuttleworth reiterated the company’s commitment to the Ubuntu desktop that millions of users across the globe rely on. He says that Canonical will continue to produce this open source desktop, maintain existing LTS releases, work with commercial partners to distribute Ubuntu, and provide support to corporate customers. Nothing is changing on that front.
He points out that the community viewed the Unity effort as fragmentation rather than innovation, even though the aim was to deliver free software as an alternative to the closed options currently available to device manufacturers.
It is out of respect for the market’s wishes (or mounting pressure from the community) that Canonical has decided to shelve this project and shift the desktop back to #GNOME starting next year.
 

Bezos sells $1 bn in Amazon stock yearly to pay for Blue Origin


Yesterday, billionaire entrepreneur Jeff Bezos introduced the Blue Origin capsule to the press corps.
Speaking Wednesday at the 33rd Space Symposium in Colorado Springs, Colorado, Bezos vowed to lower the cost of space travel and start taking customers to space by next year. Jeff Bezos said he is selling $1 billion in stock of his retail giant Amazon each year to finance his rocket company, Blue Origin, which aims to carry tourists to space by 2018.
The entrepreneur did not say how much a ticket would cost, as he showed off the New Shepard rocket and a mock-up of the large-windowed capsule that tourists will one day ride to suborbital space — just past the Karman Line some 62 miles (100 kilometers) above Earth — and back.
Bezos did say that the next-generation New Glenn rocket, which would be powerful enough to reach orbit and is expected to start flying satellites by 2020, is expected to cost $2.5 billion to develop.
“My business model right now for Blue Origin is that I sell about $1 billion a year of Amazon stock and I use it to invest in Blue Origin,” he said.
“It’s very important that Blue Origin stand on its own feet and be a profitable, sustainable enterprise. That’s how real progress gets made.”
Bezos, a lifelong space enthusiast, founded Blue Origin in 2000.

Bluetooth 5!

The Bluetooth Special Interest Group (SIG) gave the green light to Bluetooth 5 this week, a new spec that promises some pretty radical performance enhancements over its predecessor, according to the organization.
The latest version of the ubiquitous wireless technology is said to offer twice the speed, four times the range and eight times the capacity for broadcast messages (in Bluetooth LE terms: a new 2 Mbps data rate and extended advertising payloads of up to 255 bytes, up from 31). All of those bumps are firmly targeted at Bluetooth’s increasing importance as a standard for the connected home. The update also includes some fixes designed to limit its interference with other wireless technologies.
Audio looks to be less of a focus this time out, which is a bit surprising, perhaps, given that smartphone manufacturers are rapidly pushing wireless headphone adoption by accelerating the death of the standard jack on devices like the iPhone 7. This spec, however, is all about the internet of things. According to the official release, “Bluetooth continues to embrace technological advancements and push the unlimited potential of the IoT.”

This week’s adoption means we can expect to start seeing the first Bluetooth 5 devices within the next two to six months, according to the organization.

Open Source vs. Open Governance: The State and Future of the Open Source Movement

Last week, DataStax announced that it was jettisoning its role in maintaining the Planet Cassandra community resource site, even as the project lead, Jonathan Ellis, made it known that DataStax would be doubling down on its commercial product, rather than Cassandra. Though the DataStax team put a brave face on the changes, the real question is why DataStax had to change at all.
Similarly, when Sun and then Oracle bought MySQL AB, the company behind the original development, the governance of MySQL’s open source development gradually closed. Now, only Oracle writes updates, patches and features. Contributions from other sources, whether individuals or other companies, are ignored.
These are two opposite extremes in the open source movement. When an open source project reaches a critical threshold and following, it grows bigger than its chief contributor, be that a company or a group of people. It takes shape in such a way that even if that contributor can no longer commit resources, the community will take care of everything: features, development, support, documentation and everything in between. This is the real essence of open source development.
However, an initial sponsor is absolutely necessary for a project to thrive in its infancy.
MySQL is still open source, but it has a closed governance. 
In the case of MySQL, the source code was forked by the community, and the MariaDB project started from there. Nowadays, when somebody says they are “using MySQL”, they are in fact probably using MariaDB, which has evolved from where MySQL stopped in time.
Take a look at the GitHub page of MySQL for reference.

Figure: The MySQL repository with its core contributors. A project which powers 1 in 3 websites and apps has just 51 developers!!!
Figure: MySQL developers commit summary. All core committers are from Oracle!
Cassandra is still open source, but now it has open governance.
The Cassandra question is ultimately about control. As the ASF board noted in the minutes from its meeting with DataStax representatives, “The Board expressed continuing concern that the PMC was not acting independently and that one company had undue influence over the project.” Given that DataStax has been Cassandra’s primary development engine since the day it spun out of Facebook, this “undue influence” is hardly new.
And, according to some closest to Cassandra, like former Cassandra MVP Kelly Sommers, that “undue influence” has borne exceptional fruits. Sommers clearly feels this way, insisting that the ASF “is really out of line in their actions with Cassandra,” ultimately concluding that the ASF might be hostile to the very people most responsible for a project’s success.
In her view, the ASF’s search for diversity in the Cassandra project should have started with expanding its existing leadership, rather than cutting it out: “The ASF forced DataStax to reduce their role in Cassandra rather than forming a long-term strategy to grow diversity around theirs,” Sommers said.
Though Sommers doesn’t directly comment on the trademark issues, she didn’t pull any punches in her disdain for project process over code results: “Politics is off the rails when focus is lost on success of the thing it runs and all that matters is process. This is how I feel ASF operates.”
So, for companies hoping to monetise open source, the Cassandra blow-up is a not-so-subtle reminder that community can be inimical to commercial interests, however much it can fuel adoption. It may also be a signal to the ASF that less corporate influence on projects could yield less code.
Now, back to our agenda.

Open source vs. open governance

Open source software’s momentum serves as a powerful insurance policy for the investment of time and resources an individual or enterprise user will put into it. This is the true benefit behind Linux as an operating system, Samba as a file server, Apache HTTPD as a web server, Hadoop, Docker, MongoDB, PHP, Python, JQuery, Bootstrap and other hyper-essential open source projects, each on its own level of the stack. Open source momentum is the safe antidote to technology lock-in. Having learned that lesson over the last decade, enterprises are now looking for the new functionalities that are gaining momentum: cloud management software, big data, analytics, integration middleware and application frameworks.
In the open domain, the only two non-functional things that matter in the long term are whether the software is open source and whether it has attained momentum in the community and industry. Neither is related to how the software is being written, but that is exactly what open governance is concerned with: the how.

The value of momentum

Open governance alone does not guarantee that the software will be good, popular or useful (though formal open governance only happens on projects that have already captured some attention of IT industry leaders). A few examples of open source projects that have formal open governance are CloudFoundry, OpenStack, JQuery and all the projects under the Apache Software Foundation umbrella.
For users, the indirect benefit of open governance relates only to how quickly the open source project reaches momentum and broad popularity.
In conclusion, it is a very delicate act of balancing open source development and open governance. Oracle failed at fostering diversity, thereby spawning MariaDB, while the ASF ejected DataStax to avoid a repeat of the former!!!
 
