Author: Ramkumar Sundarakalatharan

Engineering Leadership in Start-Ups: Engineering Manager, Director, VP of Engineering.


This post is partly the result of my discussions with our People practice leader and talent acquisition executive. ITILITE is at a phase of growth where we are looking for more engineering & product management bandwidth, and I had to think hard while writing the various job descriptions. So, I have tried to generalise them using my experiences from the last 2-3 stints. In case you're interested in exploring an Engineering Management role with ITILITE, please get in touch with me or write to careers{at}itilite.com

Engineering Leadership

Apps are becoming increasingly omnipresent, and in most cases there is a startup behind them. Engineers make up as much as 70% of a tech startup's workforce, so there is an increasing need for managers who look after those developers. As a result, the number of engineering managers has risen in recent years. Engineering managers are responsible for the delivery teams that develop these "apps". The following is a very generalised version of what you could do in these roles and a possible career progression.

Engineer to Tech Lead/Lead Developer

This is the first step in your journey from an Individual Contributor (IC) to a management role. It could be a mix of people management, delivery management, process management, etc., depending on the context of your organisation. In most organisations, it is a "technical mentorship" role with some aspects of people management, quality and delivery ownership.

Most Tech Leads are natural technical leaders. They are great engineers in their own right, they are well respected by the engineers around them, they work reasonably well with the team, they understand how the product/module is designed, built and shipped, they have a decent sense for making the right kinds of product tradeoffs, and they are willing to do just enough project management and people development to keep the team/project humming along.

In this role,

  • Most TLs would retain some independent deliverables in addition to anchoring and owning the deliveries of their team.
  • Most of the team still works on the same module/feature or sub-system.
  • They do code & design reviews, suggest changes and have the final say for their modules.
  • Together with the Product Managers, they “own” the feature/module.

We at ITILITE call them Engineering Owners, much like Product Owners.

Tech Lead to Engineering Manager

The next step is the Engineering Manager. In this role, you will be "managing" a collection of inter-related modules/projects, and the focus on timely delivery, people management and quality is higher than on technical design & architecture. But you are still very much an engineer, and may occasionally be required to write quick hacks or frameworks for your developers to build on top of.

The main difference is that you will be responsible for the delivery of multiple projects in a related area. You will be expected to optimise the resources (devs, testers, etc.) available to you to maximise the output of your group across multiple projects/modules.

In this role, you’d be

  • Expected to actively engage with the Product Management teams to define what needs to be built
  • Defining how you will measure the outcomes of what your team is building and quantify the outcomes with metrics
  • Ensuring quality, getting stakeholder alignment and signoffs
  • Macromanaging the overall deliverables of your group

The Pivot – Tech, Product, Solution Architect

The next step in your career gives you two options: one with people management and P&L accountability, and the other a purely technical role. If you're planning for a pure-play technical role, some organisations have Staff Engineer, Principal Engineer, etc. In essence, they are mostly a combination of Tech Lead + Architect type roles. Depending on your seniority/tenure and organisational context, you may be reporting to an Engineering/Delivery Manager, a Director/VP or the CTO. In this role,

  • You will work closely with Engineering Managers, Quality Assurance leads/managers and Product Owners to design the system architecture and define the performance baselines
  • You will work with Tech Leads and senior developers to drive performance, redundancy and scalability, among other things
  • You will be called into discussions/decisions when the team can't reach consensus on engineering choices

Engineering Manager to Director of Engineering

A Director of Engineering role is completely different. You now have multiple leads+managers, likely multiple projects within a general focus area of the organisation. This will mean there will be way more individual deliverables and project milestones than you can track in detail on a regular basis. Now you have to manage both people and projects “from the outside” rather than “from the inside”. You’ll likely start appreciating the metrics and dashboards, as they will help you in tracking those multiple projects and deadlines, schedules, overruns etc.

You have to make sure that your managers and leads are managing their resources appropriately and support them in their effort rather than managing individual contributors and projects directly.

Lots of great technical leaders have difficulty making this transition.

While being an engineering lead/manager is certainly managing, it is managing "from within the project", which is much easier than managing "from outside the project"; as a director, you almost always have to manage multiple people and projects from the outside.

Also, as a director, you will be responsible for a number of aspects of the culture, such as:

  • What kind of people you hire, and how you set responsibilities and workload expectations
  • What the team(s) do for fun, and how they interact with other functions
  • What kinds of performance are rewarded/encouraged vs. punished/discouraged

Now, moving to some serious responsibilities: you may be the first major line of responsibility for what to do when things do not work, such as

  • an employee not working out,
  • a project falling behind,
  • a project not meeting its objectives,
  • hiring not happening in time, etc…

While most of these things are the direct responsibility of the engineering manager, the engineering manager is usually not left to face these issues alone; they work on them with the director, and the director is expected to guide the process to the right decision/outcome.

I've seen people who were great technical leaders and good engineering managers who did not enjoy being a director at all (or weren't as good at it), because it is a whole different type of managing, bordering on administration.

Director to Vice-President

The VP of Engineering is the executive responsible for all of engineering: development, quality, DevOps, and partly security and product management as well. While both the engineering manager and the director of engineering report to managers who have likely been engineering managers and directors before, the VP may work for a CEO (in an early-stage startup or a smaller company) who has never been a VP of Engineering before.

A large company may have multiple levels of VPs, but in most cases you work for someone who hasn't been a VP of Engineering and doesn't actually know how to do your job. This means there simply is no first-hand experience from your manager that you can rely on to solve your problems. The first time you step into the role and realise that, it's a sobering thought. You're pretty much on your own to figure things out. Not only are you completely responsible for everything that happens in the engineering organisation, but when things aren't going right, there's pretty much no help from anywhere else. You and your team have to figure it out by yourselves. Many successful VPs eventually come to like this autonomy, but it can be a big adjustment when moving from director to VP.

At the director level, you can always go to your VP for help and consulting on difficult issues and they can and should help you a lot. At the VP level, you may consult with the executive team or the CEO on some big decisions, but you’re more likely talking to them about larger tradeoffs that affect other parts of the company, not how you solve issues within your team.

As a VP, you are primarily responsible for setting up processes and procedures for your organization to make it productive:

  • Team/Project tools such as bug system, project tracking, source code management, versioning, build system, etc.
  • Defining/improving processes to track, monitor and report on projects.
  • Defining processes to deal with projects that run into trouble.
  • Hiring: How do you hire? What kind of people do you hire? How do you maintain the quality of new hires?
  • Firing: When someone isn't working out, how do you fix it: reassignment, training, a performance plan, transfer, firing?
  • Training: How does your team get the training they might need, whether hard skills, soft skills or managerial?
  • Rewards: How do you reward your top individual contributors and your top managers?

You may be part of the Leadership “Council” or participate regularly in business discussions that may or may not concern your department directly. In a startup, you are often “the” technical representative on exec staff. You help craft the strategy of the business. You are relied upon for technical direction of the company (sometimes with the help of a CTO).

As a VP, you are expected to understand many important aspects of other departments: what is important to them and how your department serves, interacts with or depends upon them. Two classic examples might be:

  • Sales depending upon certain product features/capabilities being delivered in a given timeframe to be able to convert a prospect.
  • Customer success depending upon certain product fixes being delivered in a given timeframe.

As a VP, you will participate in the setting of these timeframes and balancing these against all the other things your department is being tasked to do.

As you can see, engineering management/leadership is a very interesting career option. We have multiple openings across Product and Engineering functions at ITILITE. Please see if any of these roles interest you.

Building a Log-Management & Analytics Solution for Your StartUp


Background:

As described in an earlier post, I run Engineering at an early-stage #traveltech #startup called Itilite. One of my responsibilities is to architect, build and manage the cloud infrastructure for the company. Even though I had designed, built and maintained cloud infrastructure in my previous roles, this one was really challenging and interesting, due in part to the fact that the organisation is a high-growth #traveltech startup and hence:

  1. The architecture landscape is still evolving,
  2. Performance criteria for the previous month look like the minimum acceptable criteria for the next
  3. The sheer volume of user-growth, growth of traffic-per-user
  4. Addition of partner inventories which increases the capacity by an order of magnitude

And several others. Somewhere down the line, after the infrastructure, code pipeline and CI are set up, you reach a point where managing (read: trigger intervention, analysis, storage, archival, retention) logs across several infrastructure clusters like development/testing, staging and production becomes a bit of an ordeal.

Enter Log Management & Analytics

I had worked my way up from simple tail/multitail to Graylog aggregation of 18 server logs, including app servers, database servers, API endpoints and everything in between. But, as my honoured former colleague Mr. Naveen Venkat (CPO of Zarget) used to say in my days with Zarget, there are no "go-to" persons in a start-up. You "go figure" it out yourself!

There is definitely no “One size fits all” solution and especially, in a Start-up environment, you are always running behind Features, Timelines or Customers (scope, timeline, or cost in conventional PMI model).

So, after some due research to account for the recent advances in Logstash and Beats, I narrowed down the possible contenders that could power our little log-management system. They are:

  1. ELK Stack  — Build it from scratch, but have flexibility.
  2. Graylog  — Out of the box functionality, but you may have to tune up individual components to suit your needs.
  3. Fluentd — An entirely new log-management paradigm; interesting, and we explored it a bit.

(I did not consider anything exotic, or anything that would involve us paying (in future) more than what we pay for it in the first year. So, some great tools like Splunk, Nagios, LogPacker and LogRhythm were not considered.)

Evaluation Process:

I wrote an Ansible script to create a replica environment and pull in the necessary configurations, and used a previously written load-test job to simulate a typical work hour. This configuration was used for each of the frameworks/tools considered.
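For illustration, the driver for such a run could be as simple as the following (a sketch only; the inventory and playbook names are placeholders, not the actual files we used):

[bash]# Provision a replica environment and pull in the current configurations
ansible-playbook -i inventories/replica provision-replica.yml

# Replay the previously written load-test job to simulate a typical work hour
ansible-playbook -i inventories/replica run-load-test.yml --extra-vars "duration_seconds=3600"[/bash]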

I started experimenting with Graylog, due to familiarity with the tool, and configured it in the way I felt was most appropriate at that point in time.

Slight setback:

However, the collector I had used (Sidecar with Filebeat) had a major problem sending files over 255 KB when the interval was less than 5 seconds. The packets that were to be sent to Elasticsearch never made it, and the pile-up caused a major issue for application stability.

One of the main use-cases for us is to ingest XML/JSON data from multiple sources. (We run a polynomial regression across multiple sources and use the nth derivatives for further business operations.)

When the daily logs you need to export run upwards of 5 GB for one app (JSON logs), then add multiple APIs and some microservices' application logs, web servers, load balancers, CI (Jenkins), the database query log, the bin-log, Redis and … yes, you get the point?

Upon further investigation, the Sidecar collector was actually not the culprit. Our architecture had accounted for several things, but by design we used to hit momentary peaks in CPU utilisation for the "merges". And all of these were "NICE" loads! (in our defence)

So, once the CPU hit the 100% mark, Sidecar started behaving very differently. But we ultimately fixed it with a patched version of Sidecar and by shifting to NXLog.

The experiment with ELK was a different beast in itself, as provisioning and configuring took a lot more time than I was comfortable with. So I switched to the AWS "packaged service": we deployed the ES domain in AWS, fired up a couple of Kibana and Logstash instances and connected them (after what appeared to be forever), and it worked like a charm. I was able to get all the information required in Kibana. One downside is that you need to plan the Elasticsearch indices according to how your log sources will grow; for us, that was impractical.
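For reference, spinning up the managed domain boils down to a single AWS CLI call; a sketch (the domain name, version and instance sizing here are illustrative, not our production values):

[shell]aws es create-elasticsearch-domain \
  --domain-name log-eval \
  --elasticsearch-version 6.2 \
  --elasticsearch-cluster-config InstanceType=m4.large.elasticsearch,InstanceCount=2 \
  --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=100[/shell]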

Fluentd was an excellent platform for normalising your logs, but then it also depended on Kibana/ES for the ultimate analysis frontend.

So, finally we settled down to good old Graylog.

Advantages of Graylog

 The tool perfectly fit into our workflow and evolving environment:

  1. Graylog is free & open-source software, so we won't have to pay now or in the future.
  2. Its trigger actions and notifications are a good complement to Graylog monitoring, just a bit deeper!
  3. With error stack traces received from Graylog, engineers understand the context of any issue in the source code. This saves time and effort in debugging/troubleshooting and bug fixing.
  4. The tool has a powerful search syntax, so it is easy to find exactly what you are looking for, even if you have terabytes of log data. The search queries could be saved. For really complex scenarios, you could write an ElasticSearch query and save it in the dashboard as a function.
  5. Graylog offers an archiving functionality, so everything older than 30 days could be stored on slow storage and re-imported into Graylog when such a need appears (for example, when the dev team need to investigate a certain event from the past).
  6. Java, Python & Ruby applications could be easily connected with Graylog as there is an out-of-box library for this.

#logmanagement #analytics #startup #hustle #opensource #graylog #elk

What is SA-Core-2018-002 and How Acquia Mitigated 500000 attacks on Drupal


Disclaimer: I have been working on WCMS, and specifically with Acquia/Drupal, for more than seven years. In that period, I have developed a love/hate relationship with Drupal: love for Drupal 6 and hate for 7, or something like that. So my views may be slightly less than neutral.
 
On March 28th, the Drupal Security Team released a bug fix for a critical security vulnerability, named SA-CORE-2018-002. Over the following week, various exploits were identified as attackers attempted to compromise unpatched Drupal sites. Hackers continue to try to exploit this vulnerability, and Acquia's own security team has observed more than 100,000 attacks a day.

Timeline of SA-CORE-2018-002

The remote code execution exploit, the so-called SA-CORE-2018-002, was a vulnerability that had been present in various layers of Drupal 7 and 8. And Drupal, being Drupal, has one of the most efficient governance models among open-source projects. This I can say with confidence and pride, as I have had more than a few interactions with the community: notifying issues, committing documentation, taking part in feature roadmap discussions (agreed, some of them get heated!) and submitting patches/fixes. The Drupal community has very high standards, and even if your patch or fix has functionally addressed the underlying issue, it may still be declined. That said, it is also one of the most democratic software communities you can find; still, they insist on following stringent, high community standards for modules and themes.
So, it is no surprise that Drupal today has one of the best responsible disclosure policies.
The Drupal community had previously notified all the developers through official channels and had asked them to prepare for a high-impact patch. Meanwhile, Acquia did the same for its SME and Enterprise clients. Those in the thick of it knew a bit early on about the nature of the exploit and the mitigation strategy.
In the community forums, there were detailed descriptions of how to plan this infrastructure patch: uptime, isolation post-disclosure, patching, updating and redeployment.
Multiple methods to suit different environments, architectures, etc. also began to appear. It was one giant machinery, albeit a self-governing one. I have known large organisations to do a hodge-podge patchwork to contain an underlying vulnerability, leaving a vendetta-driven ex-employee or a determined hacker to expose the inner workings of the exploit; it has resulted in many multi-million-dollar losses. Only after the #Apache project reached a state of maturity did these larger organisations learn the art of disclosure, and how many of them actually practise it is a big question.
Till 28th March 2018, there was no (publicly) known exploit for this RCE in Drupal 7/8.
This all changed after Check Point Research released a detailed step-by-step explanation of the security bug SA-CORE-2018-002 and how it can be exploited. In less than 6 hours after Check Point Research's blog post, Vitalii Rudnykh, a Russian security researcher, shared a proof-of-concept exploit on GitHub.
The article by Check Point Research and Rudnykh's proof-of-concept code have spawned numerous exploits, written in different programming languages such as Ruby, Bash, Python and more. As a result, the number of attacks has grown significantly since then.
The scale and the severity of this attack suggest that if you failed to upgrade your Drupal sites, or your site is not supported by Acquia Cloud or another trusted vendor that provides platform level fixes, the chances of your site being hacked are very high. If you haven’t upgraded your site yet and you are not on a protected platform then assume your site is compromised. Rebuild your host, reinstall Drupal from a backup taken before the vulnerability was announced and upgrade before putting the site back online.
Geographic distribution of SA-CORE Attack Vectors

Solution:

Upgrade to the most recent version of Drupal 7 or 8 core.

  • If you are running 7.x, upgrade to Drupal 7.58. (If you are unable to update immediately, you can attempt to apply this patch to fix the vulnerability until such time as you are able to completely update.)
  • If you are running 8.5.x, upgrade to Drupal 8.5.1. (If you are unable to update immediately, you can attempt to apply this patch to fix the vulnerability until such time as you are able to completely update.)

Drupal 8.3.x and 8.4.x are no longer supported, and the community doesn't normally provide security releases for unsupported minor releases. However, given the potential severity of this issue, the Drupal community chose to provide 8.3.x and 8.4.x releases that include the fix for sites which have not yet had a chance to update to 8.5.0.
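For a typical site, the upgrade itself is a short exercise. A minimal sketch, assuming Drush for Drupal 7 and a Composer-managed Drupal 8 codebase (and a backup taken beforehand):

[shell]# Drupal 7: update core with Drush
drush vset maintenance_mode 1
drush pm-update drupal
drush vset maintenance_mode 0

# Drupal 8: update core with Composer, then run database updates
composer update drupal/core --with-dependencies
drush updatedb && drush cache-rebuild[/shell]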

DevOps Post Series : 2, How to install and configure SSL/TLS Certificate on AWS EC2


Assumption:

It is assumed that you have launched an EC2 instance with a valid key pair, configured the security groups, installed Apache/Nginx and deployed your app.

Background:

Now, it's time to configure your TLS/SSL certificate. Why would you want to configure your own certificate when you can get Amazon to issue a free TLS/SSL certificate? Well, there are more than a few use-cases that we have come across.

  1. First and foremost, AWS Certificate Manager certificates can be installed only on Elastic Load Balancers, Amazon CloudFront distributions, or APIs for Amazon API Gateway (at the time of writing).
  2. You are building a staging/testing server, will test integrations on it, and require SSL/TLS.
  3. You are just starting off and have only one EC2 instance to begin with. (You cannot install an AWS-provisioned certificate on an EC2 instance directly.)
  4. Provisioning a new service, say for data exchange between your customers and their customers/vendors, which will be a very under-utilised service.
  5. Planning an endpoint for SSO/OpenID etc., which you prefer to keep logically separate from your app.abc.com or abc.com.

And there are at least a dozen other use-cases that come to mind, but I am leaving them out for brevity.

Getting Started

Self-Signed Certificates:

First, enable Apache on your EC2 instance and install/enable SSL.
(As usual, I'll try to give the instructions for both RPM- and DEB-based distributions.)
[shell]sudo systemctl is-enabled httpd[/shell]
This should return "enabled"; if not, start and enable it by typing the following, then update the system and install mod_ssl:
[shell]sudo systemctl start httpd && sudo systemctl enable httpd[/shell]
[shell]sudo yum update -y[/shell]
[shell]sudo yum install -y mod_ssl[/shell]
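For Debian/Ubuntu-based distributions, a rough equivalent (assuming the stock apache2 package) would be:

[shell]sudo apt-get update -y
sudo apt-get install -y apache2
sudo a2enmod ssl && sudo a2ensite default-ssl
sudo systemctl restart apache2[/shell]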
Follow the on-screen instructions; you will answer some basic questions like domain name, country, email ID, etc. If you accepted the default locations from the prompts, you will have generated two files in the following locations.
/etc/pki/tls/private/localhost.key – This is an auto-generated 2048-bit RSA private key for your Amazon EC2 host. You can also use this key to generate a certificate signing request (CSR) to submit to a certificate authority (CA).
/etc/pki/tls/certs/localhost.crt – This is a self-signed X.509 certificate for your server host. This certificate is useful only where you can control the “client” environment, like a testing or staging server.
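If your installation did not generate these two files automatically, an equivalent self-signed pair can be created manually. A sketch that prompts for the same details (country, common name, email, etc.) and writes to the default locations mentioned above:

[shell]sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/pki/tls/private/localhost.key \
  -out /etc/pki/tls/certs/localhost.crt[/shell]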
Now, restart Apache:
[shell]sudo systemctl restart httpd[/shell]
And try https://your-aws.public.dns or https://[yourpublicip].
Since you're accessing your site with a self-signed, untrusted host certificate, your browser may display a series of security warnings, but once you have added it to the exception list, you should be good to go. This would be the end of it if you're only looking for a certificate to be used for staging or other controlled environments. If you want a public-facing SSL certificate, so your users/customers can log in and access this new service, read on.

CA-Signed Certificate

Go to /etc/pki/tls/private/ and generate a new private key:
[shell]sudo openssl genrsa -out virtualserver1.key 2048[/shell]
This generates an RSA key equivalent to the default one. You can also generate a 4096-bit key, or not use RSA at all and rely on other mathematical schemes, but those are beyond the scope of this post.
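For example, a 4096-bit RSA key or an elliptic-curve key could be generated as follows (file names are illustrative):

[shell]sudo openssl genrsa -out virtualserver1-4096.key 4096
sudo openssl ecparam -genkey -name prime256v1 -out virtualserver1-ec.key[/shell]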
[bash]sudo chown root:root virtualserver1.key
sudo chmod 600 virtualserver1.key
ls -al virtualserver1.key[/bash]
Now, you can use this key to generate a Certificate Signing Request
[shell]sudo openssl req -new -key virtualserver1.key -out csr.pem[/shell]
When you do this, OpenSSL will open a series of prompts for all sorts of data; the "Common Name" is the one field that is mandatory for you to get a certificate. All other data requested are optional. Once you're done with that, you should have a csr.pem.
Submit the CSR to a CA. This usually consists of opening your CSR file in a text editor and copying the contents into a web form. At this time, you may be asked to supply one or more subject alternate names (SANs) to be placed on the certificate.
Remove or rename the old self-signed host certificate localhost.crt from the /etc/pki/tls/certs directory and place the new CA-signed certificate there (along with any intermediate certificates).
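A minimal sketch of that step, assuming the issued certificate was saved as virtualserver1.crt and that Apache's SSL configuration lives at /etc/httpd/conf.d/ssl.conf (both are assumptions; adjust to your setup):

[shell]sudo mv /etc/pki/tls/certs/localhost.crt /etc/pki/tls/certs/localhost.crt.bak
sudo cp virtualserver1.crt /etc/pki/tls/certs/
# Point /etc/httpd/conf.d/ssl.conf at the new files:
#   SSLCertificateFile    /etc/pki/tls/certs/virtualserver1.crt
#   SSLCertificateKeyFile /etc/pki/tls/private/virtualserver1.key
sudo systemctl restart httpd[/shell]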
Once you've copied the contents of the CSR (csr.pem) into the form and submitted it to your CA, you will receive an email confirming the issue of the certificate. Once that's done, you can check your application over HTTPS; it should now show a "green padlock", meaning it is fully secure.
You can also run a security test on your SSL configuration: just go to SSL Labs and start a test by entering your URL. After about 2-5 minutes, you will receive a rating and details, something similar to the following image.
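You can also verify the served certificate from the command line (a sketch; replace yourdomain.com with your own domain):

[shell]openssl s_client -connect yourdomain.com:443 -servername yourdomain.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates[/shell]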

That’s It! You’re done.

DevOps Post Series : 1, How to install and configure LAMP on AWS EC2


In this #DevOps-centric series of blog posts, I will write about some interesting yet common problems and their solutions, or quick guides and how-tos. This is the result of setting up a new #datacenter for the #startup I am working at.
 
In this post, I will assume that you have already launched an EC2 instance with the operating system of your choice. Generally, Amazon Linux (based on RedHat/CentOS) or Ubuntu is the preferred OS. In case you prefer an exotic flavour of Linux which supports neither rpm/yum (RHEL/CentOS/Fedora/AMI) nor apt (Debian/Ubuntu and derivatives), this article may not be of much use to you.

  1. Connect to your instance – Use the private key you downloaded during the ec2 launch.
    1. If you're on Linux or Mac, use the following, replacing the key name and public DNS with your own: ssh -i "loginserver.pem" root@your-instance-public-dns
    2. If you’ve launched an Amazon Linux, use “ec2-user” instead of “root”
    3. If you’ve launched an Ubuntu Linux, use “ubuntu” instead of “root”
    4. Another important thing is to ensure that the private key has 0400 permissions and is "owned" by the user who will execute the SSH connection.
  2. Update your package manager
    1. Amazon Linux : sudo yum update
    2. Ubuntu Linux: sudo apt-get update
  3. Tools & Utils (Optional/Personal Preference) I normally prefer to have a couple of tools installed in the server for quick-hacks/edits, monitoring etc.
    1. Amazon Linux : sudo yum install -y mc nano tree multitail git lynx
    2. Ubuntu Linux: sudo apt-get install -y mc nano tree multitail git lynx
      1. For details on the above-mentioned tools, refer the bottom of the article.
  4. LAMP Server
    1. Amazon Linux :sudo yum install -y httpd24 php70 mysql56-server php70-mysqlnd mysql56-client
    2. Ubuntu Linux: sudo apt-get install mysql-client-core-5.6 mysql-server-core-5.6 apache2 php libapache2-mod-php php-mcrypt php-mysql
      1. Your operating system will start to download and install the specified software; for MySQL, you will be prompted for a root password. After installation, I strongly recommend you run mysql_secure_installation and follow the on-screen instructions (a scripted sketch of these clean-up steps appears after this list).
      2. Some of the critical things to do are removing the "test" DB and removing access for "root"@"%"; the others are optional.
      3. The optional steps are,
        1. remove the anonymous user accounts.
        2. disable the remote root login.
        3. reload the privilege tables and save your changes.
  5. Configuration and other dependencies
    1. Amazon Linux :
      sudo yum install php70-mbstring.x86_64 php70-zip.x86_64 composer node -y
    2. Ubuntu: replace yum install with apt-get install
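Coming back to the MySQL hardening mentioned in step 4: mysql_secure_installation walks you through those choices interactively, but a scripted sketch of the same clean-up (illustrative only) would look like this:

[bash]mysql -u root -p <<'SQL'
DROP DATABASE IF EXISTS test;          -- remove the "test" DB
DELETE FROM mysql.user WHERE User='';  -- remove anonymous accounts
DELETE FROM mysql.user
  WHERE User='root' AND Host NOT IN ('localhost','127.0.0.1','::1');  -- disable remote root login
FLUSH PRIVILEGES;                      -- reload the privilege tables
SQL[/bash]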

Finally, restart the services and off you go: you have successfully installed a LAMP server on EC2. Now, go to your browser and enter the public DNS of the EC2 instance, and you should see the default Apache page. If you get either a timeout or a not-found error, it may mean you have to configure the security group accordingly: you should "ALLOW" ports 80/443 (HTTP/HTTPS) in the security group.
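As a quick sanity check, something like the following works on a systemd-based Amazon Linux image (a sketch; service and package names differ slightly on Ubuntu, e.g. apache2/mysql instead of httpd/mysqld):

[shell]sudo systemctl restart httpd mysqld
echo '<?php phpinfo(); ?>' | sudo tee /var/www/html/phpinfo.php
curl -sI http://localhost/phpinfo.php    # expect HTTP/1.1 200 OK
# remember to remove phpinfo.php once you are done testing[/shell]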

How to Disable an Adblocker-blocker or Create an Anti-Adblock Killer!


History & Theory:

Digital Advertisement:

I get it. Ads are a necessary evil in the content-delivery game. Hell, I have been on the engineering side of content delivery for 10 years myself. Back in the days of the #dotcom #bubble, we endured banner ads. When #BigBrother, oops, #Google came up, they swept the market clean with their (initially, at least) non-intrusive text ads. And people even appreciated the contextual advertising: just when you were searching for a suspension for your car, you would see 4 different ads for OEM-grade replacement suspensions, grease monkeys to install them, and so on.
Fast forward 10 years and Google is the global powerhouse of advertising. Google knows what your mother's cousin once removed likes, and runs ads tailored to it on no fewer than 50 websites run by Google and countless other affiliates. The convenience transformed itself into a mild hindrance and then a major nuisance in no time. At their core, Google, Microsoft and Yahoo ads were all based on a relevance engine: based on the content currently served by the publisher (the website you're visiting), they search their database for relevant ads, and the one that matches the content and whose target profile matches yours (this is where privacy advocates go crazy) is the ad they serve. In its simplest form, the process looks something like the diagram below.

ADsense process diagram
Contextual Advertisement – Process Flow (ADSense/ADWords)

For the inquisitive lot who want to know the technicalities, it is a lot more complex than this, and it is presented below.
Tentative Process flow of How ADWords and ADSense content advertisement happens.

Enter ADBlockers:

And soon, people found a way to block the ads. As seen above, all of these adverts are programmed to run using great stores of data from the backend: when a user visits a site, a lot happens in the backend, and a script is used to fetch the resultant ad. Technically inclined people started writing custom scripts that would stop this script from rendering the ads. In no time, all the bells and whistles like #blacklist, #whitelist and #regular-expression support came in. Once modern browsers shipped with content filtering built in, it was easy to supplement them with custom lists and scripts to block these ads. Ad blockers for every device, OS and browser became available, and public knowledge of them exploded their use in the 2013-2015 period (see graph below). So, all seems rosy from here.

ADBlocker-Blocker

Publishers and their representative trade bodies, on the other hand, argue that Internet ads provide revenue to website owners, which enables them to create or otherwise purchase content for their websites. Publishers claim that the prevalent use of ad-blocking software and devices could adversely affect website owner revenue and thus, in turn, lower the availability of free content on websites. So, it is no wonder that publishers have begun to block or evict users found to be using #ADBlockers. (A page from my personal experience: I do not remember a time when I did not use an ad blocker; before Mozilla, I used MyIE (Maxthon), which had configurable filters.) But of late, publishers have become more aggressive and have rolled out a slew of their own warriors, a.k.a. adblocker-blockers: nifty little utilities you can embed in your site so that traffic from ad-block-enabled users is blocked until they disable it or whitelist you. Some majors like The Economist, Wired and others have announced a novel approach: either you disable your ad blocker on their site, or you pay a small fee to see the site without the clutter of advertisements. For the sites that do not offer this feature, or if you wish to simply override them, read on.

Practice & Implementation

So, enter Anti-AdBlocker Killer — https://github.com/reek/anti-adblock-killer
It's simple, really: it tricks sites that use #anti-adblocker technology into thinking you aren't using an adblocker. Using a JavaScript file and a filter list, it lets you keep your adblocker on when you visit a page that would usually disable it. This means you can work around bans on adblockers from common news companies, like Forbes, which lock you out when you're detected.
It works against a number of different technologies used to detect #adblock users, and is likely to be part of the next #armsrace as publishers work out how to block the #adblockers using #adblocker-blockers. If you're still reading, I will conclude my narration and give step-by-step instructions on how to install and activate it.

Step-by-step Instruction to Activate Anti-Adblock Killer

  1. Step 1 – Get a Script Manager (per your browser):
    1.  Firefox: Greasemonkey or Scriptish
    2.  Chrome: Tampermonkey or Native
    3.  Opera: Tampermonkey or Violentmonkey
    4.  Safari: Tampermonkey or NinjaKit
    5.  Microsoft Edge: Tampermonkey
        • (After installation, depending on your browser, a browser restart may be required for it to take effect)
  2. Step 2 – Subscribe to a FilterList
    1. Subscribe from github.com (I prefer this)
    2. Subscribe from reeksite.com 
      • At this point, if you chose the GitHub list, you'll be prompted with a list of extensions and you can choose to manually install AAKiller (a representative screenshot is shown below).
  3. Step 3 – Get User Scripts
    1. Install from greasyfork.org
    2. Install from openuserjs.org
    3. Install from github.com
    4. Install from reeksite.com

Once this is done, you're on your way to enjoying browsing free of ad-blocker pop-ups.

More data to support Planet Nine Hypothesis



Last year, the existence of an unknown planet in our Solar System was announced. However, this hypothesis was subsequently called into question, as biases in the observational data were detected. Now, Spanish astronomers have used a novel technique to analyse the orbits of the so-called extreme trans-Neptunian objects and, once again, they report that there is something perturbing them: a planet located at a distance of between 300 and 400 times the Earth-Sun distance.
Like the comets that interact with Jupiter.
At the beginning of 2016, researchers from the California Institute of Technology (Caltech, USA) announced that they had evidence of the existence of this object, located at an average distance of 700 AU and with a mass 10 times that of the Earth. Their calculations were motivated by the peculiar distribution of the orbits found for the trans-Neptunian objects (TNOs) in the Kuiper belt, which suggested the presence of a Planet Nine within the solar system.
Using calculations and data mining, the Spanish astronomers have found that the nodes of the 28 ETNOs analysed (and the 24 extreme Centaurs with average distances from the Sun of more than 150 AU) are clustered in certain ranges of distances from the Sun; furthermore, they have found a correlation, where none should exist, between the positions of the nodes and the inclination, one of the parameters which defines the orientation of the orbits of these icy objects in space.
“Assuming that the ETNOs are dynamically similar to the comets that interact with Jupiter, we interpret these results as signs of the presence of a planet that is actively interacting with them in a range of distances from 300 to 400 AU,” says De la Fuente Marcos, who emphasizes: “We believe that what we are seeing here cannot be attributed to the presence of observational bias”.
Is there also a Planet Ten?
De la Fuente Marcos explains that the hypothetical Planet Nine suggested in this study has nothing to do with another possible planet or planetoid situated much closer to us, and hinted at by other recent findings.
Also applying data mining to the orbits of the TNOs of the Kuiper Belt, astronomers Kathryn Volk and Renu Malhotra from the University of Arizona (USA) have found that the plane on which these objects orbit the Sun is slightly warped, a fact that could be explained if there is a perturber of the size of Mars at 60 AU from the Sun.
“Given the current definition of planet, this other mysterious object may not be a true planet, even if it has a size similar to that of the Earth, as it could be surrounded by huge asteroids or dwarf planets,” explains the Spanish astronomer.
“In any case, we are convinced that Volk and Malhotra’s work has found solid evidence of the presence of a massive body beyond the so-called Kuiper Cliff, the furthest point of the trans-Neptunian belt, at some 50 AU from the Sun, and we hope to be able to present soon a new work which also supports its existence”.

India Launches PSLV C38 with 30 Satellites


The Indian Space Research Organisation (ISRO) on Friday successfully launched the PSLV-C38 rocket on a mission to send 31 satellites, including India's Cartosat-2 and NIUSAT satellites along with 29 foreign nano satellites, into orbit, ISRO said in a press release.
“India’s Polar Satellite Launch Vehicle, in its 40th flight (PSLV-C38), launched the 712 kg [0.7 tonnes] Cartosat-2 series satellite for earth observation and 30 co-passenger satellites together weighing about 243 kg [0.2 tonnes] at lift-off into a 505 km [313 mile] polar Sun Synchronous Orbit (SSO),” ISRO said.
According to ISRO, the co-passenger satellites comprise 29 nano satellites from 14 countries namely, Austria, Belgium, Chile, the Czech Republic, Finland, France, Germany, Italy, Japan, Latvia, Lithuania, Slovakia, the United Kingdom and the United States as well as one nano satellite from India.

Google launches new TensorFlow Object Detection API


Object Detect API

Google has finally launched its new TensorFlow Object Detection API. This new feature gives researchers and developers access to the same technology Google uses for its own operations, such as image search and street-number identification in Street View.
The company had been planning to release this feature for quite some time, and it is finally available to the open-source community. The system Google has released won Microsoft's Common Objects in Context (COCO) object detection challenge last year, beating the 23 other teams participating in the challenge.
According to the company, it released this new system to bring the general public closer to AI, and also to get developers and AI scientists to collaborate with the company and build new and innovative things using Google's technology.
Google is not the first company offering AI technology to the general public, users and developers. Microsoft, Facebook and Amazon have also given people access to their respective AI technologies. Moreover, Apple at its recent WWDC also rolled out AI technology named Core ML for its users.
One of the main benefits of this release is that users can run the technology on mobile phones through its object-detection system. The system is based on the MobileNets image-recognition models, which can handle tasks like object detection, facial recognition and landmark recognition.

Internet fast lane for first responders


Time is the enemy of first responders, and communication delays can cost lives. Unfortunately, during natural disasters and other crises, communications — both cellular and internet — are often overloaded by friends and family reaching out to those in affected areas. That extra network traffic has, in the past, impacted the ability of first responders to send and receive data.
Researchers at the Rochester Institute of Technology are testing a new protocol — developed with funding from the National Science Foundation and U.S. Ignite — that will allow first responders and emergency managers to send data-intensive communications over the internet regardless of the amount of other traffic eating up the available bandwidth.
The protocol — dubbed MultiNode Label Routing, or MNLR — runs below existing internet protocols, allowing other traffic to run simultaneously. Rather than using traditional transmission protocols, it uncovers routes based on the routers’ labels, which in turn carry the structural and connectivity information among routers.
It also features an immediate failover mechanism so that if a link or node fails, it uses an alternate path as soon as the failure is detected, which also speeds transmission.
According to Nirmala Shenoy, professor in RIT’s Information Sciences and Technologies Department and principal investigator of the project, the protocol is designed to give transmissions over MNLR priority over other traffic so that critical data isn’t lost or delayed.  “Because MNLR literally bypasses the internet protocol and other routing protocols, it can put other traffic on the Internet to a lower priority,” she said.
According to the RIT team, the protocol’s ability to prioritize transmissions solves problems encountered during recent major hurricanes when first-responders and emergency managers had difficulty transmitting large but critical data files such as LIDAR maps and video chats.
“Sharing data on the internet during an emergency is like trying to drive a jet down the street at rush hour,” Jennifer Schneider, RIT professor and co-principal investigator, told RIT news.  “A lot of the critical information is too big and data heavy for the existing internet pipeline.”
Another communications challenge during disasters is damage to network routers.  Accordingly, the team built capabilities into the MNLR protocol to get around routing limitations of the major existing internet protocols.  Specifically, the team included a faster failover mechanism so that if a router link fails, the transmission will be automatically rerouted more quickly than existing protocols support.  In testing the protocol during a link failure, the team found that MNLR recovered in less than 30 seconds, while Border Gateway Protocol required about 150 seconds.
While the team is continuing to refine the protocol, Shenoy acknowledged that deployment of MNLR will face two hurdles.  First, the protocol has to be broadly adopted into current routers.  “That depends on equipment vendors,” she said.
Secondly, heavy use of MNLR during an emergency will impact other internet traffic.  While most service providers and even customers are likely to be OK with that, service agreements will likely need to be modified to account for variable service during emergencies.
