Monday, 28 November 2016

Examples of Priority and Severity

Example of a High Priority and High Severity Bug

Suppose there is an ATM machine; everyone is aware of ATM machines, which are used for banking transactions. Imagine the ATM has a bug: when a user withdraws money from an ATM of the same bank where he holds his account, he is charged Rs 20 per transaction. This is invalid, because bank policy says that no charge applies when withdrawing money from one's own bank's ATM.

This bug is high priority because the bank is charging Rs 20 per transaction at its own ATMs, which contradicts the business logic.

The bug is also high severity and needs to be resolved immediately, because thousands of users withdraw money every hour, so the cost of the defect is high.

Example of a High Priority and Low Severity Bug

Suppose there is an ATM machine that everyone uses for daily transactions.
When you visit an ATM, you may sometimes see advertisements from the bank during festival seasons. Suppose that on 2nd October, on the occasion of Gandhi Jayanti, the bank offers senior citizens an additional 0.5% interest on savings (basic 5% + additional 0.5% = 5.5%), with 3rd October as the last date to invest. This advertisement should run only on or before 3rd October. Now suppose you visit the ATM on the morning of 4th October and you still see the ad, with the advertisement itself displaying 3rd October as the last date to invest.

Here the priority is high, because the advertisement is still running on multiple ATMs after the scheme's closing date. The impact is low, because the date displayed in the advertisement is 3rd October, which makes it clear the scheme is already over; so the severity is low. The advertisement therefore needs to be stopped on priority.

So this is an example of a high priority, low severity bug.


Now for the next example, a Low Priority and Low Severity bug. Here we can talk about a well-known and widely used web portal, Yahoo.com. Everyone knows the Yahoo.com logo. Suppose that while updating the website, the team made a spelling mistake in the content. That's fine; it won't have much impact, and users can still use the website. This does not mean the bug need not be fixed; it needs to be fixed, but as a low priority, low severity bug.

This bug is low priority because it won't have much impact; users can still use the website, and it can be fixed after some time.
This bug is low severity because it won't have much impact; users can still use the website.



Example of a Low Priority and High Severity Bug

This is a good example of a low priority, high severity bug. Let's say there is a banking application that pays interest of Rs 2 for every Rs 1,000 in an account on the last day of the year. That is, on 31.12.YYYY the bank deposits Rs 2 of interest for every Rs 1,000 held in the account. Now the bank finds a bug: instead of Rs 2, the application pays Rs 4 of interest for every Rs 1,000, so the interest paid is doubled.

This bug is high severity: because of the defect the interest paid is doubled, and the bank may have thousands of accounts, so the loss to the bank would be large.

This bug is low priority: interest is deposited only on the last day of the year, so if the bug is found early in the year, say in January, there is plenty of time to fix it.
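The arithmetic is small enough to sketch in code. This is only my own illustration of the defect described above; the function names are hypothetical, and only the Rs 2 per Rs 1,000 rule comes from the example:

def expected_interest(balance_rs: int) -> int:
    """Correct rule from the example: Rs 2 for every Rs 1,000 in the account."""
    return (balance_rs // 1000) * 2

def buggy_interest(balance_rs: int) -> int:
    """The defect: the application pays Rs 4 instead of Rs 2 per Rs 1,000."""
    return (balance_rs // 1000) * 4

# A calculation test like this would catch the doubled interest well
# before 31.12.YYYY, which is why the fix can wait at low priority.
assert expected_interest(5000) == 10
assert buggy_interest(5000) == 20   # double the correct amount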


Agile Methodologies

1) Scrum
2) Lean and Kanban
3) Extreme Programming (XP)
4) Crystal
5) Dynamic Systems Development Method (DSDM)
6) Feature Driven Development (FDD)

    Tuesday, 22 November 2016

    Parenting by Example

Parents cannot do anything directly, but indirectly they can do everything. First, by becoming examples for the children to follow. This is most important. And if we live in the right way, the children automatically lead their lives in the right way. That is the importance of parenthood – that we provide the right environment, the right example.

My Beloved...

My beloved...

The tears that welled up in your eyes
the moment you saw me...

Your eyelids hold them back,
not letting them fall...

With trembling lips
you speak...

You wish me well, saying
that I must be happy...

For your sake, dear,
I will surely live on...

Neither you nor I ever said
we did not want each other, my love...

For your parents' wishes
we sacrificed our love...

We did not sacrifice our parents
for the sake of love...

It is true that I cannot
live without you...

For your happiness
I try to be happy...

But you, my dear,
should not have to try...

You must truly be there
with a smile on your face...

The smile you give me
every time we meet...

If I ever happen
to see you once again...

I want that same smile on your face
for me then, too...

O love of my life.....

    robots.txt file

    ROBOTS FILE:


A robots.txt file is a file at the root of your site that indicates which parts of your site you don't want accessed by search engine crawlers. The file uses the Robots Exclusion Standard, a protocol with a small set of commands that can be used to indicate access to your site by section and by specific kinds of web crawlers (such as mobile crawlers vs desktop crawlers).
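As a rough illustration of the standard in practice, the sketch below uses Python's standard-library urllib.robotparser to apply a small robots.txt; the paths and rules here are made up for the example:

from urllib.robotparser import RobotFileParser

# A made-up robots.txt: block every crawler from /private/, and block
# the desktop Googlebot (and only it) from /experiments/.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /experiments/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("*", "https://www.example.com/private/report.html"))       # False
print(rp.can_fetch("Googlebot", "https://www.example.com/experiments/a"))     # False
print(rp.can_fetch("Googlebot", "https://www.example.com/public/index.html")) # True

Note that when a crawler has its own User-agent group, that group applies instead of the * group; the two groups are not merged.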

    GOOGLE CRAWLING AND INDEXING

    Finding information by crawling

    We use software known as “web crawlers” to discover publicly available webpages. The most well-known crawler is called “Googlebot.” Crawlers look at webpages and follow links on those pages, much like you would if you were browsing content on the web. They go from link to link and bring data about those webpages back to Google’s servers.
The crawl process begins with a list of web addresses from past crawls and sitemaps provided by website owners. As our crawlers visit these websites, they look for links to other pages to visit. The software pays special attention to new sites, changes to existing sites, and dead links.
Computer programs determine which sites to crawl, how often, and how many pages to fetch from each site. Google doesn't accept payment to crawl a site more frequently for our web search results. We care more about having the best possible results, because in the long run that's what's best for users and, therefore, our business.

    Organizing information by indexing

    The web is like an ever-growing public library with billions of books and no central filing system. Google essentially gathers the pages during the crawl process and then creates an index, so we know exactly how to look things up. Much like the index in the back of a book, the Google index includes information about words and their locations. When you search, at the most basic level, our algorithms look up your search terms in the index to find the appropriate pages.
    The search process gets much more complex from there. When you search for “dogs” you don’t want a page with the word “dogs” on it hundreds of times. You probably want pictures, videos or a list of breeds. Google’s indexing systems note many different aspects of pages, such as when they were published, whether they contain pictures and videos, and much more. With the Knowledge Graph, we’re continuing to go beyond keyword matching to better understand the people, places and things you care about.
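To make the "words and their locations" idea concrete, here is a toy inverted index in Python. It is only a sketch of the general technique, not Google's actual data structures, and the page contents are made up:

from collections import defaultdict

# Toy documents standing in for crawled pages.
pages = {
    "page1.html": "dogs are friendly dogs",
    "page2.html": "cats and dogs",
}

# Inverted index: word -> {page -> [positions of the word in that page]}
index = defaultdict(lambda: defaultdict(list))
for url, text in pages.items():
    for position, word in enumerate(text.split()):
        index[word][url].append(position)

# A basic lookup: which pages mention "dogs", and where?
print(dict(index["dogs"]))
# {'page1.html': [0, 3], 'page2.html': [2]}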

    Google bot

    Sitemap

A sitemap is a file where you can list the web pages of your site to tell Google and other search engines about the organization of your site's content. Search engine web crawlers like Googlebot read this file to crawl your site more intelligently.
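For reference, a minimal sitemap in the standard sitemap protocol (sitemaps.org) format looks roughly like this; the URL and date are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2016-11-22</lastmod>
  </url>
</urlset>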

Googlebot (user agent):

    Googlebot is Google's web crawling bot (sometimes also called a "spider"). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.

How Googlebot accesses your site:

Googlebot was designed to be distributed on several machines to improve performance and scale as the web grows. Therefore, your logs may show visits from several machines at google.com, all with the user-agent Googlebot.

Blocking Googlebot from content on your site

It's almost impossible to keep a web server secret by not publishing links to it. As soon as someone follows a link from your "secret" server to another web server, your "secret" URL may appear in the referrer tag and can be stored and published by the other web server in its referrer log.
If you want to prevent Googlebot from crawling content on your site, you can:
1) Use robots.txt to block access to files and directories on your server (for example, www.example.com/robots.txt).
2) Use the nofollow meta tag. To prevent Googlebot from following an individual link, add the rel="nofollow" attribute to the link itself. A small sketch of both follows this list.
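Roughly, the two markup options look like this; the URL and link text are placeholders:

<!-- Page-level: ask crawlers not to follow any links on this page -->
<meta name="robots" content="nofollow">

<!-- Link-level: ask crawlers not to follow this one link -->
<a href="https://www.example.com/private/" rel="nofollow">internal tool</a>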
               

Problems with spammers and other user-agents

    The IP addresses used by Googlebot change from time to time. The best way to identify accesses by Googlebot is to use the user-agent (Googlebot). You can verify that a bot accessing your server really is Googlebot by using a reverse DNS lookup.
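A rough Python sketch of that reverse DNS check might look like the following: resolve the IP to a name, check the domain, then forward-confirm that the name maps back to the same IP. Error handling (e.g. socket.herror when an IP has no PTR record) is omitted, and the commented example IP is only a placeholder:

import socket

def is_real_googlebot(ip: str) -> bool:
    """Check a claimed Googlebot IP with a reverse, then forward, DNS lookup."""
    host, _, _ = socket.gethostbyaddr(ip)                  # reverse DNS lookup
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    # Forward-confirm: the resolved name must map back to the same IP.
    return ip in socket.gethostbyname_ex(host)[2]

# Example usage (requires network access; IP is a placeholder):
# print(is_real_googlebot("66.249.66.1"))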

    WEB Scraping & Crawling

Web scraping,
to use a minimal definition, is the process of processing a web document and extracting information out of it. You can do web scraping without doing web crawling.
Web crawling,
to use a minimal definition, is the process of iteratively finding and fetching web links starting from a list of seed URLs. Strictly speaking, to do web crawling, you have to do some degree of web scraping (to extract the URLs).
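To make the distinction concrete: the scraping step on its own can be as small as pulling the href values out of a single document. Here is a minimal sketch using only Python's standard library; the HTML fed in is made up:

from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Scraping step: collect href values from one HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

extractor = LinkExtractor()
extractor.feed('<p><a href="/page2.html">next</a> <a href="/page3.html">more</a></p>')
print(extractor.links)   # ['/page2.html', '/page3.html']

A crawler repeats exactly this extraction on every page it fetches, feeding the discovered links back into its queue.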

    Web Crawling



    World Wide Web:


The World Wide Web (abbreviated WWW or the Web) is an information space where documents and other web resources are identified by Uniform Resource Locators (URLs), interlinked by hypertext links, and can be accessed via the Internet. English scientist Tim Berners-Lee invented the World Wide Web in 1989. He wrote the first web browser computer program in 1990 while employed at CERN in Switzerland.


    Internet:

The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide.


     Uniform Resource Locator (URL):

A Uniform Resource Locator (URL), informally termed a web address (a term which is not defined identically),[1] is a reference to a web resource that specifies its location on a computer network and a mechanism for retrieving it.

    Internet bot:

An Internet bot, also known as a web robot, WWW robot or simply bot, is a software application that runs automated tasks (scripts) over the Internet. The largest use of bots is in web spidering (web crawling), in which an automated script fetches, analyzes and files information from web servers at many times the speed of a human.

    Web indexing (or Internet indexing):

It refers to various methods for indexing the contents of a website or of the Internet as a whole. Individual websites or intranets may use a back-of-the-book index, while search engines usually use keywords and metadata to provide a more useful vocabulary for Internet or onsite searching.

Web search engine:

A web search engine is a software system that is designed to search for information on the World Wide Web.
The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs).

     



Web Crawling

Web crawling is the process by which search engines comb through web pages in order to index them properly. These "web crawlers" systematically crawl pages and look at the keywords contained on the page, the kind of content, and all the links on the page, and then return that information to the search engine's server for indexing. They then follow all the hyperlinks on the website to reach other websites. When a search engine user enters a query, the search engine goes to its index and returns the most relevant results based on the keywords in the search term. Web crawling is an automated process and provides quick, up-to-date data.
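The loop behind that description is essentially a breadth-first traversal. Below is a minimal sketch over a made-up in-memory "web"; a real crawler would fetch pages over HTTP, scrape the links out of the returned HTML (as in the LinkExtractor sketch earlier), respect robots.txt, and rate-limit itself:

from collections import deque

# A toy in-memory "web": URL -> list of outgoing links.
toy_web = {
    "https://a.example/": ["https://b.example/", "https://c.example/"],
    "https://b.example/": ["https://a.example/"],
    "https://c.example/": ["https://d.example/"],
    "https://d.example/": [],
}

def crawl(seed: str) -> list:
    """Breadth-first crawl from a seed URL, visiting each page exactly once."""
    seen = {seed}
    frontier = deque([seed])
    order = []
    while frontier:
        url = frontier.popleft()
        order.append(url)                  # "fetch" and index the page here
        for link in toy_web.get(url, []):  # follow the hyperlinks
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order

print(crawl("https://a.example/"))
# ['https://a.example/', 'https://b.example/', 'https://c.example/', 'https://d.example/']

The seen set is what keeps the crawler from looping forever on pages that link back to each other.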

Web crawler:

A Web crawler is an Internet bot which systematically browses the World Wide Web, typically for the purpose of Web indexing (web spidering).
Web search engines and some other sites use Web crawling or spidering software to update their web content or their indices of other sites' web content. Web crawlers can copy all the pages they visit for later processing by a search engine, which indexes the downloaded pages so that users can search much more efficiently.