Thursday, February 28, 2019

INDUSTRY PRACTICES AND TOOLS 2

Code standards




Coding standards, sometimes referred to as programming styles or coding conventions, are a very important asset to programmers. Standards are rules which a developer is expected to follow. However, rules such as “the compiled file must have the extension .h” are often self-evident and are followed anyway. Guidelines, on the other hand, are much less well followed. They are stylistic measures which have no direct effect on the compilation of the code and exist only to make the source code more readable to humans. They are optional but highly recommended.
Why Is Good Quality Code So Important?
Good code quality is an essential property of software: if code quality is not good enough, it can lead to financial losses or wasted time on further maintenance, modification or adjustment.
Good quality code improves the long-term usefulness and maintainability of the software, minimizes errors, makes debugging easier, improves understandability and decreases risk.
Characteristics of Good Quality Code
  • Efficiency
  • Reliability
  • Robustness
  • Portability
  • Maintainability
  • Readable
1. Efficiency
Directly related to the performance and speed of running the software.

Why important?
The quality of the software can be evaluated by the efficiency of the code used. No one likes to use software that takes too long to perform an action.

How?
  • Remove unnecessary or redundant code
  • Write reusable code
  • Reduce resource consumption
  • Use appropriate data types, functions and looping in appropriate places
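As a simple, hypothetical Java illustration of the last point (this example is not from the original article), choosing the right data structure avoids redundant work:

import java.util.List;

public class EfficiencyExample {
    // Inefficient: each '+=' copies the whole string again (roughly O(n^2) overall).
    static String joinSlow(List<String> words) {
        String result = "";
        for (String word : words) {
            result += word + " ";
        }
        return result;
    }

    // Efficient: StringBuilder appends in place, avoiding the redundant copies.
    static String joinFast(List<String> words) {
        StringBuilder result = new StringBuilder();
        for (String word : words) {
            result.append(word).append(' ');
        }
        return result.toString();
    }
}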


2. Reliability
Ability to perform consistent and failure-free operations every time it runs.

Why important?
The software would be much less useful if the code functioned differently every time it runs, even with the same input in the same environment, or if it broke down often without throwing any errors.

How?
  • Take time to review and test the code carefully and thoroughly in every possible way
  • Use proper error and exception handling

3. Robustness
Ability to cope with errors during program execution even under unusual condition.

Why important?
Imagine how you would feel when using software that keeps showing strange and unfamiliar messages when you do something wrong. Software is typically buggy and fragile, but it should handle any errors it encounters gracefully.

How?
  • Test the software under every condition, both usual and unusual
  • Use proper error and exception handling
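A minimal, hypothetical Java sketch of this idea (not from the original article): instead of letting an unusual input crash the program with a cryptic stack trace, catch the error and fail gracefully with a clear message and a sensible fallback.

public class RobustInputExample {
    // Returns a fallback value and prints a clear message instead of crashing on bad input.
    public static int parseQuantity(String input, int fallback) {
        try {
            return Integer.parseInt(input.trim());
        } catch (NumberFormatException | NullPointerException e) {
            System.err.println("Invalid quantity '" + input + "', using default " + fallback);
            return fallback;
        }
    }
}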

4. Portability
Ability of the code to be run on as many different machines and operating systems as possible.

Why important?
It would be a waste of time and energy for programmers to re-write the same code again when it is transferred from one environment to another.

How?
From the very beginning, write code that can work in every possible environment.

5. Maintainability
Code to which new features can be added, existing features modified or bugs fixed with a minimum of effort and without the risk of affecting other related modules.

Why important?
Software always needs new features or bug fixes. So the written code must be easy to understand, easy to search to find what needs to be changed, easy to change, and easy to check that the changes have not introduced any bugs.

How?
  • Write code that is easy to understand, keeping related functionality in separate modules so that changes stay localized
  • Take time to review and test changes carefully and thoroughly
  • Use proper error and exception handling
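As a tiny, hypothetical Java illustration (not from the original article): keeping one piece of knowledge, such as a rate, in exactly one place means a later change touches a single line and cannot break other related modules.

public class PriceCalculator {
    // The tax rate lives in exactly one place, so changing it later
    // requires editing only this line and cannot affect unrelated code.
    private static final double TAX_RATE = 0.08;

    public double priceWithTax(double netPrice) {
        return netPrice * (1 + TAX_RATE);
    }

    public double taxAmount(double netPrice) {
        return netPrice * TAX_RATE;
    }
}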


6. Readable
Code that can be easily read and understood by programmers other than its original author.

Why important?
Code is read far more often than it is written. If other programmers cannot quickly work out what a piece of code does, maintaining, extending and fixing it becomes slow and error-prone.

How?
  • Good naming of variables, methods and classes
  • Use of proper indentation and formatting style
  • Good technical documentation
  • Appropriate comments or summary descriptions at the top of files, classes or functions
The quality of the code can be measured by different aspects
     1.    Weighted Micro Function Points
A modern software sizing algorithm invented by Logical Solutions in 2009, which is a successor to established scientific methods such as COCOMO, COSYSMO, the maintainability index, cyclomatic complexity, function points and Halstead complexity. It produces more accurate results than traditional software sizing methodologies while requiring less configuration and knowledge from the end user, as most of the estimation is based on automatic measurements of existing source code.
     2.    Halstead Complexity Measures
Halstead complexity measures are software metrics introduced by Maurice Howard Halstead in 1977[1] as part of his treatise on establishing an empirical science of software development. Halstead made the observation that metrics of the software should reflect the implementation or expression of algorithms in different languages, but be independent of their execution on a specific platform. These metrics are therefore computed statically from the code.
Halstead's goal was to identify measurable properties of software, and the relations between them. This is similar to the identification of measurable properties of matter (like the volume, mass, and pressure of a gas) and the relationships between them (analogous to the gas equation). Thus, his metrics are actually not just complexity metrics.  
     3.    Cyclomatic Complexity
Cyclomatic complexity is computed using the control flow graph of the program: the nodes of the graph correspond to indivisible groups of commands of a program, and a directed edge connects two nodes if the second command might be executed immediately after the first command. Cyclomatic complexity may also be applied to individual functions, modules, methods or classes within a program (a small example follows this list).
     4.    Lines of code
     5.    Lines of code per method
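As a rough, hypothetical Java illustration of cyclomatic complexity (not from the original article), the complexity of a single method can be estimated by counting its decision points and adding one:

public class ComplexityExample {
    // Decision points: the 'for', the 'if', the '&&' and the 'else if' = 4,
    // so the cyclomatic complexity of this method is 4 + 1 = 5.
    public static int countValidOrders(int[] amounts, int limit) {
        int valid = 0;
        for (int amount : amounts) {             // decision point 1
            if (amount > 0 && amount <= limit) { // decision points 2 and 3
                valid++;
            } else if (amount < 0) {             // decision point 4
                System.out.println("Ignoring negative amount: " + amount);
            }
        }
        return valid;
    }
}

A method with many nested branches and loops scores much higher and is usually a candidate for refactoring.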

Impact of Quality code
      1.      Clarity:
Easy to read and oversee for anyone who isn’t the creator of the code. If it’s easy to understand, it’s much easier to maintain and extend the code. Not just computers, but also humans need to understand it.
     2.      Maintainable:
A high-quality code isn’t overcomplicated. Anyone working with the code has to understand the whole context of the code if they want to make any changes.
     3.      Documented:
The best thing is when the code is self-explaining, but it’s always recommended to add comments to the code to explain its role and functions. It makes it much easier for anyone who didn’t take part in writing the code to understand and maintain it.
     4.      Refactored:
Code formatting needs to be consistent and follow the language’s coding conventions.
     5.      Well-tested:
The fewer bugs the code has, the higher its quality. Thorough testing filters out critical bugs, ensuring that the software works the way it’s intended.
     6.      Extensible:
The code you receive has to be extensible. It’s not great if you have to throw it away after a few weeks.
     7.      Efficiency:
High-quality code doesn’t use unnecessary resources to perform a desired action.

To Improve Readability of Code
       ·         Use of Comments
       ·         Appropriate naming
       ·         Proper indentation

    1.    Use of Comments
Comments provide programmers with additional information about the written code, allowing them to easily understand:
      • the purpose and reason behind a piece of code,
      • what a piece of code does, and
      • what you are trying to accomplish with it.
Comments also let you talk to your fellow developers about business needs, special requests, remaining tasks and temporary solutions. Proper use of commenting can make code maintenance much easier and helps in finding bugs faster, reducing the effort and time needed to read and understand previously written code.

     2.    Appropriate Naming
Names have a significant role in program readability. Good names for variables, methods and classes make it noticeably easier for programmers to quickly determine what the code is doing, what roles its components play, and what needs to be fixed or modified to meet further requirements.

    3.    Proper Indentation
      Indentation guides programmers to where they should follow and implement, fix or make changes in the code.
      It makes the structure of the code more obvious and easier to follow, improving the readability of the code.
      If the indentation is poorly done, other programmers will have a hard time following your code and determining when a block executes.
Proper code indentation makes the code:
      • Easier to read
      • Easier to understand
      • Easier to modify
      • Easier to maintain
      • Easier to enhance
and saves lots of time when we need to revisit the code and use it.
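To tie these three points together, here is a small, hypothetical Java sketch (the class, names and values are illustrative only) showing meaningful names, a summary comment and consistent indentation:

/**
 * Calculates the gross salary for an employee.
 */
public class SalaryCalculator {

    private static final double OVERTIME_RATE = 1.5; // intention-revealing constant instead of a magic number

    // Meaningful parameter names make the purpose of each value obvious.
    public double calculateGrossSalary(double hourlyRate, double normalHours, double overtimeHours) {
        double normalPay = hourlyRate * normalHours;
        double overtimePay = hourlyRate * OVERTIME_RATE * overtimeHours;
        return normalPay + overtimePay;
    }
}

Compare this with a version full of names like a, b and x1 and no comments: both compile to the same behaviour, but only one of them can be read quickly by the next programmer.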

Some suggestions
  • No duplication
  • Well-factored
  • Conceptual integrity
  • Intentional naming
  • Low complexity
  • Strong cohesion
  • Weak coupling
  • Lean and mean
  • Simple
  • Testable
  • Readable
  • Not too smart
  • Consistent
  • Uniform
  • Imperfect
  • Flat
Best Code Review Tools in the Market
Dependency management
A software project may have a backbone framework and many external artifacts linked to it:
·         Third party packages
·         External libraries
·         Plug-ins
·         Etc.
These external artifacts may introduce many integration issues:
·         Different folder/file structures and different ways of integrating into the main framework
·         Different external artifacts may use different ways of integration
·         Different versions are available and are difficult to upgrade
There are tools to manage these external artifacts and minimize these issues:
·         Composer [php]
·         Maven [Java]
·         NuGet[.net]
·         NPM (Node Package Manager) [JS]
·         Bower [JS]
Build Tool
What does Build Tool mean?
Build tools are programs that automate the creation of executable applications from source code. Building incorporates compiling, linking and packaging the code into a usable or executable form. In small projects, developers will often manually invoke the build process. This is not practical for larger projects, where it is very hard to keep track of what needs to be built, in what sequence and what dependencies there are in the building process. Using an automation tool allows the build process to be more consistent.
Build-automation utilities allow the automation of simple, repeatable tasks. When using the tool, it will calculate how to reach the goal by executing tasks in the correct, specific order and running each task. Build tools differ in being task-oriented or product-oriented: task-oriented tools describe the dependency network in terms of a specific set of tasks, while product-oriented tools describe things in terms of the products they generate.
Basically, build automation is the act of scripting or automating a wide variety of tasks that software developers do in their day-to-day activities like:
·         Downloading dependencies.
·         Compiling source code into binary code.
·         Packaging that binary code.
·         Running tests.
·         Deployment to production systems.
Advantages
The advantages of build automation to software development projects include:
·         A necessary pre-condition for continuous integration and continuous testing
·         Improve product quality
·         Accelerate the compile and link processing
·         Eliminate redundant tasks
·         Minimize "bad builds"
·         Eliminate dependencies on key personnel
·         Have a history of builds and releases in order to investigate issues
·         Save time and money - because of the reasons listed above
Various build tools are available (naming only a few):
·         For java - Ant, Maven, Gradle.
·         For .NET framework - NAnt
·         C# - MsBuild.
Build automation
In the context of software development, build refers to the process that converts files and other assets under the developers' responsibility into a software product in its final or consumable form. The build may include:
·         compiling source files
·         packaging compiled files into compressed formats (such as jar, zip)
·         producing installers
·         creating or updating of database schema or data
The build is automated when these steps are repeatable, require no direct human intervention, and can be performed at any time with no information other than what is stored in the source code control repository.

Expected Benefits
Build automation is a prerequisite to effective use of continuous integration. However, it brings benefits of its own:
·         eliminating a source of variation, and thus of defects; a manual build process containing a large number of necessary steps offers as many opportunities to make mistakes
·         requiring thorough documentation of assumptions about the target environment, and of dependencies on third-party products
Types
·         On-demand automation such as a user running a script at the command line
·         Scheduled automation such as a continuous integration server running a nightly build
·         Triggered automation such as a continuous integration server running a build on every commit to a version-control system.
Advantages
·         A necessary pre-condition for continuous integration and continuous testing
·         Accelerate the compile and link processing
·         Eliminate redundant tasks
·         Minimize "bad builds"
·         Documentation – has the history of builds and releases in order to investigate issues
·         Save time and money, and improve product quality
What is Maven and the Maven Build Lifecycle?
Maven uses Convention over Configuration, which means developers are not required to create the build process themselves. Developers do not have to mention each and every configuration detail; Maven provides sensible default behavior for projects.

A Build Lifecycle is a well-defined sequence of phases, which define the order in which the goals are to be executed. The primary (default) lifecycle of Maven is used to build the application, using 23 phases: validate, initialize, generate-sources, compile, generate-test-sources, etc.
What is Build Lifecycle?

A Build Lifecycle is a well-defined sequence of phases, which define the order in which the goals are to be executed. Here a phase represents a stage in the life cycle. As an example, the phases of the default Maven Build Lifecycle are listed later in this article.

There are always pre and post phases that can be used to register goals which must run prior to, or after, a particular phase.
When Maven starts building a project, it steps through a defined sequence of phases and executes goals, which are registered with each phase.
Maven has the following three standard lifecycles −
•           clean
•           default(or build)
•           site
A goal represents a specific task which contributes to the building and managing of a project. It may be bound to zero or more build phases. A goal not bound to any build phase could be executed outside of the build lifecycle by direct invocation.
The order of execution depends on the order in which the goal(s) and the build phase(s) are invoked. For example, consider the command below. The clean and package arguments are build phases, while dependency:copy-dependencies is a goal.
mvn clean dependency:copy-dependencies package
Here the clean phase will be executed first, followed by the dependency:copy-dependencies goal, and finally the package phase will be executed.
Clean Lifecycle
When we execute mvn post-clean command, Maven invokes the clean lifecycle consisting of the following phases.
•           pre-clean
•           clean
•           post-clean
The Maven clean goal (clean:clean) is bound to the clean phase in the clean lifecycle. The clean:clean goal deletes the output of a build by deleting the build directory. Thus, when the mvn clean command executes, Maven deletes the build directory.
We can customize this behavior by mentioning goals in any of the above phases of clean life cycle.
In the following example, we'll attach the maven-antrun-plugin:run goal to the pre-clean, clean, and post-clean phases. This will allow us to echo text messages displaying the phases of the clean lifecycle.
We've created a pom.xml for this.
Now open the command console, go to the folder containing the pom.xml and execute the mvn post-clean command.
Default (or Build) Lifecycle
This is the primary lifecycle of Maven and is used to build the application. Its phases are listed below.
validate
Validates whether project is correct and all necessary information is available to complete the build process.
initialize
Initializes build state, for example set properties.
generate-sources
Generate any source code to be included in compilation phase.
process-sources
Process the source code, for example, filter any value.
generate-resources
Generate resources to be included in the package.
process-resources
Copy and process the resources into the destination directory, ready for packaging phase.
compile
Compile the source code of the project.
process-classes
Post-process the generated files from compilation, for example to do bytecode enhancement/optimization on Java classes.
generate-test-sources
Generate any test source code to be included in compilation phase.
process-test-sources
Process the test source code, for example, filter any values.
test-compile
Compile the test source code into the test destination directory.
process-test-classes
Process the generated files from test code file compilation.
test
Run tests using a suitable unit testing framework (JUnit is one).
prepare-package
Perform any operations necessary to prepare a package before the actual packaging.
package
Take the compiled code and package it in its distributable format, such as a JAR, WAR, or EAR file.
pre-integration-test
Perform actions required before integration tests are executed. For example, setting up the required environment.
integration-test
Process and deploy the package if necessary, into an environment where integration tests can be run.
post-integration-test
Perform actions required after integration tests have been executed. For example, cleaning up the environment
verify
Run any check-ups to verify the package is valid and meets quality criteria.
install
Install the package into the local repository, which can be used as a dependency in other projects locally.
deploy
Copies the final package to the remote repository for sharing with other developers and projects.


Thursday, February 21, 2019

INDUSTRY PRACTICES AND TOOLS 1

This article explains the tools of software development, Framework vs Plugin vs Library, version control systems, Git, CDN and virtualization.

Tools of Software Development

Two types of tools used by software engineers:
1. Analytical tools
  • Stepwise refinement
  • Cost-benefit analysis
  • Software metrics

2. CASE tools



CASE stands for Computer Aided Software Engineering: software that supports one or more software engineering activities within a software development process, improving the capabilities, functionality and quality of the software.

CASE tools may support the following development steps for developing database application:
• Creation of data flow and entity models
• Establishing a relationship between requirements and models
• Development of top-level design
• Development of functional and process description
• Development of test cases.

Why CASE tools are developed:
• Firstly, Quick Installation.
• Time-Saving by reducing coding and testing time.
• Enrich graphical techniques and data flow.
• Optimum use of available information.
• Enhanced analysis and design development.
• Create and manipulate documentation.
• Transfer the information between tools efficiently.
• Increased speed during system development.

Categories of CASE Tools
• Tools
• Workbenches
• Environments

    1. Tools - Support individual process tasks

 Examples:
   • Checking the consistency of a design
   • Compiling a program
   • Comparing test results

• Upper-CASE tools (front-end tools)
   Assist developer during requirements, analysis, and design workflows or activities
• Lower-CASE tools (back-end tools)
    Assist with implementation, testing, and maintenance workflows or activities
• Integrated CASE tools (I-CASE)
provide support for the full life cycle

   2.  Workbenches

A collection of tools that together support process workflows (requirements, design, etc.) or one or two activities, where an activity is a related collection of tasks.
Commercial examples:
   • PowerBuilder
   • Software Through Pictures
   • Software Architect

3.  Environments
 Support the complete software process or, at least, a large portion of it. They normally include several different workbenches which are integrated in some way.

Framework vs Plugin vs Library

Plugins provide
  • specific tools for development.
  • At development time - The plugin (source code files, modules, packages, executables, etc.) is placed in the project, apply some configurations using code
  • At runtime - The plug-in will be invoked via the configurations

    Libraries provide
  • an API that the coder can use to develop features when writing code
  • At development time - Add the library to the project (source code files, modules, packages, executables, etc.), Call the necessary functions/methods using the given packages/module/classes
  • At runtime - The library will be called by the code.

Framework is

a collection of libraries, tools, rules, structures, and control, used to build software systems
  • At development time - Create the structure of the application, place your code in the necessary places; you may use the given libraries to write your code, and you can include additional libraries and plugins
  • At runtime - The framework will call your code (inversion of control)
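The difference in control flow can be sketched in a few lines of Java (all of the names here are hypothetical and do not refer to any real product): with a library your code makes the call; with a framework you hand over code that the framework calls back.

public class ControlFlowSketch {

    // Library style: your code calls the library whenever it needs something.
    static class MathLibrary {
        static int square(int x) { return x * x; }
    }

    // Framework style: the framework owns the main loop and calls your code (inversion of control).
    interface RequestHandler { String handle(String request); }

    static class GreetingFramework {
        static void run(RequestHandler handler) {
            // The framework decides when and how the handler is invoked.
            String[] incoming = {"Alice", "Bob"};
            for (String request : incoming) {
                System.out.println(handler.handle(request));
            }
        }
    }

    public static void main(String[] args) {
        int area = MathLibrary.square(5);                 // your code calls the library
        System.out.println("Area: " + area);
        GreetingFramework.run(name -> "Hello, " + name);  // the framework calls your code
    }
}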

  Version control systems

Version control systems, also known as source control, source code management systems, or revision control systems, are a mechanism for keeping multiple versions of your files so that when you modify a file you can still access the previous revisions.
Now the most popular version control system used are Subversion and Git. Let’s first look at why we need to use a versioning control system and next let’s look at putting our source code in Git source code repository system.
  • Version control software keeps track of every modification to the source in a special kind of database
  • If a mistake is made, developers can turn back the clock, compare earlier versions of the code to help fix the mistake, and minimize disruption to all team members

Why use a VCS?
  • Collaboration - With a VCS, everybody on the team is able to work absolutely freely on any file at any time. The VCS will later allow you to merge all the changes into a common version
  • Storing versions properly - A version control system acknowledges that there is only one project
  • Backup
There are three models of VCSs:
  • Local version control systems.
  • Centralized version control systems
  • Distributed Version Control Systems

1. Local version control systems




A local version control system keeps track of files within the local system. This approach is very common and simple, but it is also error-prone, which means the chances of accidentally writing to the wrong file are higher.
This is the oldest type of VCS; everything is on your computer, and it cannot be used for collaborative software development.
2. Centralized Version Control Systems



In this approach, all the changes in the files are tracked under the centralized server. The centralized server includes all the information of versioned files, and list of clients that check out files from that central place.

Example: Tortoise SVN
A centralized VCS can be used for collaborative software development. Everyone knows to a certain degree what others on the project are doing, and administrators have fine-grained control over who can do what. The most obvious drawback is the single point of failure that the centralized server represents.
3. Distributed Version Control System:



Distributed version control systems come into the picture to overcome the drawback of the centralized version control system. The clients completely clone the repository including its full history. If any server dies, any of the client repositories can be copied on to the server which helps restore the server.
Every clone is considered as a full backup of all the data.
Example: Git
There is no single point of failure. Clients don’t just check out the latest snapshot of the files: they fully mirror the repository. If any server dies, and these systems were collaborating via it, any of the client repositories can be copied back to the server. You can collaborate with different groups of people in different ways simultaneously within the same project.


What is the difference between Git and GitHub?

Git is a distributed version control tool that can manage a development project's source code history, while GitHub is a cloud-based platform built around the Git tool. Git is a tool a developer installs locally on their computer, while GitHub is an online service that stores code pushed to it from computers running the Git tool. The key difference between Git and GitHub is that Git is an open-source tool developers install locally to manage source code, while GitHub is an online service to which developers who use Git can connect and upload or download resources.
One way to examine the differences between GitHub and Git is to look at their competitors. Git competes with centralized and distributed version control tools such as Subversion, Mercurial, ClearCase, and IBM's Rational Team Concert. On the other hand, GitHub competes with cloud-based SaaS and PaaS offerings, such as GitLab and Atlassian's Bitbucket.

Basic git command

Git task
Notes
Git commands
Tell Git who you are
Configure the author name and email address to be used with your commits.
Note that Git strips some characters (for example, trailing periods) from user.name.
git config --global user.name "Sam Smith"
git config --global user.email sam@example.com
Create a new local repository

git init
Check out a repository
Create a working copy of a local repository:
git clone /path/to/repository
For a remote server, use:
git clone username@host:/path/to/repository
Add files
Add one or more files to staging (index):
git add <filename>

git add *
Commit
Commit changes to head (but not yet to the remote repository):
git commit -m "Commit message"
Commit any files you've added with git add, and also commit any files you've changed since then:
git commit -a
Push
Send changes to the master branch of your remote repository:
git push origin master
Status
List the files you've changed and those you still need to add or commit:
git status
Connect to a remote repository
If you haven't connected your local repository to a remote server, add the server to be able to push to it:
git remote add origin <server>
List all currently configured remote repositories:
git remote -v
Branches
Create a new branch and switch to it:
git checkout -b <branch name>
Switch from one branch to another:
git checkout <branch name>
List all the branches in your repo, and also tell you what branch you're currently in:
git branch
Delete the feature branch:
git branch -d <branch name>
Push the branch to your remote repository so others can use it:
git push origin <branch name>
Push all branches to your remote repository:
git push --all origin
Delete a branch on your remote repository:
git push origin :<branch name>
Update from the remote repository
Fetch and merge changes on the remote server to your working directory:
git pull
To merge a different branch into your active branch:
git merge <branch name>
View all the merge conflicts:
git diff
View the conflicts against the base file:
git diff --base <filename>
Preview changes, before merging:
git diff <source branch> <target branch>
After you have manually resolved any conflicts, you mark the changed file:
git add <filename>
Tags
You can use tagging to mark a significant changeset, such as a release:
git tag 1.0.0 <commit ID>
The commit ID is the leading characters of the changeset ID (up to 10 characters) and must be unique. Get the ID using:
git log
Push all tags to the remote repository:
git push --tags origin
Undo local changes
If you mess up, you can replace the changes in your working tree with the last content in the head:
Changes already added to the index, as well as new files, will be kept.
git checkout -- <filename>
Instead, to drop all your local changes and commits, fetch the latest history from the server and point your local master branch at it, do this:
git fetch origin

git reset --hard origin/master
Search
Search the working directory for foo():
git grep "foo()"

The lifecycle of the status of your files.

At this point, you should have a bona fide Git repository on your local machine, and a checkout or working copy of all of its files in front of you. Typically, you’ll want to start making changes and committing snapshots of those changes into your repository each time the project reaches a state you want to record. Remember that each file in your working directory can be in one of two states: tracked or untracked. Tracked files are files that were in the last snapshot; they can be unmodified, modified, or staged. In short, tracked files are files that Git knows about.
Untracked files are everything else: any files in your working directory that were not in your last snapshot and are not in your staging area. When you first clone a repository, all of your files will be tracked and unmodified because Git just checked them out and you haven’t edited anything. As you edit files, Git sees them as modified, because you’ve changed them since your last commit. As you work, you selectively stage these modified files and then commit all those staged changes, and the cycle repeats.

Understanding the Workflow of Version Control









What is a CDN?

A content delivery network (CDN) refers to a geographically distributed group of servers which work together to provide fast delivery of Internet content. A CDN allows for the quick transfer of assets needed for loading Internet content including HTML pages, JavaScript files, stylesheets, images, and videos. The popularity of CDN services continues to grow, and today the majority of web traffic is served through CDNs, including traffic from major sites like Facebook, Netflix, and Amazon.
A properly configured CDN may also help protect websites against some common malicious attacks, such as Distributed Denial of Service (DDoS) attacks. There are two types of CDN: free and commercial.

Commercial CDNs (sell to content owners and publishers)
Free CDNs (that have a Forever-Free Plan, not selling)

Is a CDN the same as a web host?

While a CDN does not host content and can’t replace the need for proper web hosting, it does help cache content at the network edge, which improves website performance. Many websites struggle to have their performance needs met by traditional hosting services, which is why they opt for CDNs. By utilizing caching to reduce hosting bandwidth, helping to prevent interruptions in service, and improving security, CDNs are a popular choice to relieve some of the major pain points that come with traditional web hosting.

What are the benefits of using a CDN?

Although the benefits of using a CDN vary depending on the size and needs of an Internet property, the primary benefits for most users can be broken down into 4 different components:
  1. Improving website load times - By distributing content closer to website visitors by using a nearby CDN server (among other optimizations), visitors experience faster page loading times. As visitors are more inclined to click away from a slow-loading site, a CDN can reduce bounce rates and increase the amount of time that people spend on the site. In other words, a faster website means more visitors will stay and stick around longer.
  2. Reducing bandwidth costs - Bandwidth consumption costs for website hosting is a primary expense for websites. Through caching and other optimizations, CDNs are able to reduce the amount of data an origin server must provide, thus reducing hosting costs for website owners.
  3. Increasing content availability and redundancy - Large amounts of traffic or hardware failures can interrupt normal website function. Thanks to their distributed nature, a CDN can handle more traffic and withstand hardware failure better than many origin servers.
  4. Improving website security - A CDN may improve security by providing DDoS mitigation, improvements to security certificates, and other optimizations.


Virtualization
There is a gap between development and implementation environments
  • Different platforms
  • Missing dependencies, frameworks/runtimes
  • Wrong configurations
  • Version mismatches

This issue can be overcome
  • Develop in a virtual environment
  • Clone the setup to the implementation platform

Advantages of Virtualization
·       Using Virtualization for Efficient Hardware Utilization
Virtualization decreases costs by reducing the need for physical hardware systems. Virtual machines use hardware more efficiently, which lowers the quantity of hardware required and the associated maintenance costs, and reduces power and cooling demand. You can allocate memory, space and CPU in just a second, making you more independent of hardware vendors.
·       Using Virtualization to Increase Availability
Virtualization platforms offer a number of advanced features that are not found on physical servers, which increase uptime and availability. Although the vendor feature names may be different, they usually offer capabilities such as live migration, storage migration, fault tolerance, high availability and distributed resource scheduling. These technologies keep virtual machines chugging along or give them the ability to recover from unplanned outages.
The ability to move a virtual machine from one server to another is perhaps one of the greatest single benefits of virtualization, with far-reaching uses. The technology continues to mature to the point where it can do long-distance migrations, such as moving a virtual machine from one data center to another no matter the network latency involved.
·       Disaster Recovery
Disaster recovery is very easy when your servers are virtualized. With up-to-date snapshots of your virtual machines, you can quickly get back up and running. An organization can more easily create an affordable replication site. If a disaster strikes in the data center or server room itself, you can always move those virtual machines elsewhere into a cloud provider. Having that level of flexibility means your disaster recovery plan will be easier to enact and will have a 99% success rate.
·       Save Energy
Moving physical servers to virtual machines and consolidating them onto far fewer physical servers means lowering monthly power and cooling costs in the data center. It reduces the carbon footprint and helps to clean up the air we breathe. Consumers want to see companies reducing their output of pollution and taking responsibility.
·       Deploying Servers Fast
You can quickly clone an image, master template or existing virtual machine to get a server up and running within minutes. You do not have to fill out purchase orders, wait for shipping and receiving and then rack, stack, and cable a physical machine only to spend additional hours waiting for the operating system and applications to complete their installations. With virtual backup tools like Veeam, redeploying images will be so fast that your end users will hardly notice there was an issue.


·       Save Space in your Server Room or Datacenter
Imagine a simple example: you have two racks with 30 physical servers and 4 switches. By virtualizing your servers, it will help you to reduce half the space used by the physical servers. The result can be two physical servers in a rack with one switch, where each physical server holds 15 virtualized servers.
·       Testing and setting up Lab Environment
While you are testing or installing something on your servers and it crashes, do not panic, as there is no data loss. Just revert to a previous snapshot and you can move forward as if the mistake did not even happen. You can also isolate these testing environments from end users while still keeping them online. When your work is completely done, deploy it to the live environment.
·       Shifting all your Local Infrastructure to Cloud in a day
If you decide to shift your entire virtualized infrastructure into a cloud provider, you can do it in a day. All the hypervisors offer you tools to export your virtual servers.
·       Possibility to Divide Services
If you have a single server holding different applications, this can increase the possibility of the services conflicting with each other and increase the failure rate of the server. If you virtualize this server, you can put the applications in environments separated from each other, as we have discussed previously.
Disadvantages of Virtualization

·       Extra Costs
Maybe you have to invest in the virtualization software and possibly additional hardware might be required to make the virtualization possible. This depends on your existing network. Many businesses have sufficient capacity to accommodate the virtualization without requiring much cash. If you have an infrastructure that is more than five years old, you have to consider an initial renewal budget.
·       Software Licensing
This is becoming less of a problem as more software vendors adapt to the increased adoption of virtualization. However, it is important to check with your vendors to understand how they view software use in a virtualized environment.
·       Learn the new Infrastructure
Implementing and managing a virtualized environment will require IT staff with expertise in virtualization. On the user side, a typical virtual environment will operate similarly to the non-virtual environment. There are some applications that do not adapt well to the virtualized environment.
Implementations and available tools for each level of virtualization

  • Hardware virtualization - VMs, emulators
  • OS level virtualization (Desktop virtualization) - Remote desktop terminals
  • Application level virtualization - Runtimes (JRE/JVM, .NET), engines (games engines)
  • Containerization (also OS/application level) – Docker
  • Other virtualization types - Database, network, storage


What is a Hypervisor?

The hypervisor, also known as a virtual machine monitor, is a process that creates and runs virtual machines (VMs). A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, like memory and processing. Generally, there are two types of hypervisors. Type 1 hypervisors, called “bare metal,” run directly on the host’s hardware. Type 2 hypervisors, called “hosted,” run as a software layer on an operating system, like other computer programs.

Why uses a Hypervisor?

Hypervisors make it possible to use more of a system’s available resources, and provide greater IT mobility since the guest VMs are independent of the host hardware. This means they can be easily moved between different servers.

Emulation
  • Another (older) way for running one operating system on a different operating system
  • Virtualization requires underlying CPU to be same as a guest was compiled for
  • Emulation allows guest to run on different CPU
  • Necessary to translate all guest instructions from guest CPU to native CPU
  • Emulation, not virtualization
  • Useful when the host system has one architecture and the guest was compiled for another architecture
  • For example, a company replacing outdated servers with new servers containing a different CPU architecture may still want to run its old applications
  • Performance challenge – an order of magnitude slower than native code
  • New machines are faster than older machines, which can reduce the slowdown
  • Very popular – especially in gaming, where old consoles are emulated on new ones


What are VMs?

A virtual machine (VM) is an emulation of a computer system. Put simply, it makes it possible to run what appear to be many separate computers on hardware that is actually one computer.

Benefits of VMs
  • All OS resources available to apps
  • Established management tools
  • Established security tools
  • Better known security controls

What are Containers?

With containers, instead of virtualizing the underlying computer like a virtual machine (VM), just the OS is virtualized.

Benefits of Containers
  • Reduced IT management resources
  • Reduced size of snapshots
  • Quicker spinning up apps
  • Reduced & simplified security updates
  • Less code to transfer, migrate, upload workloads


What’s the Diff: VMs vs Containers

·         VMs are heavyweight; containers are lightweight.
·         VMs have limited performance; containers have native performance.
·         Each VM runs in its own OS; all containers share the host OS.
·         VMs use hardware-level virtualization; containers use OS virtualization.
·         VM startup time is measured in minutes; container startup time in milliseconds.
·         VMs allocate the required memory; containers require less memory space.
·         VMs are fully isolated and hence more secure; containers use process-level isolation and are possibly less secure.
