This article explains the tools of software development, Framework vs Plugin vs Library, version control systems, Git, CDNs, and virtualization.
Tools of Software Development
Two types of tools used by software engineers:
1. Analytical tools
- Stepwise refinement
- Cost-benefit analysis
- Software metrics
2. CASE tools
CASE stands for Computer-Aided Software Engineering: software that supports one or more software engineering activities within a software development process, improving the capabilities, functionality, and quality of software.
CASE tools may support the following development steps for developing database applications:
• Creation of data flow and entity models
• Establishing a relationship between requirements and models
• Development of top-level design
• Development of functional and process description
• Development of test cases.
Why CASE tools are developed:
• Quick installation.
• Time saving by reducing coding and testing time.
• Richer graphical techniques and data-flow representation.
• Optimum use of available information.
• Enhanced analysis and design development.
• Creation and manipulation of documentation.
• Efficient transfer of information between tools.
• Increased speed during system development.
Categories of CASE Tools
• Tools
• Workbenches
• Environments
1. Tools
Support individual process tasks. Examples:
• Checking the consistency of a design
• Compiling a program
• Comparing test results
CASE tools can also be classified by the part of the life cycle they support:
• Upper-CASE tools (front-end tools) - assist developers during the requirements, analysis, and design workflows or activities.
• Lower-CASE tools (back-end tools) - assist with the implementation, testing, and maintenance workflows or activities.
• Integrated CASE tools (I-CASE) - provide support for the full life cycle.
2. Workbenches
A collection of tools that together support process workflows (requirements, design, etc.) or one or two activities, where an activity is a related collection of tasks.
Commercial examples:
• PowerBuilder
• Software Through Pictures
• Software Architect
3. Environments
Support the complete software process or, at least, a large portion of it. Environments normally include several different workbenches which are integrated in some way.
Framework vs Plugin vs Library
Plugins provide
- specific tools for development.
- At development time - the plugin (source code files, modules, packages, executables, etc.) is placed in the project and some configuration is applied using code.
- At runtime - the plugin is invoked via those configurations.
Libraries provide
- an API that the coder can use to develop features while writing code.
- At development time - add the library to the project (source code files, modules, packages, executables, etc.) and call the necessary functions/methods using the given packages/modules/classes.
- At runtime - the library is called by your code.
Framework is
a collection of libraries, tools, rules, structures, and control used to build software systems.
- At development time - create the structure of the application, place your code in the necessary places, use the given libraries to write your code, and include additional libraries and plugins if needed.
- At runtime - the framework calls your code (inversion of control); see the sketch below.
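To make the difference in control flow concrete, here is a minimal sketch in shell script. The function names (lib_add, on_request, framework_run) are hypothetical and exist only to show who calls whom.

#!/bin/sh
# Library style: the library only defines functions; YOUR code decides when to call them.
lib_add() { echo $(($1 + $2)); }               # pretend this function came from a library
result=$(lib_add 2 3)                          # your code calls the library
echo "library result: $result"

# Framework style: YOU only fill in a hook; the framework decides when to call it.
on_request() { echo "handling request $1"; }   # your code, placed where the framework expects it
framework_run() {                              # pretend this loop came from a framework
  for req in 1 2 3; do
    on_request "$req"                          # inversion of control: the framework calls your code
  done
}
framework_run

A plugin sits closer to the framework style: you drop it into the project and it is invoked through configuration rather than through calls you write yourself.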
Version control systems
Version control systems, also known as source control, source code management systems, or revision control systems, are a mechanism for keeping multiple versions of your files so that when you modify a file you can still access the previous revisions.
The most popular version control systems in use today are Subversion and Git. Let's first look at why we need a version control system, and then look at putting our source code into a Git source code repository.
- Version control software keeps track of every modification to the source in a special kind of database
- If a mistake is made, developers can:
- turn back the clock
- compare earlier versions of the code to help fix the mistake
- minimize disruption to all team members (see the Git commands sketched below)
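For example, with Git (covered below), turning back the clock and comparing versions looks roughly like this; the file name app.c is only a placeholder.

git log --oneline -- app.c        # list the earlier versions of one file
git diff HEAD~1 -- app.c          # compare the current file with the previous commit
git checkout HEAD~1 -- app.c      # restore the previous version of the file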
Why use a VCS
- Collaboration - With a VCS, everybody on the team can work freely on any file at any time; the VCS will later allow you to merge all the changes into a common version.
- Storing versions properly - A version control system acknowledges that there is only one project
- Backup
- Local version control systems.
- Centralized version control systems
- Distributed Version Control Systems
1. Local version control systems
A local version control system keeps track of files within the local system. This approach is very common and simple, but it is also error prone: the chances of accidentally writing to the wrong file are higher.
It is the oldest type of VCS; everything is on your computer, and it cannot be used for collaborative software development.
2. Centralized version control systems
In this approach, all changes to the files are tracked by a centralized server. The central server holds all the information about the versioned files and the list of clients that check out files from that central place.
Example: Tortoise SVN
Centralized systems can be used for collaborative software development. Everyone knows, to a certain degree, what others on the project are doing, and administrators have fine-grained control over who can do what. The most obvious drawback is the single point of failure that the centralized server represents.
3. Distributed version control systems
Distributed version control systems overcome the drawback of centralized version control systems. Clients completely clone the repository, including its full history. If any server dies, any of the client repositories can be copied onto the server to restore it.
Every clone is considered as a full backup of all the data.
Example: Git
There is no single point of failure. Clients don't just check out the latest snapshot of the files: they fully mirror the repository. If any server dies, and these systems were collaborating via it, any of the client repositories can be copied back to restore it. You can collaborate with different groups of people in different ways simultaneously within the same project.
What is the difference between Git and GitHub?
Git is a distributed version control tool that can manage a development project's source code history, while GitHub is a cloud-based platform built around the Git tool. Git is a tool a developer installs locally on their computer, while GitHub is an online service that stores code pushed to it from computers running the Git tool. The key difference between Git and GitHub is that Git is an open-source tool developers install locally to manage source code, while GitHub is an online service to which developers who use Git can connect and upload or download resources.
One way to examine the differences between GitHub and Git is to look at their competitors. Git competes with centralized and distributed version control tools such as Subversion, Mercurial, ClearCase, and IBM's Rational Team Concert. On the other hand, GitHub competes with cloud-based SaaS and PaaS offerings, such as GitLab and Atlassian's Bitbucket.
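As a rough sketch, publishing a local Git repository to GitHub looks like this; the repository URL is a placeholder, not a real project.

git init                                                      # Git: a local repository on your machine
git add .
git commit -m "initial commit"
git remote add origin https://github.com/<user>/<repo>.git    # GitHub: the online service hosting the repository
git push -u origin master                                     # upload the local history to GitHub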
Basic Git commands

Tell Git who you are
Configure the author name and email address to be used with your commits. Note that Git strips some characters (for example trailing periods) from user.name.
git config --global user.name "Sam Smith"
git config --global user.email sam@example.com

Create a new local repository
git init

Check out a repository
Create a working copy of a local repository:
git clone /path/to/repository
For a remote server, use:
git clone username@host:/path/to/repository

Add files
Add one or more files to staging (index):
git add <filename>
git add *

Commit
Commit changes to head (but not yet to the remote repository):
git commit -m "Commit message"
Commit any files you've added with git add, and also commit any files you've changed since then:
git commit -a

Push
Send changes to the master branch of your remote repository:
git push origin master

Status
List the files you've changed and those you still need to add or commit:
git status

Connect to a remote repository
If you haven't connected your local repository to a remote server, add the server to be able to push to it:
git remote add origin <server>
List all currently configured remote repositories:
git remote -v

Branches
Create a new branch and switch to it:
git checkout -b <branch name>
Switch from one branch to another:
git checkout <branch name>
List all the branches in your repo, and see what branch you're currently in:
git branch
Delete the feature branch:
git branch -d <branch name>
Push the branch to your remote repository so others can use it:
git push origin <branch name>
Push all branches to your remote repository:
git push --all origin
Delete a branch on your remote repository:
git push origin :<branch name>

Update from the remote repository
Fetch and merge changes on the remote server to your working directory:
git pull
To merge a different branch into your active branch:
git merge <branch name>
View all the merge conflicts:
git diff
View the conflicts against the base file:
git diff --base <filename>
Preview changes, before merging:
git diff <source branch> <target branch>
After you have manually resolved any conflicts, mark the changed file:
git add <filename>

Tags
You can use tagging to mark a significant changeset, such as a release:
git tag 1.0.0 <commit ID>
The commit ID is the leading characters of the changeset ID (up to 10), but it must be unique. Get the ID using:
git log
Push all tags to the remote repository:
git push --tags origin

Undo local changes
If you mess up, you can replace the changes in your working tree with the last content in head. Changes already added to the index, as well as new files, will be kept:
git checkout -- <filename>
Instead, to drop all your local changes and commits, fetch the latest history from the server and point your local master branch at it:
git fetch origin
git reset --hard origin/master

Search
Search the working directory for foo():
git grep "foo()"
The lifecycle of the status of your files.
At this point, you should have a bona fide Git repository on your local machine, and a checkout or working copy of all of its files in front of you. Typically, you’ll want to start making changes and committing snapshots of those changes into your repository each time the project reaches a state you want to record. Remember that each file in your working directory can be in one of two states: tracked or untracked. Tracked files are files that were in the last snapshot; they can be unmodified, modified, or staged. In short, tracked files are files that Git knows about.
Untracked files are everything else — any files in your working directory that were not in your last snapshot and are not in your staging area. When you first clone a repository, all of your files will be tracked and unmodified because Git just checked them out and you haven’t edited anything. As you edit files, Git sees them as modified, because you’ve changed them since your last commit. As you work, you selectively stage these modified files and then commit all those staged changes, and the cycle repeats.
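A minimal sketch of that cycle with commands (the file name notes.txt is only an example; run this inside an existing repository):

echo "hello" > notes.txt      # a new file: untracked
git status                    # lists notes.txt under "Untracked files"
git add notes.txt             # now staged
git commit -m "add notes"     # now tracked and unmodified
echo "more" >> notes.txt      # now tracked and modified
git status                    # lists notes.txt under "Changes not staged for commit"
git add notes.txt             # staged again, ready for the next commit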
What is a CDN?
A content delivery network (CDN) refers to a geographically distributed group of servers which work together to provide fast delivery of Internet content. A CDN allows for the quick transfer of assets needed for loading Internet content including HTML pages, JavaScript files, stylesheets, images, and videos. The popularity of CDN services continues to grow, and today the majority of web traffic is served through CDNs, including traffic from major sites like Facebook, Netflix, and Amazon.
A properly configured CDN may also help protect websites against some common malicious attacks, such as Distributed Denial of Service (DDoS) attacks. CDNs come in two types: free and commercial.
Commercial CDNs (sell to content owners and publishers)
Free CDNs (that have a Forever-Free Plan, not selling)
Is a CDN the same as a web host?
While a CDN does not host content and can’t replace the need for proper web hosting, it does help cache content at the network edge, which improves website performance. Many websites struggle to have their performance needs met by traditional hosting services, which is why they opt for CDNs. By utilizing caching to reduce hosting bandwidth, helping to prevent interruptions in service, and improving security, CDNs are a popular choice to relieve some of the major pain points that come with traditional web hosting.
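One rough way to see edge caching in action is to inspect the response headers of an asset served through a CDN; the URL below is a placeholder, not a real asset.

curl -I https://cdn.example.com/library/1.0.0/library.min.js
# Responses from a CDN edge typically include caching headers such as Cache-Control, Age, and ETag,
# and often a vendor-specific header identifying which edge location served the request.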
What are the benefits of using a CDN?
Although the benefits of using a CDN vary depending on the size and needs of an Internet property, the primary benefits for most users can be broken down into 4 different components:
- Improving website load times - By distributing content closer to website visitors via a nearby CDN server (among other optimizations), visitors experience faster page loading times. Since visitors are more inclined to click away from a slow-loading site, a CDN can reduce bounce rates and increase the amount of time people spend on the site. In other words, a faster website means more visitors will stay longer.
- Reducing bandwidth costs - Bandwidth consumption costs for website hosting are a primary expense for websites. Through caching and other optimizations, CDNs are able to reduce the amount of data an origin server must provide, thus reducing hosting costs for website owners.
- Increasing content availability and redundancy - Large amounts of traffic or hardware failures can interrupt normal website function. Thanks to their distributed nature, a CDN can handle more traffic and withstand hardware failure better than many origin servers.
- Improving website security - A CDN may improve security by providing DDoS mitigation, improvements to security certificates, and other optimizations.
Virtualization
There is a gap between development and implementation environments
- Different platforms
- Missing dependencies, frameworks/runtimes
- Wrong configurations
- Version mismatches
This issue can be overcome by:
- Developing in a virtual environment
- Cloning the setup to the implementation platform (a container-based sketch follows below)
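One common way to do this is with containers. A minimal sketch, assuming Docker is installed and the project has a Dockerfile; the image name myapp-dev is hypothetical:

docker build -t myapp-dev .                # capture the development environment as an image
docker save -o myapp-dev.tar myapp-dev     # export the image to a file
# On the implementation platform:
docker load -i myapp-dev.tar               # load the identical environment
docker run --rm myapp-dev                  # run it with the same dependencies and configuration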
Advantages of Virtualization
· Using Virtualization for Efficient Hardware Utilization
Virtualization decreases costs by reducing the need for physical hardware systems. Virtual machines use hardware more efficiently, which lowers the quantity of hardware, the associated maintenance costs, and the power and cooling demand. You can allocate memory, storage, and CPU in seconds, making you less dependent on hardware vendors.
· Using Virtualization to Increase Availability
Virtualization platforms offer a number of advanced features that are not found on physical servers, which increase uptime and availability. Although the vendor feature names may be different, they usually offer capabilities such as live migration, storage migration, fault tolerance, high availability and distributed resource scheduling. These technologies keep virtual machines chugging along or give them the ability to recover from unplanned outages.
The ability to move a virtual machine from one server to another is perhaps one of the greatest single benefits of virtualization, with far-reaching uses. The technology continues to mature to the point where it can do long-distance migrations, such as moving a virtual machine from one data center to another regardless of the network latency involved.
· Disaster Recovery
Disaster recovery is very easy when your servers are virtualized. With up-to-date snapshots of your virtual machines, you can quickly get back up and running. An organization can more easily create an affordable replication site. If a disaster strikes in the data center or server room itself, you can always move those virtual machines elsewhere into a cloud provider. Having that level of flexibility means your disaster recovery plan will be easier to enact and will have a 99% success rate.
· Save Energy
Moving physical servers to virtual machines and consolidating them onto far fewer physical servers means lowering monthly power and cooling costs in the data center. This reduces the carbon footprint and helps to clean up the air we breathe. Consumers want to see companies reducing their output of pollution and taking responsibility.
· Deploying Servers Faster
You can quickly clone an image, master template or existing virtual machine to get a server up and running within minutes. You do not have to fill out purchase orders, wait for shipping and receiving and then rack, stack, and cable a physical machine only to spend additional hours waiting for the operating system and applications to complete their installations. With virtual backup tools like Veeam, redeploying images will be so fast that your end users will hardly notice there was an issue.
· Save Space in your Server Room or Datacenter
Imagine a simple example: you have two racks with 30 physical servers and 4 switches. Virtualizing your servers can cut the space used by the physical servers in half. The result can be two physical servers in a rack with one switch, where each physical server holds 15 virtualized servers.
· Testing and setting up Lab Environment
While you are testing or installing something on your servers and it crashes, do not panic, as there is no data loss. Just revert to a previous snapshot and you can move forward as if the mistake never happened. You can also isolate these testing environments from end users while still keeping them online. When your work is completely done, deploy it to the live environment.
· Shifting all your Local Infrastructure to Cloud in a day
If you decide to shift your entire virtualized infrastructure into a cloud provider, you can do it in a day. All the hypervisors offer you tools to export your virtual servers.
· Possibility to Divide Services
If you have a single server holding different applications, this increases the possibility of the services conflicting with each other and increases the failure rate of the server. If you virtualize this server, you can put applications in environments separated from each other, as discussed previously.
Disadvantages of Virtualization
· Extra Costs
You may have to invest in virtualization software, and additional hardware might be required to make virtualization possible. This depends on your existing network. Many businesses have sufficient capacity to accommodate virtualization without requiring much cash. If your infrastructure is more than five years old, you have to consider an initial renewal budget.
· Software Licensing
This is becoming less of a problem as more software vendors adapt to the increased adoption of virtualization. However, it is important to check with your vendors to understand how they view software use in a virtualized environment.
· Learn the new Infrastructure
Implementing and managing a virtualized environment will require IT staff with expertise in virtualization. On the user side, a typical virtual environment will operate similarly to the non-virtual environment. There are some applications that do not adapt well to the virtualized environment.
Implementations and available tools for each level of virtualization
- Hardware virtualization - VMs, emulators
- OS level virtualization (Desktop virtualization) - Remote desktop terminals
- Application level virtualization - Runtimes (JRE/JVM, .NET), engines (games engines)
- Containerization (also OS/application level) - Docker
- Other virtualization types - Database, network, storage
What is a Hypervisor?
The hypervisor, also known as a virtual machine monitor, is a process that creates and runs virtual machines (VMs). A hypervisor allows one host computer to support multiple guest VMs by virtually sharing its resources, like memory and processing. Generally, there are two types of hypervisors. Type 1 hypervisors, called “bare metal,” run directly on the host’s hardware. Type 2 hypervisors, called “hosted,” run as a software layer on an operating system, like other computer programs.
Why use a Hypervisor?
Hypervisors make it possible to use more of a system’s available resources, and provide greater IT mobility since the guest VMs are independent of the host hardware. This means they can be easily moved between different servers.
Emulation
- Another (older) way for running one operating system on a different operating system
- Virtualization requires the underlying CPU to be the same as the one the guest was compiled for
- Emulation allows a guest to run on a different CPU
- It is necessary to translate all guest instructions from the guest CPU to the native CPU
- This is emulation, not virtualization
- Useful when the host system has one architecture and the guest was compiled for another architecture
- For example, a company replacing outdated servers with new servers containing a different CPU architecture, but still wanting to run its old applications
- Performance challenge – order of magnitude slower than native code
- Newer machines are faster than older machines, which can reduce the slowdown
- Very popular, especially in gaming, where old consoles are emulated on new ones
What are VMs?
A virtual machine (VM) is an emulation of a computer system. Put simply, it makes it possible to run what appear to be many separate computers on hardware that is actually one computer.
Benefits of VMs
- All OS resources available to apps
- Established management tools
- Established security tools
- Better known security controls
What are Containers?
With containers, instead of virtualizing the underlying computer like a virtual machine (VM), just the OS is virtualized.
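For instance, running a throwaway container with Docker (a hedged sketch; alpine is a small, publicly available Linux image):

docker pull alpine                                     # fetch a minimal Linux userland image
docker run --rm alpine echo "hello from a container"   # start the container, run one command, clean up
docker run --rm alpine uname -a                        # reports the host's kernel: only the OS userland is virtualized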
Benefits of Containers
- Reduced IT management resources
- Reduced size of snapshots
- Quicker spinning up apps
- Reduced & simplified security updates
- Less code to transfer, migrate, upload workloads
What’s the Diff: VMs vs Containers
VMs | Containers
Heavyweight | Lightweight
Limited performance | Native performance
Each VM runs in its own OS | All containers share the host OS
Hardware-level virtualization | OS virtualization
Startup time in minutes | Startup time in milliseconds
Allocates required memory | Requires less memory space
Fully isolated and hence more secure | Process-level isolation, possibly less secure
|