1. What is the need for VCS?
- Version control systems (VCS) are a category of software tools that help a software team manage changes to source code over time.
- A VCS is needed because it keeps a complete history of every change, lets developers revert a file or the whole project to an earlier state, and allows several developers to work on the same code base without overwriting each other's work.
2. Differentiate the three models of VCSs, stating their pros and cons
- Local version control systems
A local version control system keeps track of file versions within the local system. This approach is common and simple, but it is also error prone: the chances of accidentally writing to the wrong file are higher, and there is no straightforward way to collaborate with other developers.
- Centralized version control systems
- A centralized version control system stores the version history on a central server. When a developer wants to make changes to certain files, they pull the files from that central repository to their own computer.
- Can be used for collaborative software development: access to the code base and locking are controlled by the server. When the developer checks their code back in, the lock is released so it is available for others to check out.
- The most obvious drawback is the single point of failure that the centralized server represents.
- Distributed Version Control Systems
- In software development, distributed version control is a form of version control where the complete code base is mirrored on every developer's computer.
- This allows branching and merging to be managed automatically, increases the speed of most operations, improves the ability to work offline, and does not rely on a single location for backups.
- No single point of failure
- Clients don't just check out the latest snapshot of the files: they fully mirror the repository
- If the server these systems were collaborating through dies, any of the client repositories can be copied back to the server to restore it
3. Git and GitHub, are they same or different? Discuss with facts.
- GIT
- Git is a distributed version-control system for tracking changes in source code during software development.
- It is designed for coordinating work among programmers, but it can be used to track changes in any set of files. Its goals include speed, data integrity, and support for distributed, non-linear workflows.
- Git development began in April 2005
- GITHUB
- GitHub is a web-based hosting service for version control using Git.
- It is mostly used for computer code. It offers all of the distributed version control and source code management functionality of Git, as well as adding its own features.
- It provides access control and several collaboration features such as bug tracking, feature requests, task management, and wikis for every project.
- In short, they are different: Git is the version control tool itself, which works entirely on your own machine, while GitHub is an online service that hosts Git repositories and adds collaboration features on top.
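As a small, hypothetical illustration of the relationship (the repository URL is only a placeholder): Git does the version tracking locally, and GitHub simply hosts the remote copy that you push to.
```
# link the local Git repository to a remote hosted on GitHub (placeholder URL)
git remote add origin https://github.com/<user>/<repo>.git

# upload the local history to the GitHub-hosted remote
git push -u origin main
```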
4. Compare and contrast the Git commands, commit and push
- Git is a distributed version control system; the key difference is that commit records changes to your local repository, whereas push sends those changes up to a remote repository.
- git commit records your changes in the local repository. git push updates the remote repository with your local commits.
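A minimal sketch of the two commands in sequence (assuming a remote named origin and a branch named main already exist):
```
# record the staged changes in the local repository only
git commit -m "Describe the change"

# upload the new local commits to the remote repository
git push origin main
```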
5. Discuss the use of staging area and Git directory
- Git directory: the .git directory is where Git stores the metadata and object database for your project; it is what is copied when you clone a repository. A Git directory with no working tree attached is called a bare repository, and is typically used for exchanging histories with others by pushing into it and fetching from it.
- Staging area: a file in the Git directory (also called the index) that stores information about what will go into your next commit. Changes are placed in it with git add and recorded permanently with git commit.
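A minimal sketch of how a change moves from the working directory, into the staging area, and finally into the Git directory (file and project names are just examples):
```
git init myproject            # creates the .git directory (the Git directory)
cd myproject
echo "hello" > readme.txt     # change a file in the working directory
git add readme.txt            # snapshot the file into the staging area (index)
git status                    # readme.txt now shows as "staged"
git commit -m "Add readme"    # store the staged snapshot permanently in the Git directory
```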
6. Explain the collaboration workflow of Git, with example
- Gitflow Workflow is a Git workflow design that was first published and made popular by Vincent Driessen at nvie.
- The Gitflow Workflow defines a strict branching model designed around the project release.
- This workflow doesn't add any new concepts or commands beyond what's required for the Feature Branch Workflow.
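A minimal sketch of the Gitflow branch flow (branch names follow the usual develop/feature/release convention and are only examples; the repository and its remote are assumed to exist):
```
# start a feature branch off develop
git checkout develop
git checkout -b feature/login-form

# ...work, then commit the feature...
git commit -am "Implement login form"

# merge the finished feature back into develop and share it
git checkout develop
git merge feature/login-form
git push origin develop

# when develop is ready, cut a release branch
git checkout -b release/1.0 develop
```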
7. Discuss the benefits of CDNs
- Performance: reduced latency and minimized packet loss
- Scalability: automatically scale up for traffic spikes
- SEO Improvement: benefit from the Google SEO ranking factor
- Reliability: automatic redundancy between edge servers
- Lower Costs: save bandwidth with your web host
- Security: CDNs can mitigate DDoS attacks on edge servers
1. Your server load will decrease:
Because strategically placed servers form the backbone of the network, companies get an increase in capacity and in the number of concurrent users they can handle. Essentially, the content is spread out across several servers, as opposed to being offloaded onto one large server.
2. Content delivery will become faster:
Due to higher reliability, operators can deliver high-quality content with a high level of service, low network server loads, and thus lower costs. Moreover, libraries such as jQuery are ubiquitous on the web, so there is a high probability that a visitor has already downloaded the same file from a public CDN (for example the Google CDN) on another site. In that case the file is already cached by the browser and does not need to be downloaded again (a quick way to check this is sketched after this list).
3. Segmenting your audience becomes easy:
CDNs can deliver different content to different users depending on the kind of device requesting the content. They are capable of detecting the type of mobile devices and can deliver a device-specific version of the content.
4. Storage and Security:
CDNs offer secure storage capacity for content such as videos for enterprises that need it, as well as archiving and enhanced data backup services. CDNs can secure content through Digital Rights Management and limit access through user authentication.
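One way to see CDN caching in action is to inspect the response headers of a file served from a public CDN (the URL below is only an example, and the exact header names vary by provider):
```
# request only the headers of a jQuery build served from the Google Hosted Libraries CDN
curl -I https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js

# look for caching-related headers such as cache-control, age, or expires in the output
```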
8. How CDNs differ from web hosting servers?
- Web Hosting is used to host your website on a server and let users access it over the internet. A content delivery network is about speeding up the access/delivery of your website’s assets to those users.
- Traditional web hosting would deliver 100% of your content to the user. If they are located across the world, the user still must wait for the data to be retrieved from where your web server is located. A CDN takes a majority of your static and dynamic content and serves it from across the globe, decreasing download times. Most times, the closer the CDN server is to the web visitor, the faster assets will load for them.
- Web Hosting normally refers to one server. A content delivery network refers to a global network of edge servers which distributes your content from a multi-host environment.
9. Identify free and commercial CDNs
Commercial CDNs
Many large websites use commercial CDNs like Akamai Technologies to cache their web pages around the world. A website that uses a commercial CDN works the same way. The first time a page is requested, by anyone, it is built from the web server. But then it is also cached on the CDN server. Then when another customer comes to that same page, first the CDN is checked to determine if the cache is up-to-date. If it is, the CDN delivers it, otherwise, it requests it from the server again and caches that copy.
A commercial CDN is a very useful tool for a large website that gets millions of page views, but it might not be cost effective for smaller websites.
Free CDNs
While there are a number of fantastic premium CDN solutions to choose from, there are also many great free (open source) CDNs that can help decrease the costs of your next project. Most likely you are already using some of them without even knowing it. Some free CDNs are listed below.
- Google CDN
- Microsoft Ajax CDN
- Yandex CDN
- jsDelivr
- cdnjs
- jQuery CDN
Libraries commonly served from these free CDNs include:
- Chrome Frame
- Dojo Toolkit
- Ext JS
- jQuery
- jQuery UI
- MooTools
- Prototype
10. Discuss the requirements for virtualization
- Virtualization refers to the creation of a virtual resource such as a server, desktop, operating system, file, storage or network.
- The main goal of virtualization is to manage workloads by radically transforming traditional computing to make it more scalable.
Virtualization also helps teams avoid common environment problems such as:
- Wrong configurations
- Different platforms
- Version mismatches
- Frameworks
VIRTUAL MACHINES
- Hardware virtualization
- OS level virtualization
- Application level virtualization
- Containerization
- Other virtualization types
11. Discuss and compare the pros and cons of different virtualization techniques in different levels
Guest Operating System Virtualization
Guest OS virtualization is perhaps the easiest concept to understand. In this scenario the physical host computer system runs a standard unmodified operating system such as Windows, Linux, Unix or MacOS X. Running on this operating system is a virtualization application which executes in much the same way as any other application such as a word processor or spreadsheet would run on the system.
Kernel Level Virtualization
Under kernel level virtualization the host operating system runs on a specially modified kernel which contains extensions designed to manage and control multiple virtual machines each containing a guest operating system. Unlike shared kernel virtualization each guest runs its own kernel, although similar restrictions apply in that the guest operating systems must have been compiled for the same hardware as the kernel in which they are running. Examples of kernel level virtualization technologies include User Mode Linux (UML) and Kernel-based Virtual Machine (KVM).
Shared Kernel Virtualization
Shared kernel virtualization (also known as system level or operating system virtualization) takes advantage of the architectural design of Linux and UNIX based operating systems. In order to understand how shared kernel virtualization works it helps to first understand the two main components of Linux or UNIX operating systems. At the core of the operating system is the kernel. The kernel, in simple terms, handles all the interactions between the operating system and the physical hardware. The second key component is the root file system which contains all the libraries, files and utilities necessary for the operating system to function. Under shared kernel virtualization the virtual guest systems each have their own root file system but share the kernel of the host operating system.
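Relating to the kernel-level virtualization described above: on a Linux host you can quickly check whether KVM is available. This is only a sketch and assumes an x86 machine:
```
# count CPU flags that indicate hardware virtualization support (vmx = Intel, svm = AMD)
egrep -c '(vmx|svm)' /proc/cpuinfo

# check whether the KVM kernel modules are loaded
lsmod | grep kvm
```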
12. Identify popular implementations and available tools for each level of visualization
Visualization tools can help speed up the process of comprehending large and complex data:
- Watson Analytics
- FusionCharts Suite XT
- QlikView
- Infogram
- Tibco Spotfire
- Tableau
- Datawrapper
13. What is the hypervisor and what is the role of it?
A hypervisor is computer software, firmware or hardware that creates and runs virtual machines.
A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each individual virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems.
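A quick way to see the hypervisor's presence from inside a Linux guest (a sketch, assuming the guest runs systemd):
```
# prints the detected virtualization technology, e.g. kvm, vmware, oracle, or none
systemd-detect-virt
```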
14. How does emulation differ from a VM?
Virtualization vs. Emulation
Virtualization is a technology that allows you to create multiple simulated environments or dedicated resources from a single physical hardware system. This includes splitting a single physical infrastructure into multiple virtual servers, making it appear as though each virtual machine is running on its own dedicated hardware, and allowing each of them to be rebooted independently.
Emulation
Emulators imitate hardware entirely in software, without relying on the CPU being able to run the guest code directly; every guest instruction is translated, so an emulator can run software built for a different hardware architecture, but at a significant cost in speed.
Virtual machines
A virtual machine runs most guest code directly on the host CPU and redirects only privileged operations to a hypervisor controlling the virtual environment, so it is much faster than emulation, but the guest must be built for the same architecture as the host.
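The difference can be seen with QEMU, which can run either as a pure emulator or on top of the KVM hypervisor (the disk image name is a placeholder and must already exist):
```
# pure emulation: every guest instruction is translated in software
qemu-system-x86_64 -m 1024 -hda disk.img

# hardware-assisted virtualization: guest code runs directly on the host CPU via KVM
qemu-system-x86_64 -enable-kvm -m 1024 -hda disk.img
```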
15. Compare and contrast the VMs and containers/dockers, indicating their advantages and disadvantages
VM
A virtual machine (VM) is an emulation of a computer system. Put simply, it makes it possible to run what appear to be many separate computers on hardware that is actually one computer.
The operating systems (“OS”) and their applications share hardware resources from a single host server, or from a pool of host servers. Each VM requires its own underlying OS, and the hardware is virtualized. A hypervisor, or a virtual machine monitor, is software, firmware, or hardware that creates and runs VMs. It sits between the hardware and the virtual machine and is necessary to virtualize the server.
Since the advent of affordable virtualization technology and cloud computing services, IT departments large and small have embraced virtual machines (VMs) as a way to lower costs and increase efficiencies
Popular VM Providers
- VMware vSphere
- VirtualBox
- Xen
- Hyper-V
Benefits of VMs
- Strong isolation: each VM has its own kernel and operating system
- Different operating systems (for example Linux and Windows) can run side by side on the same host
- Mature tooling for snapshots, migration and backups
Containers
With containers, instead of virtualizing the underlying computer like a virtual machine (VM), just the OS is virtualized.
Containers sit on top of a physical server and its host OS — typically Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Sharing OS resources such as libraries significantly reduces the need to reproduce the operating system code, and means that a server can run multiple workloads with a single operating system installation. Containers are thus exceptionally light — they are only megabytes in size and take just seconds to start. Compared to containers, VMs take minutes to run and are an order of magnitude larger than an equivalent container.
In contrast to VMs, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program. In practice, this means you can put two to three times as many applications on a single server with containers as you can with a VM. In addition, with containers you can create a portable, consistent operating environment for development, testing, and deployment.
Benefits of Containers
- Reduced & simplified security updates
- Less code to transfer, migrate, upload workloads
Popular Container Providers
- CGManager
- Docker
- Windows Server Containers
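A minimal sketch of how lightweight containers are in practice (assumes Docker is installed and the nginx image can be pulled from a registry):
```
# start an nginx web server in a container; it shares the host kernel and starts in seconds
docker run -d --name web -p 8080:80 nginx

# list the running container
docker ps --filter name=web

# stop and remove it when done
docker rm -f web
```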