Friday, February 22, 2019

Tutorial 02

1. What is the need for VCS? 
  • Version control systems (VCSs) are a category of software tools that help a software team manage changes to source code over time.
  • A VCS records every modification, so developers can compare earlier versions, recover from mistakes, and work on the same code base without overwriting each other's changes.

2. Differentiate the three models of VCSs, stating their pros and cons 
  • Local version control systems

  • Everything is on your own computer.
  • Cannot be used for collaborative software development.

A local version control system keeps track of file versions within the local system only. This approach is very common and simple, but it is also error prone: the chance of accidentally writing to the wrong file is high, and the history is lost if the machine fails.

  • Centralized version control systems

  • In a centralized version control system the version history is stored on a central server; when developers want to change certain files, they pull them from that central repository to their own computers.
  • Can be used for collaborative software development.
  • Access to the code base (and any file locking) is controlled by the server. When a developer checks their code back in, the lock is released so the file is available for others to check out.
  • The most obvious drawback is the single point of failure that the centralized server represents.
  • Distributed version control systems

  • In software development, distributed version control is a form of version control where the complete code base, including its full history, is mirrored on every developer's computer.
  • This allows branching and merging to be managed automatically, increases the speed of most operations, improves the ability to work offline, and does not rely on a single location for backups.

                     • No single point of failure.
                     • Clients don't just check out the latest snapshot of the files: they fully mirror the repository.
                     • If the server these systems were collaborating through dies, any of the client repositories can be copied back up to restore it.


3. Git and GitHub, are they the same or different? Discuss with facts.

  • GIT

                  

  • Git is a distributed version-control system for tracking changes in source code during software development. 
  • It is designed for coordinating work among programmers, but it can be used to track changes in any set of files. Its goals include speed, data integrity, and support for distributed, non-linear workflows.
  • Git development began in April 2005
  • GITHUB

                   1. GitHub is a web-based hosting service for version control using Git.
                   2. It is mostly used for computer code. It offers all of the distributed version control and source code management functionality of Git, as well as adding its own features.

It provides access control and several collaboration features such as bug tracking, feature requests, task management, and wikis for every project.
4. Compare and contrast the Git commands, commit and push 
  • Git is a distributed version control system; the difference is that commit records changes in your local repository, whereas push sends those changes up to a remote repository.
  • git commit records your changes in the local repository. git push updates the remote repository with your local changes.
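The difference can be seen in a terminal. A minimal sketch (the path, file name, and commit message are made up; the push lines are shown as comments because they require a real remote URL):

```shell
# Create a throwaway repository (hypothetical path).
rm -rf /tmp/commit-demo && mkdir -p /tmp/commit-demo && cd /tmp/commit-demo
git init -q
git config user.email "dev@example.com"   # identity needed to commit
git config user.name  "Dev"

echo "hello" > readme.txt
git add readme.txt
git commit -q -m "Add readme"   # recorded ONLY in the local repository

git log --oneline               # the commit is visible locally...
# ...but nothing has left this machine yet. Publishing needs a remote:
# git remote add origin https://example.com/repo.git
# git push -u origin master     # uploads local commits to the remote
```

Until `git push` runs, collaborators cannot see the commit; that is the whole contrast.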
            


5. Discuss the use of staging area and Git directory
  • Git directory
       The Git directory is where Git stores the metadata and object database for your project. A bare repository (a Git directory with no working tree) is typically used for exchanging histories with others, by pushing into it and fetching from it.
  • Staging area
     Git makes it easy to specify exactly which changes should be committed. To accomplish this, Git uses an intermediate area called the staging area (also known as the index). You can add files to the staging area one at a time, then commit them together.
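The staging area can be demonstrated with two files, only one of which is staged (the path and file names are made up):

```shell
rm -rf /tmp/stage-demo && mkdir -p /tmp/stage-demo && cd /tmp/stage-demo
git init -q
git config user.email "dev@example.com"
git config user.name  "Dev"

echo "finished work"  > done.txt
echo "half-done work" > wip.txt

git add done.txt        # only done.txt enters the staging area
git status --short      # 'A  done.txt' is staged; '?? wip.txt' is untracked
git commit -q -m "Commit only what was staged"

git ls-files            # the commit contains done.txt but not wip.txt
```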


6. Explain the collaboration workflow of Git, with example 
  • Gitflow Workflow is a Git workflow design that was first published and made popular by Vincent Driessen at nvie.
  • The Gitflow Workflow defines a strict branching model designed around the project release: long-lived master and develop branches, plus short-lived feature, release, and hotfix branches.
  • This workflow doesn't add any new concepts or commands beyond what's required for the Feature Branch Workflow.
example 
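A minimal Gitflow-style sketch (the path is made up; the branch names follow the convention described above):

```shell
# Throwaway repository for the demonstration.
rm -rf /tmp/gitflow-demo && mkdir -p /tmp/gitflow-demo && cd /tmp/gitflow-demo
git init -q
git config user.email "dev@example.com"
git config user.name  "Dev"
git commit -q --allow-empty -m "initial release"

git checkout -q -b develop           # long-lived integration branch
git checkout -q -b feature/login     # feature branch cut from develop
git commit -q --allow-empty -m "implement login"

git checkout -q develop              # merge the finished feature back
git merge -q --no-ff -m "merge feature/login" feature/login
git branch -d feature/login          # feature branches are short-lived

git log --oneline develop
```

`--no-ff` forces a merge commit, so the history keeps a record of the feature branch even after it is deleted.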
                                       

7. Discuss the benefits of CDNs 
  1. Performance: reduced latency and minimized packet loss
  2. Scalability: automatically scale up for traffic spikes
  3. SEO improvement: benefit from the Google SEO ranking factor
  4. Reliability: automatic redundancy between edge servers
  5. Lower costs: save bandwidth with your web host
  6. Security: KeyCDN mitigates DDoS attacks on edge servers



1. Your server load will decrease:
Because strategically placed servers form the backbone of the network, companies can increase their capacity and the number of concurrent users they can handle. Essentially, the content is spread out across several servers, as opposed to offloading it all onto one large server.

2. Content delivery will become faster:
Due to higher reliability, operators can deliver high-quality content with a high level of service, low network and server loads, and thus lower costs. Moreover, popular libraries such as jQuery are ubiquitous on the web, so there is a high probability that a visitor to a particular page has already downloaded the same file from the Google CDN on another site. In that case the file is already cached by the browser and the user won't need to download it again.

3. Segmenting your audience becomes easy:
CDNs can deliver different content to different users depending on the kind of device requesting the content. They are capable of detecting the type of mobile devices and can deliver a device-specific version of the content.

4. Storage and Security:
CDNs offer secure storage capacity for content such as videos for enterprises that need it, as well as archiving and enhanced data backup services. CDNs can secure content through Digital Rights Management and limit access through user authentication.

8. How do CDNs differ from web hosting servers? 
  • Web Hosting is used to host your website on a server and let users access it over the internet. A content delivery network is about speeding up the access/delivery of your website’s assets to those users.
  • Traditional web hosting would deliver 100% of your content to the user. If they are located across the world, the user still must wait for the data to be retrieved from where your web server is located. A CDN takes a majority of your static and dynamic content and serves it from across the globe, decreasing download times. Most times, the closer the CDN server is to the web visitor, the faster assets will load for them.
  • Web Hosting normally refers to one server. A content delivery network refers to a global network of edge servers which distributes your content from a multi-host environment.



9. Identify free and commercial CDNs 

Commercial CDNs
Many large websites use commercial CDNs like Akamai Technologies to cache their web pages around the world. A website that uses a commercial CDN works like this: the first time a page is requested, by anyone, it is built on the origin web server and also cached on the CDN server. When another customer comes to that same page, the CDN is first checked to determine whether the cache is up to date. If it is, the CDN delivers the page; otherwise, the CDN requests it from the origin server again and caches that fresh copy.
A commercial CDN is a very useful tool for a large website that gets millions of page views, but it might not be cost effective for smaller websites.
Popular libraries commonly served from such CDNs include:
  • Chrome Frame
  • Dojo Toolkit
  • Ext JS
  • jQuery
  • jQuery UI
  • MooTools
  • Prototype

Free CDNs
While there are a number of fantastic premium CDN solutions to choose from, there are also many great free CDNs that can help decrease the costs of your next project. Most likely you are already using some of them without even knowing it. Some free CDNs are listed below.
  • Google CDN
  • Microsoft Ajax CDN
  • Yandex CDN
  • jsDelivr
  • cdnjs
  • jQuery CDN

10. Discuss the requirements for virtualization 
  • Virtualization refers to the creation of a virtual resource such as a server, desktop, operating system, file, storage device or network.
  • The main goal of virtualization is to manage workloads by radically transforming traditional computing to make it more scalable.

Problems in implementation environments that virtualization helps to avoid:

  • Wrong configurations
  • Different platforms
  • Version mismatches 
  • Framework differences

VIRTUAL MACHINES

  • Hardware virtualization
  • OS-level virtualization
  • Application-level virtualization
  • Containerization
  • Other virtualization types

VIRTUAL MACHINE RUNTIMES

11. Discuss and compare the pros and cons of different virtualization techniques in different levels 

Guest Operating System Virtualization

Guest OS virtualization is perhaps the easiest concept to understand. In this scenario the physical host computer system runs a standard unmodified operating system such as Windows, Linux, Unix or MacOS X. Running on this operating system is a virtualization application which executes in much the same way as any other application such as a word processor or spreadsheet would run on the system.


Kernel Level Virtualization


Under kernel level virtualization the host operating system runs on a specially modified kernel which contains extensions designed to manage and control multiple virtual machines each containing a guest operating system. Unlike shared kernel virtualization each guest runs its own kernel, although similar restrictions apply in that the guest operating systems must have been compiled for the same hardware as the kernel in which they are running. Examples of kernel level virtualization technologies include User Mode Linux (UML) and Kernel-based Virtual Machine (KVM).

Shared Kernel Virtualization

Shared kernel virtualization (also known as system level or operating system virtualization) takes advantage of the architectural design of Linux and UNIX based operating systems. In order to understand how shared kernel virtualization works it helps to first understand the two main components of Linux or UNIX operating systems. At the core of the operating system is the kernel. The kernel, in simple terms, handles all the interactions between the operating system and the physical hardware. The second key component is the root file system which contains all the libraries, files and utilities necessary for the operating system to function. Under shared kernel virtualization the virtual guest systems each have their own root file system but share the kernel of the host operating system.



12. Identify popular implementations and available tools for each level of visualization 

Visualization tools can help speed up the process of comprehending large and complex data:
  • Watson Analytics
  • FusionCharts Suite XT
  • QlikView
  • Infogram
  • Tibco Spotfire
  • Tableau
  • Datawrapper
              

13. What is the hypervisor and what is the role of it? 
A hypervisor is computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems.


14. How is emulation different from a VM? 
Virtualization vs. Emulation


Virtualization is a technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system. This includes splitting a single physical infrastructure into multiple virtual servers, letting it appear as though each virtual machine is running on its own dedicated hardware, and allowing each of them to be rebooted independently.

Emulation
         Emulators imitate hardware entirely in software, translating the guest's instructions rather than relying on the CPU being able to run that code directly. Virtualization, in contrast, runs most guest instructions natively and redirects only privileged operations to a hypervisor controlling the virtual container.

Virtual machines
         A virtual machine is an emulation of a complete computer system, created and managed by a hypervisor. Unlike a full emulator, it normally targets the same CPU architecture as the host, so guest code runs at near-native speed.


15. Compare and contrast the VMs and containers/dockers, indicating their advantages and disadvantages 


VM
A virtual machine (VM) is an emulation of a computer system. Put simply, it makes it possible to run what appear to be many separate computers on hardware that is actually one computer.

The operating systems (“OS”) and their applications share hardware resources from a single host server, or from a pool of host servers. Each VM requires its own underlying OS, and the hardware is virtualized. A hypervisor, or a virtual machine monitor, is software, firmware, or hardware that creates and runs VMs. It sits between the hardware and the virtual machine and is necessary to virtualize the server.

Since the advent of affordable virtualization technology and cloud computing services, IT departments large and small have embraced virtual machines (VMs) as a way to lower costs and increase efficiencies

Popular VM Providers
  • VMware vSphere
  • VirtualBox
  • Xen
  • Hyper-V
Benefits of VMs
  • Established security tools
  • Better known security controls


Containers
With containers, instead of virtualizing the underlying computer like a virtual machine (VM), just the OS is virtualized.

Containers sit on top of a physical server and its host OS — typically Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Sharing OS resources such as libraries significantly reduces the need to reproduce the operating system code, and means that a server can run multiple workloads with a single operating system installation. Containers are thus exceptionally light — they are only megabytes in size and take just seconds to start. By comparison, VMs take minutes to boot and are an order of magnitude larger than an equivalent container.


In contrast to VMs, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program. What this means in practice is you can put two to three times as many applications on a single server with containers as you can with a VM. In addition, with containers you can create a portable, consistent operating environment for development, testing, and deployment.

Benefits of Containers
  • Reduced & simplified security updates
  • Less code to transfer, migrate, upload workloads
Popular Container Providers
  • CGManager
  • Docker
  • Windows Server Containers

Friday, February 15, 2019

Tutorial 01 – Introduction to the frameworks


   A programming paradigm is a style, or "way", of programming.

1. Compare and contrast declarative and imperative paradigms. 

      Declarative paradigm
        The term declarative programming is often used as an opposite to imperative programming. Shortly speaking, this paradigm allows you to declare WHAT you want to be done. This style of developing implies description of the logic of computation, but not its control flow. By describing only the result you want to get from the application, you can minimize unwanted side effects. Developers describe the results they want to get without an explicit description of the required steps.

      Imperative paradigm

                Imperative programming is probably the most widely spread paradigm. The most popular examples of imperative programming languages are C++, Java and PHP. The main characteristics of such languages are direct assignments, common data structures, and global variables. A classic illustration is calculating a factorial, spelling out each computational step, e.g. with recursion.
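The contrast can be sketched in code (here in Python; the original C++ snippet is not available, so this is an illustrative reconstruction):

```python
# Sketch: the same task written imperatively and declaratively.
nums = [1, 2, 3, 4, 5]

# Imperative: spell out HOW, with explicit control flow and mutation.
evens_squared = []
for n in nums:
    if n % 2 == 0:
        evens_squared.append(n * n)

# Declarative: state WHAT the result is; no explicit loop bookkeeping.
evens_squared_decl = [n * n for n in nums if n % 2 == 0]

print(evens_squared)       # [4, 16]
print(evens_squared_decl)  # [4, 16]

# The factorial mentioned above, written imperatively with recursion:
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120
```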


                 


2. Discuss the difference between procedural programming and functional programming.  
  procedural programming
   

 Procedural programming uses a list of instructions to tell the computer what to do step by step. Procedural programming relies on procedures, also known as routines. A procedure contains a series of computational steps to be carried out. Procedural programming is also referred to as imperative or structured programming

  • The output of a routine does not always have a direct correlation with the input.
  • Everything is done in a specific order.
  • Execution of a routine may have side effects.
  • Tends to emphasize implementing solutions in a linear fashion.

Functional programming 

Functional programming is an approach to problem solving that treats every computation as a mathematical function. The outputs of a function rely only on the values that are provided as input to the function and don't depend on a particular series of steps that precede the function.

  • Often recursive.
  • Always returns the same output for a given input.
  • Order of evaluation is usually undefined.
  • Must be stateless. i.e. No operation can have side effects.
  • Good fit for parallel execution
  • Tends to emphasize a divide and conquer approach.
  • May have the feature of Lazy Evaluation.
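The bullet points above can be illustrated with one task solved in both styles (a minimal Python sketch; function names are made up):

```python
# Sketch: summing squares in a procedural vs a functional style.
def sum_squares_procedural(nums):
    total = 0
    for n in nums:       # explicit ordered steps mutating local state
        total += n * n
    return total

def sum_squares_functional(nums):
    # a pure function: output depends only on input, no side effects
    return sum(map(lambda n: n * n, nums))

print(sum_squares_procedural([1, 2, 3]))  # 14
print(sum_squares_functional([1, 2, 3]))  # 14
```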
                

3. Explain the Lambda calculus and Lambda expressions in functional programming. 

      Lambda calculus

Lambda calculus is a framework developed by Alonzo Church in the 1930s to study computations with functions. Function creation: Church introduced the notation λx.E to denote a function in which 'x' is a formal argument and 'E' is the function body.

      Lambda expressions
In computer programming, an anonymous function (function literal, lambda abstraction, or lambda expression) is a function definition that is not bound to an identifier. If a function is only used once, or a limited number of times, an anonymous function may be syntactically lighter than using a named function.
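A short Python sketch of the same idea (λx. x*x written as a lambda expression):

```python
# Sketch: an anonymous (lambda) function vs a named function.
def square_named(x):
    return x * x

square = lambda x: x * x      # λx. x*x, here bound to a name for reuse

print(square_named(4))        # 16
print(square(4))              # 16

# More typically a lambda is used inline, never bound to an identifier:
words = ["bb", "a", "ccc"]
print(sorted(words, key=lambda s: len(s)))  # ['a', 'bb', 'ccc']
```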

4. What is meant by “no side-effects” and “referential transparency” in functional programming? 

      no side-effects

In computer science, an operation, function or expression is said to have a side effect if it modifies some state variable value outside its local environment, that is to say has an observable effect besides returning a value to the invoker of the operation.


    referential transparency

Referential transparency is an oft-touted property of functional languages, which makes it easier to reason about the behavior of programs. I don't think there is any formal definition, but it usually means that an expression always evaluates to the same result in any context.
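Both terms can be shown in a few lines of Python (function names are made up for the sketch):

```python
# Sketch: a side-effecting function vs a referentially transparent one.
counter = 0

def add_with_side_effect(x):
    global counter
    counter += x      # side effect: mutates state outside its local environment
    return counter

def add_pure(a, b):
    return a + b      # result depends only on the arguments

# Same argument, different results: NOT referentially transparent.
print(add_with_side_effect(5))  # 5
print(add_with_side_effect(5))  # 10

# Same arguments always give the same result, so any call such as
# add_pure(2, 3) can be replaced by its value without changing behavior.
print(add_pure(2, 3))  # 5
print(add_pure(2, 3))  # 5
```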


5. Discuss the key features of Object Oriented Programming. 

Object-oriented programming is one of the newest and most powerful paradigms. Object-oriented programming refers to a programming methodology based on objects, in place of just procedures and functions. These objects are organized into classes, which allow individual objects to be grouped together. Modern programming languages including Java, PHP and C++ are object-oriented languages.

There are three major features in object-oriented programming: encapsulation, inheritance and polymorphism.

Encapsulation - Mechanism of wrapping the data (variables) and the code acting on the data (methods) together as a single unit; it hides the details of the implementation of an object.

Inheritance - Allows a class to acquire the properties and behavior of an existing class, supporting reuse.

Abstraction - Focuses on the essential characteristics of an object, relative to the perspective of the viewer.

Polymorphism - Allows objects of different classes to respond to the same message (method call), each with its own implementation.
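The features above can be sketched together in a few lines of Python (the class names are invented for illustration):

```python
# Sketch of encapsulation, inheritance and polymorphism.
class Shape:
    def __init__(self, name):
        self._name = name            # encapsulated: internal by convention

    def area(self):                  # common interface for all shapes
        raise NotImplementedError

class Square(Shape):                 # inheritance: a Square is-a Shape
    def __init__(self, side):
        super().__init__("square")
        self._side = side

    def area(self):
        return self._side * self._side

class Circle(Shape):
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius

    def area(self):
        return 3.14159 * self._radius * self._radius

# Polymorphism: one call site, a different implementation per class.
for shape in (Square(2), Circle(1)):
    print(shape._name, shape.area())
```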


6. How the event-driven programming is different from other programming paradigms? 

In computer programming, event-driven programming is a programming paradigm in which the flow of the program is determined by events such as user actions , sensor outputs, or messages from other programs or threads. Event-driven programming is the dominant paradigm used in graphical user interfaces and other applications (e.g., JavaScript ) that are centered on performing certain actions in response to user input.
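The inversion of control flow can be sketched with a tiny dispatcher (event names and handlers below are made up):

```python
# Minimal sketch of event-driven flow: code runs only when events fire.
handlers = {}

def on(event, handler):
    """Register a handler; nothing runs yet."""
    handlers.setdefault(event, []).append(handler)

def emit(event, *args):
    """Fire an event: which code runs is decided by which events occur."""
    for handler in handlers.get(event, []):
        handler(*args)

log = []
on("click", lambda pos: log.append(f"clicked at {pos}"))
on("keypress", lambda key: log.append(f"pressed {key}"))

emit("click", (10, 20))   # only now does the click handler run
emit("keypress", "q")
print(log)                # ['clicked at (10, 20)', 'pressed q']
```

Unlike a procedural program, there is no fixed top-to-bottom sequence here: the `emit` calls (user actions, in a real GUI) determine the order of execution.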



7. Compare and contrast the Compiled languages, Scripting languages, and Markup languages.

Languages can be categorized according to the way they are processed and executed

      Compiled languages

            A compiled language is a formal language designed to allow programmers to communicate instructions to a computer. Its source code is translated ahead of time by a compiler into CPU instructions that transform the input data into the desired output, and those instructions are then executed directly. Example - C, C++, COBOL

      


Scripting languages

A scripting language is a subset of programming languages used to mediate between programs in order to generate data. The main feature of a scripting language is that it can guide other programs, much like a script gives an actor their cue. It is a language that is meant to be interpreted rather than compiled. Example - JavaScript, PHP, Python

 Markup languages

A markup language is used to control the presentation and structure of data; it annotates content rather than encoding computation, so it is not considered a programming language. Example - HTML, XML
       

 


8. Discuss the role of the virtual run time machines.

In computing, a virtual machine is an emulation of a computer system. Virtual machines are based on computer architectures and provide the functionality of a physical computer; their implementations may involve specialized hardware, software, or a combination. A runtime (process) virtual machine, such as the Java Virtual Machine, runs a single program in a platform-independent environment by executing portable byte code on whatever host it is installed on.


9. Find how the JS code is executed (What is the run time? where do you find the interpreter?) 

In a compiled language, the source code is passed through a program called a compiler, which translates it into byte code or machine code that the machine can execute. In contrast, JavaScript has no ahead-of-time compilation step. Instead, an interpreter inside the browser's JavaScript engine (for example V8 in Chrome or SpiderMonkey in Firefox) reads over the JavaScript code, interprets each line, and runs it; modern engines also just-in-time compile frequently executed code for speed.


10. Explain how the output of an HTML document is rendered, indicating the tools used to display the output. 

An HTML document is rendered by a web browser: the browser's rendering engine parses the HTML tags and displays the formatted result on screen.

Using HTML tags
ex: header tags
     div tag
     href attributes
     links

     <!DOCTYPE html>
     <html>
     <head>
     </head>
     <body>
          <h1>hello world</h1>
          <h3>Ruwin Tharanga</h3>
     </body>
     </html>


Output

hello world
Ruwin Tharanga




11. Identify different types of CASE tools, Workbenches, and Environments for different types of software systems (web-based systems, mobile systems, IoT systems, etc.). 

CASE tools

Computer Aided Software Engineering (CASE) tools are used throughout the engineering life cycle of the software systems

CASE tools are a set of software application programs which are used to automate SDLC activities. CASE tools are used by software project managers, analysts and engineers to develop software systems.
There are number of CASE tools available to simplify various stages of Software Development Life Cycle such as Analysis tools, Design tools, Project management tools, Database Management tools, Documentation tools are to name a few.
Use of CASE tools accelerates the development of project to produce desired result and helps to uncover flaws before moving ahead with next stage in software development.

12. Discuss the difference between framework, library, and plugin, giving some examples. 


      framework

A framework, or software framework, is a platform for developing software applications. It provides a foundation on which software developers can build programs for a specific platform. A framework may also include code libraries, a compiler, and other programs used in the software development process. Examples include Angular, Spring and .NET.

    library

Libraries provide an API which the coder can use to develop features when writing code.

At development time:
  • Add the library to the project (source code files, modules, packages, executables)
  • Call the necessary functions/methods using the given packages/modules/classes

At run time:
  • The library will be called by the code

    plugin

Plugins are packages of code that extend the core functionality of an application. For example, WordPress plugins are made up of PHP code and other assets such as images, CSS, and JavaScript.









