Wednesday, May 22, 2019

Selenium Grid + Docker + AWS EC2

This is going to be a simple (and single) entry point for setting up and running Selenium Grid inside of Docker containers on AWS EC2.

The steps are as follows:


1. Create and start EC2 instance
2. Open port 4444 (it's used by the grid) by adding the needed security rule:
    I. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/
    II. In the navigation pane, choose the Security Groups link.
    III. Select the Inbound tab, choose Edit.
    IV. In the dialog, choose Add Rule and do the following:
        - Type: Custom TCP
        - Protocol: TCP
        - Port Range: 4444
        - Source: My IP
        - Description: Selenium Grid (or any other one)
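If You prefer the command line, the same rule can be added with the AWS CLI. A sketch — the security group ID and CIDR below are placeholders, substitute Your own values:

```shell
# Allow inbound TCP 4444 (Selenium Grid) from a single IP.
# sg-0123456789abcdef0 and 203.0.113.5/32 are placeholder values --
# use Your own security group ID and public IP.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 4444 \
  --cidr 203.0.113.5/32
```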

Once the Selenium Grid is up and running, and assuming 52.222.124.100 (for example) is the instance's public-facing IP address, opening http://52.222.124.100:4444/grid/console will bring up the grid console.

Log in to the instance and install Docker:

$ sudo yum install -y docker

Start the docker service: 

$ sudo service docker start

Add ec2-user to the docker group:

$ sudo usermod -a -G docker ec2-user

Log out and log back in (close the terminal session and reconnect via SSH) so the new group membership takes effect.

Then create a Docker network and start the hub and the browser nodes:


$ docker network create grid

$ docker run -d -p 4444:4444 --net grid --name selenium-hub selenium/hub:3.11.0-bismuth

$ docker run -d --net grid -e HUB_HOST=selenium-hub -v /dev/shm:/dev/shm selenium/node-chrome:3.11.0-bismuth

$ docker run -d --net grid -e HUB_HOST=selenium-hub -v /dev/shm:/dev/shm selenium/node-firefox:3.11.0-bismuth
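To sanity-check that the hub and both nodes came up, something like the following can be run on the instance (assuming the containers were started on this host):

```shell
# List the running containers attached to the grid network
docker ps --filter network=grid

# Ask the hub whether it's ready; the 3.x hub exposes a status
# endpoint that should return JSON with "success": true
curl -s http://localhost:4444/wd/hub/status
```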

Using a docker-compose file makes this easier.

Install docker-compose:

$ sudo curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose

$ sudo chmod +x /usr/local/bin/docker-compose

In Your home directory (or whichever directory You start the grid from):

$ touch docker-compose.yaml

Paste the following content (example from SeleniumHQ):
---------------------------------------------------------------------------

# To execute this docker-compose yml file use: docker-compose -f <file_name> up
# Add the "-d" flag at the end for detached execution
version: '2'
services:
  firefox:
    image: selenium/node-firefox:3.11.0-bismuth
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub

  chrome:
    image: selenium/node-chrome:3.11.0-bismuth
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub

  hub:
    image: selenium/hub:3.11.0-bismuth
    ports:
      - "4444:4444"

---------------------------------------------------------------------------

$ docker-compose up -d (start the grid)

$ docker-compose down (stop the grid)


If You need more than one Chrome or Firefox node, just add another service entry to the docker-compose file.
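Instead of duplicating service entries, docker-compose can also scale an existing service from the command line. A sketch — the counts are just examples, and the --scale flag assumes a reasonably recent docker-compose (it was added around 1.13):

```shell
# Start the grid with three Chrome nodes and two Firefox nodes
docker-compose up -d --scale chrome=3 --scale firefox=2
```

This works here because only the hub publishes a host port; the node services don't, so multiple replicas won't collide.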

Usage from Your Selenium setup will look something like this (for Jenkins it can be configured separately):

@BeforeMethod(alwaysRun = true)
public void setupBaseTest() throws Exception {
    DesiredCapabilities capabilities = DesiredCapabilities.chrome();
    driver = new RemoteWebDriver(new URL("http://52.222.124.100:4444/wd/hub"), capabilities);
}

Note that tests connect to the hub's /wd/hub endpoint; /grid/console is only the web console page.

Monday, May 6, 2019

Jenkins Build Server + AWS EC2

There are quite a few examples on the internet of how to set up a Jenkins build server on an EC2 instance from AWS, so I won't explain every step in detail and will instead just list them.

So first You'll need an AWS account, which is remarkably easy to create. After that, we can proceed with launching a free-tier EC2 instance (t2.micro) and assigning security groups to it.

1. Settings for EC2 instance:
- Network --> Default VPC (or Your custom one if You have it already)
- Subnet --> Default subnet (or Your public subnet if You have one)
- Auto-assign Public IP should be enabled
- Before launching the instance, go to Edit security groups --> add Your security group (if You have one; otherwise go to Create Security Group --> add a name & description --> set Your VPC --> add rules for SSH, HTTP, and a Custom TCP Rule on port 8080)
- Launch

2. Connect to the EC2 instance:
- After launching the instance there's an option in the menu showing how to connect to Your instance; otherwise follow the next step:
$ ssh -i /path/my-key-pair.pem ec2-user@your-instance-public-address

3. Install and launch Jenkins
- $ sudo yum update -y (usually a recommended step for all new Amazon Linux instances)
- $ sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo (add the Jenkins repository)
- $ sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key (import the Jenkins key)
- $ sudo yum install jenkins -y
- $ sudo service jenkins start
- http://your-server-public-DNS:8080 (address of Your Jenkins server)
- $ sudo cat /var/lib/jenkins/secrets/initialAdminPassword (initial Jenkins admin password)
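The steps above can be combined into a single bootstrap script. A sketch — the repo URL, key URL, and paths are the same ones used above:

```shell
#!/bin/bash
set -e

# Update packages, add the Jenkins repo and key, then install and start Jenkins
sudo yum update -y
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
sudo yum install -y jenkins
sudo service jenkins start

# Print the initial admin password needed by the setup wizard
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
```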

After connecting to Jenkins it's recommended to set up Your password and install the needed plugins (an important one is the Amazon EC2 plugin if You are planning to connect that instance to other AWS boxes).

Friday, January 4, 2019

Chown command cheat sheet


1. Change the owner of a file

# chown [owner_name] [file_name]

Example

# chown root temp.txt


2. Change the group of a file

# chown :[group_name] [file_name]

Example

# chown :root temp.txt


3. Change both owner and the group

# chown [owner_name]:[group_name] [file_name]

Example

# chown root:root temp.txt


4. Using chown command on symbolic link file

# chown [owner_name]:[group_name] [file_symbolic_link]

Example

# chown root:root temp_symlnk

By default, chown on a symbolic link changes the owner of the file it points to; with the -h flag the link itself is changed instead

# chown -h root:root temp_symlnk

with the -R flag combined with -H, chown will traverse a symbolic link to a directory given on the command line

# chown -R -H root:root temp_symlnk


5. Change owner only if a file is owned by a particular user

# chown --from=[owner_name] [new_owner_name] [file_name]

Example

# chown --from=root jenkins temp.txt


6. Change group only if a file already belongs to a certain group

# chown --from=:[group_name] :[new_group_name] [file_name]

Example

# chown --from=:root :jenkins temp.txt


7. Copy the owner/group settings from one file to another

# chown --reference=[source_file] [destination_file]

Example

# chown --reference=settings.txt temp.txt


8. Change the owner/group of the files by traveling the directories recursively

# chown -R [user_name]:[group_name] [directory_name]/

Example

# chown -R jenkins:jenkins jenkins/


9. List all the changes made by the chown command

# chown -v -R [user_name]:[group_name] [file_name]

Example

# chown -v -R root:jenkins temp.txt

CTAL - TA Cheat Sheet (Part 2)


Specification-Based Techniques 
  • Equivalence Partitioning  - any test level - finds functional defects in the handling of various data values 
  • Boundary Value Analysis  - any test level - finds displacement or omission of boundaries, and may find cases of extra boundaries. This technique finds defects regarding the handling of the boundary values, particularly errors with less-than and greater-than logic (i.e., displacement). It can also be used to find non-functional defects, for example tolerance of load limits (e.g., system supports 10,000 concurrent users). 
  • Decision Tables - used to test the interaction between combinations of conditions - integration, system and acceptance test levels (potentially component) - finds incorrect processing based on particular combinations of conditions resulting in unexpected results. During the creation of the decision tables, defects may be found in the specification document. The most common types of defects are omissions (there is no information regarding what should actually happen in a certain situation) and contradictions. Testing may also find issues with condition combinations that are not handled or are not handled well 
  • Cause-Effect Graphing - typically used as the basis for creating decision tables - integration, system and acceptance test levels (potentially component) - find the same types of combinatorial defects as are found with decision tables. In addition, the creation of the graphs helps define the required level of detail in the test basis, and so helps improve the detail and quality of the test basis and helps the tester identify missing requirements. 
  • State Transition Testing - used to test the ability of the software to enter into and exit from defined states via valid and invalid transitions - any test level - defects include incorrect processing in the current state that is a result of the processing that occurred in a previous state, incorrect or unsupported transitions, states with no exits and the need for states or transitions that do not exist. During the creation of the state machine model, defects may be found in the specification document. The most common types of defects are omissions (there is no information regarding what should actually happen in a certain situation) and contradictions 
  • Combinatorial Testing - used when testing software with several parameters, each one with several values, which gives rise to more combinations than are feasible to test in the time allowed - usually applied to the integration, system and system integration levels - defects found with this type of testing are those related to the combined values of several parameters 
  • Use Case Testing - provides transactional, scenario-based tests that should emulate usage of the system - usually applied at the system and acceptance testing levels (can be used on integration level) - defects include mishandling of defined scenarios, missed alternate path handling, incorrect processing of the conditions presented and awkward or incorrect error reporting 
  • User Story Testing - requirements are prepared in the form of user stories which describe small functional units that can be designed, developed, tested and demonstrated in a single iteration - used for both functional testing and non-functional testing  - defects are usually functional in that the software fails to provide the specified functionality. Defects are also seen with integration issues of the functionality in the new story with the functionality that already exists. Because stories may be developed independently, performance, interface and error handling issues may be seen 
  • Domain Analysis - domain is a defined set of values - used for decision tables, equivalence partitioning and boundary value analysis to create a smaller set of tests - can be done at any level of testing but is most frequently applied at the integration and system testing levels - defects include functional problems within the domain, boundary value handling, variable interaction issues and error handling (particularly for the values that are not in a valid domain). 

CTAL - TA Cheat Sheet (Part 1)


V-Model

  • System test planning occurs concurrently with project planning, and test control continues until system test execution and closure are complete. 
  • System test analysis and design occur concurrently with requirements specification, system and architectural (high-level) design specification, and component (low-level) design specification. 
  • System test environment (e.g., test beds, test rig) implementation might start during system design, though the bulk of it typically would occur concurrently with coding and component test, with work on system test implementation activities stretching often until just days before the start of system test execution. 
  • System test execution begins when the system test entry criteria are all met (or waived), which typically means that at least component testing and often also component integration testing are complete. System test execution continues until the system test exit criteria are met. 
  • Evaluation of system test exit criteria and reporting of system test results occur throughout system test execution, generally with greater frequency and urgency as project deadlines approach. 
  • System test closure activities occur after the system test exit criteria are met and system test execution is declared complete, though they can sometimes be delayed until after acceptance testing is over and all project activities are finished. 

Involvement during Test Plan creation along with Test Manager:

  • Be sure the test plans are not limited to functional testing. All types of testing should be considered in the test plan and scheduled accordingly. For example, in addition to functional testing, the Test Analyst may be responsible for usability testing. That type of testing must also be covered in a test plan document. 
  • Review the test estimates with the Test Manager and ensure adequate time is budgeted for the procurement and validation of the testing environment. 
  • Plan for configuration testing. If multiple types of processors, operating systems, virtual machines, browsers, and various peripherals can be combined into many possible configurations, plan to apply testing techniques that will provide adequate coverage of these combinations. 
  • Plan to test the documentation. Users are provided with the software and with documentation. The documentation must be accurate to be effective. The Test Analyst must allocate time to verify the documentation and may need to work with the technical writing staff to help prepare data to be used for screen shots and video clips. 
  • Plan to test the installation procedures. Installation procedures, as well as backup and restore procedures, must be tested sufficiently. These procedures can be more critical than the software; if the software cannot be installed, it will not be used at all. This can be difficult to plan since the Test Analyst is often doing the initial testing on a system that has been pre-configured without the final installation processes in place. 
  • Plan the testing to align with the software lifecycle. Sequential execution of tasks does not fit into most schedules. Many tasks often need to be performed (at least partly) concurrently. The Test Analyst must be aware of the selected lifecycle and the expectations for involvement during the design, development and implementation of the software. This also includes allocating time for confirmation and regression testing. 
  • Allow adequate time for identifying and analysing risks with the cross-functional team. Although usually not responsible for organising the risk management sessions, the Test Analyst should expect to be involved actively in these activities. 

Quantitative data: percentage of planning activities completed, percentage of coverage attained, number of test cases that have passed or failed.


After test planning TA uses scope definition to:
  • Analyze the test basis 
  • Identify the test conditions 

Entry criteria for test analysis:
  • There is a document describing the test object that can serve as the test basis 
  • This document has passed review with reasonable results and has been updated as needed after the review 
  • There is a reasonable budget and schedule available to accomplish the remaining testing work for this test object 

Test conditions are typically identified by analysis of the test basis and the test objectives. 

Standard considerations about test conditions for TA:
  • It is usually advisable to define test conditions at differing levels of detail. Initially, high-level conditions are identified to define general targets for testing, such as “functionality of screen x”. Subsequently, more detailed conditions are identified as the basis of specific test cases, such as “screen x rejects an account number that is one digit short of the correct length”. Using this type of hierarchical approach to defining test conditions can help to ensure the coverage is sufficient for the high-level items. 
  • If product risks have been defined, then the test conditions that will be necessary to address each product risk must be identified and traced back to that risk item. 

The process of test design includes the following activities: 
  • Determine in which test areas low-level (concrete) or high-level (logical) test cases are most appropriate 
  • Determine the test case design technique(s) that provide the necessary test coverage 
  • Create test cases that exercise the identified test conditions 

When designing tests, it is important to remember the following: 
  • Some test items are better addressed by defining only the test conditions rather than going further into defining scripted tests. In this case, the test conditions should be defined to be used as a guide for the unscripted testing. 
  • The pass/fail criteria should be clearly identified. 
  • Tests should be designed to be understandable by other testers, not just the author. If the author is not the person who executes the test, other testers will need to read and understand previously specified tests in order to understand the test objectives and the relative importance of the test. 
  • Tests must also be understandable by other stakeholders such as developers, who will review the tests, and auditors, who may have to approve the tests. 
  • Tests should be designed to cover all the interactions of the software with the actors (e.g., end users, other systems), not just the interactions that occur through the user-visible interface. Inter-process communications, batch execution and other interrupts also interact with the software and can contain defects so the Test Analyst must design tests to mitigate these risks. 
  • Tests should be designed to test the interfaces between the various test objects. 

Test case design includes the identification of the following: 
  • Objective 
  • Preconditions, such as either project or localized test environment requirements and the plans for their delivery, state of the system, etc. 
  • Test data requirements (both input data for the test case as well as data that must exist in the system for the test case to be executed) 
  • Expected results 
  • Post-conditions, such as affected data, state of the system, triggers for subsequent processing, etc. 

Test work products (created during Test Design) might be affected by:
  • Project risks (what must/must not be documented) 
  • The “value added” which the documentation brings to the project 
  • Standards to be followed and/or regulations to be met 
  • Lifecycle model used (e.g., an Agile approach aims for “just enough” documentation) 
  • The requirement for traceability from the test basis through test analysis and design 

Test implementation 
includes creating automated tests, organizing tests (both manual and automated) into execution order, finalizing test data and test environments, and forming a test execution schedule, including resource allocation, to enable test case execution to begin. This also includes checking against explicit and implicit entry criteria for the test level in question and ensuring that the exit criteria for the previous steps in the process have been met. 

Five primary dimensions in which test progress is monitored:
  • Product (quality) risks 
  • Defects 
  • Tests 
  • Coverage 
  • Confidence 

Test case information can include: 
  • Test case creation status (e.g., designed, reviewed) 
  • Test case execution status (e.g., passed, failed, blocked, skipped) 
  • Test case execution information (e.g., date and time, tester name, data used) 
  • Test case execution artefacts (e.g., screen shots, accompanying logs)

The Test Analyst should be actively involved in the following risk-based testing tasks: 
  • Risk identification 
  • Risk assessment 
  • Risk mitigation 

Sample risks that might be identified in a project include: 
  • Accuracy issues with the software functionality, e.g., incorrect calculations 
  • Usability issues, e.g., insufficient keyboard shortcuts 
  • Learnability issues, e.g., lack of instructions for the user at key decision points 

Factors influencing business risk include: 
  • Frequency of use of the affected feature 
  • Business loss 
  • Potential financial, ecological or social losses or liability 
  • Civil or criminal legal sanctions 
  • Safety concerns 
  • Fines, loss of license 
  • Lack of reasonable workarounds 
  • Visibility of the feature 
  • Visibility of failure leading to negative publicity and potential image damage 
  • Loss of customers 

During the project, Test Analysts should seek to do the following: 
  • Reduce product risk by using well-designed test cases that demonstrate unambiguously whether test items pass or fail, and by participating in reviews of software artifacts such as requirements, designs, and user documentation 
  • Implement appropriate risk mitigation activities identified in the test strategy and test plan 
  • Re-evaluate known risks based on additional information gathered as the project unfolds, adjusting likelihood, impact, or both, as appropriate 
  • Recognize new risks identified by information obtained during testing 

Each future planned test cycle should be subjected to new risk analysis to take into account such factors as: 
  • Any new or significantly changed product risks 
  • Unstable or defect-prone areas discovered during the testing 
  • Risks from fixed defects 
  • Typical defects found during testing 
  • Areas that have been under-tested (low test coverage)