Thursday 21 April 2022

Unified Process (UP) Best Practices

  • Tackle high-risk and high-value requirements first
  • Constant user feedback and engagement
  • Early cohesive core architecture
  • Test early, often, and realistically
  • Apply use cases where needed
  • Do some visual modeling with UML
  • Manage requirements and scope creep
  • Manage change requests and configuration

How to persist the selected value of the select box after form submit?

<?php
// Keep the submitted value and show a message if nothing was chosen.
$srf = "";
$srfErr = "";
if (isset($_POST['submit'])) {
    if ($_POST['srf'] != "") {
        $srf = $_POST['srf'];
    } else {
        $srfErr = "This field is required.";
    }
}
?>
<form method="post">
<select name="srf" id="srf" class="form-control">
<option value="">Select</option>
<option value="1" <?php if ($srf == "1") echo "selected"; ?>>1</option>
<option value="2" <?php if ($srf == "2") echo "selected"; ?>>2</option>
<option value="3" <?php if ($srf == "3") echo "selected"; ?>>3</option>
</select>
<?php if ($srfErr != "") { ?>
    <p><b><?php echo $srfErr; ?></b></p>
<?php } ?>
<input type="submit" name="submit" value="submit" />
</form>

Unified Process Phases

Inception

  • Inception is not a requirements phase; rather, it is a feasibility phase, where just enough investigation is done to support a decision to continue or stop.
  • The life-cycle objectives of the project are stated, so that the needs of every stakeholder are considered. Scope and boundary conditions, acceptance criteria and some requirements are established.
  • Approximate vision, business case, scope, vague estimates.

Inception - Activities

  •  Formulate the scope of the project: Needs of every stakeholder, scope, boundary conditions and acceptance criteria established.
  •  Plan and prepare the business case: Define risk mitigation strategy, develop an initial project plan and identify known cost, schedule, and profitability trade-offs.
  • Synthesize candidate architecture: A candidate architecture is selected from various potential architectures.
  • Prepare the project environment

Inception - Exit criteria

  • An initial business case containing at least a clear formulation of the product vision - the core requirements - in terms of functionality, scope, performance, capacity, technology base.
  • Success criteria (example: revenue projection).
  • An initial risk assessment.
  • An estimate of the resources required to complete the elaboration phase.

Elaboration

  • An analysis is done to determine the risks, stability of vision of what the product is to become, stability of architecture and expenditure of resources. 
  • Refined vision, iterative implementation of core architecture, resolution of high risks, identification of most requirements and scope, more realistic estimates

Elaboration - Entry criteria

  • The products and artifacts described in the exit criteria of the previous phase. 
  • The plan has been approved by project management and the funding authority, and the resources required for the elaboration phase have been allocated.

Elaboration - Activities

  • Define the architecture: Project plan is defined. The process, infrastructure and development environment are described. 
  • Validate the architecture.  
  • Baseline the architecture: To provide a stable basis for the bulk of the design and implementation effort in the construction phase.

Elaboration - Exit criteria 
  • A detailed software development plan, with an updated risk assessment, a management plan, a staffing plan, a phase plan showing the number and contents of the iterations, an iteration plan, and a test plan
  • The development environment and other tools 
  • A baseline vision, in the form of a set of evaluation criteria for the final product.
  • A domain analysis model, sufficient to be able to call the corresponding architecture ‘complete’. 
  • An executable architecture baseline. 

Construction 

  • The Construction phase is a manufacturing process. It emphasizes managing resources and controlling operations to optimize costs, schedules and quality. This phase is broken into several iterations. 
  •  Iterative implementation of the remaining lower risk and easier elements, and preparation for deployment. 

Construction - Entry criteria 
  • The product and artifacts of the previous iteration. The iteration plan must state the iteration-specific goals.
  • Risks being mitigated during this iteration. 
  • Defects being fixed during the iteration. 

Construction - Activities 
  • Develop and test components: Components required to satisfy the use cases, scenarios, and other functionality for the iteration are built. Unit and integration tests are run on the components. 
  • Manage resources and control process. 
  • Assess the iteration: Satisfaction of the goal of iteration is determined.

Construction - Exit Criteria 
  • The same products and artifacts, updated, plus
  • A release description document, which captures the results of an iteration 
  • Test cases and results of the tests conducted on the products
  • An iteration plan, detailing the next iteration 
  • Objective measurable evaluation criteria for assessing the results of the next iteration(s).  

Transition 

  • The transition phase is the phase where the product is put in the hands of its end users. It involves issues of marketing, packaging, installing, configuring, supporting the user community, making corrections, etc. 
  • Beta tests, deployment. 

Transition - Entry criteria
  • The product and artifacts of the previous iteration, and in particular a software product sufficiently mature to be put into the hands of its users.

Transition - Activities  
  • Test the product deliverable in a customer environment. 
  • Fine tune the product based upon customer feedback 
  • Deliver the final product to the end user 
  • Finalize end-user support material.

Transition - Exit criteria 
  • An update of some of the previous documents, as necessary, the plan being replaced by a “post-mortem” analysis of the performance of the project relative to its original and revised success criteria; 
  • A brief inventory of the organization’s new assets as a result of this cycle.  


Wednesday 20 April 2022

ER [Entity–relationship] Model

The ER model is a popular high-level (conceptual) data model. It is an approach to designing the semantic conceptual schema of a database. The ER model allows us to describe the data involved in a real-world environment in terms of objects and their relationships, and it is widely used in database design. The ER model provides a preliminary idea of the data representation, which is later refined to achieve the final detailed design.

Important concepts/notions used in ER modeling are:

Entity 

An entity is an object in the real world, or some idea or concept, which can be distinguished from other objects.

Ex.: person, school, class, department, weather, salary, temperature etc.

Entity has independent existence. 

Entity type

Each entity belongs to an Entity type that defines the structure.

Entity Set 

It is a collection of similar entities, i.e. all entities of a given entity type.

Attribute

An attribute reflects a property of an object or entity. We have the following types of attributes:

> Simple attribute

> Composite attribute

> Single valued attribute

> Multi-valued attribute

> Derived attribute

> Stored attribute

Key

A key is an attribute of an entity type whose value can uniquely identify an entity in the entity set.

Relationship

The association between entities is known as relationship.

Domain of an attribute

The set of possible values of an attribute is known as the domain of that attribute.



Degree of a Relationship

Degree is the number of entity types that participate in a relationship.
If there are two entity types involved, it is a binary relationship type.
e.g.: Manager manages Employee

If there are three entity types involved, it is a ternary relationship type.

Cardinality of a relationship

Relationships are rarely one-to-one. 
For example, a manager usually manages more than one employee. This is described by the cardinality of the relationship, for which there are four possible categories: one-to-one (1:1), one-to-many (1:N), many-to-one (N:1), and many-to-many (M:N).



The Agile Principles

1. Satisfy the customer through early and continuous delivery of valuable software.

2. Welcome changing requirements, even late in development.

3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter time scale.

4. Business people and developers must work together daily throughout the project.

5. Build projects around motivated individuals.

6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

7. Working software is the primary measure of progress.

8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

9. Continuous attention to technical excellence and good design enhances agility.

10. Simplicity – the art of maximizing the amount of work NOT done – is essential.

11. The best architectures, requirements, and designs emerge from self-organizing teams.

12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Data Model

Data Model is a collection of concepts that can be used to describe the structure of the database. Structure means data types, relationships, constraints etc. DBMS allows a user to define the data to be stored in terms of a data model. 

1.  High-level data models 

2.  Low-level data models

3. Representational or Implementation data models 

High-level Data Models

High-level data models use a set of concepts to describe the database, where the descriptions are close to the way users view the data. High-level data models are also known as conceptual models. In conceptual data modeling we use concepts like entity, attribute, relationship, etc.

Low-level Data Models

Low-level data models give details about how the data is stored in the computer (storage-level details).

Representational/Implementation Data Models

This is in between the high-level and low-level data models. Here we represent the concepts described in the conceptual model using specific structures like networks, objects, tables, trees, etc. Ex.: Relational Model, Network Model, Hierarchical Model, Object Model, Object-Relational Model, etc.

Relational Model

The central data description construct in this model is a relation, which can be thought of as a set of records. 

Schema

Description of data in terms of a data model is called a schema. A relation schema specifies the name of the relation, fields, type etc. 

e.g.: Student (sid: string; name: string; age: integer). Every row in the relation follows the schema of the relation.
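
As an illustration (not part of the original notes), a relation schema like the one above can be rendered as a table definition in a relational DBMS. A minimal sketch in PHP/MySQL, where the connection details and column sizes are assumptions:

<?php
// Hypothetical connection details, for illustration only.
$pdo = new PDO("mysql:host=localhost;dbname=testdb", "root", "");

// One possible DDL rendering of Student(sid: string; name: string; age: integer).
$pdo->exec("CREATE TABLE IF NOT EXISTS Student (
    sid  VARCHAR(20) PRIMARY KEY,  -- key attribute: uniquely identifies a student
    name VARCHAR(100) NOT NULL,
    age  INT
)");

// Every row inserted must follow the relation schema.
$stmt = $pdo->prepare("INSERT INTO Student (sid, name, age) VALUES (?, ?, ?)");
$stmt->execute(["S001", "Tom", 21]);
?>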

The following are some important representational data models (DBMS Specific) 

1. Network Model

The basic structure is a record; the relationships are captured using links. The database can be seen as an arbitrary network of records connected by links. Ex.: GE’s Integrated Data Store (IDS), early 1960s.

2. Hierarchical Model 

The records containing data are organized as a collection of trees. Ex.: IBM’s IMS (Information Management System), late 1960s.

3. Relational Model (early 1970s)

Data and relationships are captured as tables and keys. Ex.: Oracle, IBM’s DB2, MySQL, Informix, Sybase, MS Access, Ingres, etc. The basic storage structure is a record (a row of a table).

4. Object Data Model

Objects created through object-oriented programs can be stored in the database. Ex.: ObjectStore.

5. Object Relational Model

Objects can be stored in tables. Ex.: Oracle, Informix.

Database Schema

The description of a database is called the database schema.

Three-Schema Architecture:

A database can be described using three different levels of abstractions. Description at each level can be defined by a schema. For each abstraction we focus on one of the specific issues such as user views, concepts, storage etc. 

1. External schema: Used to describe the database at external level. Also described in terms of the data model of that DBMS. This allows data access to be customized at the level of individual users/groups/applications. Any external schema has one or more views and relations from the conceptual schema. This schema design is guided by end user requirements. 

2. Conceptual schema (logical schema): Describes the stored data in terms of the data model of that DBMS. In an RDBMS, the conceptual schema describes all relations that are stored in the database. Arriving at a good choice of relations, fields and constraints is known as conceptual database design. 

3. Physical schema: Describes the physical storage strategy for the database. 
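
As an illustrative sketch (not part of the original notes), an external schema is commonly realized as views defined over the relations of the conceptual schema; the table, view and column names below are assumptions:

<?php
$pdo = new PDO("mysql:host=localhost;dbname=testdb", "root", "");

// Conceptual schema: a stored base relation (hypothetical example).
$pdo->exec("CREATE TABLE IF NOT EXISTS Employee (
    empid  INT PRIMARY KEY,
    name   VARCHAR(100),
    dept   VARCHAR(50),
    salary DECIMAL(10,2)
)");

// External schema: a view customized for one user group,
// exposing only the columns that group is allowed to see.
$pdo->exec("CREATE OR REPLACE VIEW EmployeeDirectory AS
    SELECT empid, name, dept FROM Employee");
?>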

Tuesday 19 April 2022

Fatal error: Cannot redeclare Function() in PHP

Fatal error: Cannot redeclare count_words() (previously declared) in PHP
Solution
1. Don't declare a function inside a loop; declare it before the loop.
or
2. Include the file (wherein that function is defined) only once. So instead of
include("function.php");
use
include_once("function.php");
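
A minimal sketch of the second case, assuming a helper file named function.php that defines count_words():

<?php
// function.php is assumed to contain:
// function count_words($text) { return str_word_count($text); }

include_once("function.php");   // first include loads the function
include_once("function.php");   // duplicate include is ignored, so no redeclare error

// With plain include() the second line above would trigger:
// Fatal error: Cannot redeclare count_words()

echo count_words("hello world");
?>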


Monday 18 April 2022

Integrate TinyMCE Editor in PHP

1. Download the latest version of TinyMCE SDK 

    https://www.tiny.cloud/get-tiny/

2. Extract the folder [tinymce] and put in your application root folder.

eg: C:\xampp\htdocs\tinymcedemo\tinymce [tinymcedemo is the root folder]

3. Create a php file "test.php" and place the below code 

<!DOCTYPE html>
<html>
<head>
  <script src="tinymce/tinymce.min.js"></script>
  <script type="text/javascript">
  tinymce.init({
    selector: '#mytextarea'
  });
  </script>
</head>
<body>
  <h1>TinyMCE </h1>
  <form method="post">
    <textarea id="mytextarea"></textarea>
  </form>
</body>
</html>
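
To actually submit the editor content to PHP, the textarea also needs a name attribute and the form needs a submit button. A minimal sketch (the field name and the echo are illustrative assumptions):

  <form method="post">
    <textarea id="mytextarea" name="mytextarea"></textarea>
    <input type="submit" name="submit" value="Save" />
  </form>

<?php
// The submitted value is the HTML produced by TinyMCE,
// so escape or sanitize it before storing or printing it.
if (isset($_POST['submit'])) {
    echo htmlspecialchars($_POST['mytextarea']);
}
?>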

How to disable phpinfo() in a hosting environment?

Log in to the server WHM as the root user.

Edit the php.ini file and add the line below:

disable_functions = phpinfo

Restart the web server.
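
To verify the setting, a quick check from any PHP script (ini_get() returns the configured value of disable_functions):

<?php
// Should list phpinfo among the disabled functions.
echo ini_get('disable_functions');

// Calling phpinfo() itself will now fail
// (a warning or a fatal error, depending on the PHP version).
?>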

Friday 15 April 2022

Fatal error: Cannot pass parameter by reference in PHP

$stmt->bind_param("ssss",$userid,$theme,$title,"test");

The error is with 'test' in the bind_param call.

All parameters to bind_param must be passed by reference. A string literal like "test" is a value, not a variable, so it cannot be passed by reference.

You can fix this by creating a variable and passing that as a parameter instead:

$testvar="test";

$stmt->bind_param("ssss",$userid,$theme,$title,$testvar);

Enable or disable the "Powered by TinyMCE" branding.

The TinyMCE branding property allows you to enable or disable the "Powered by TinyMCE" branding.

tinymce.init({
  selector: 'textarea',  
  branding: false
});

Thursday 14 November 2019

Adding Excel export button hides/removes the Page Length Dropdown in Data Tables

Add the line below to the JavaScript table initialization:

"dom": '<"top"Bf>rt<"bottom"lip><"clear">'

Complete Code 
<script>
        $(document).ready(function () {
            $('#example').DataTable( {
                "processing": true,
                "lengthMenu": [[25, 50, -1], [25, 50, "All"]],
                // B = Buttons, f = filter box, r = processing, t = table,
                // l = length (page size) dropdown, i = info, p = pagination
                "dom": '<"top"Bf>rt<"bottom"lip><"clear">',
                // Assumes the DataTables Buttons extension is loaded
                "buttons": [ "excelHtml5" ],
                "serverSide": true,
                "ajax": "server_processing.php"
            } );
        });
</script>

Friday 27 May 2016

Cloud Computing

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. (by NIST)

“an Internet based computing paradigm that delivers on-demand software and hardware computing capability as a ‘service’ through virtualization where the end user is completely abstracted from the computing resources”

3-4-5 Rule

3 : Service Models
4 : Deployment Models
5 : Essential Characteristics

3 : Service Models

1. IaaS (Infrastructure as a Service)

The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls)





2. PaaS (Platform as a Service)

The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations


Windows Azure, Google App Engine, Hadoop, etc. are some well-known PaaS platforms

3. SaaS (Software as a Service)

The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings



Motto: “No Software”
Ex: Salesforce.com for CRM, Google Docs for document sharing, and web email systems like Gmail, Hotmail, and Yahoo! Mail


Cloud Service Models 

Utility Computing


  • Utility computing is the packaging of computing resources, such as computation, storage and applications, as a metered service similar to traditional public utility (such as electricity, water, natural gas, or the telephone network).
  • This model has the advantage of a low or no initial cost to acquire computer resources; instead, computational resources are essentially rented
  • You get connected to the utility companies’ “public” infrastructure
  • You get these utility services on‐demand
  • And you pay‐as‐you use (metered service)
  • Cloud computing is the most recent technology innovation which has made utility computing a reality!

Distributed Computing

  • A distributed computing system is basically a collection of processors interconnected by a communication network in which each processor has its own local memory and other peripherals, and the communication between any two processors of the system takes place by message passing over the communication network
  • Loosely coupled systems
  • Examples: Cluster, Grid, P2P, Cloud computing, IOT
  • Uses middleware, which enables computers to coordinate their activities and to share the resources of the system.
  • Single integrated computing facility from user perspective
  • Can include heterogeneous computations where some nodes may perform a lot more computation, some perform very little computation and a few others may perform specialized functionality (like processing visual graphics)
  • Efficient, scalable programs can be designed on such systems so that independent processes are scheduled on different nodes and communicate only occasionally to exchange results
  • Cloud computing is also a specialized form of distributed computing, where distributed SaaS applications utilize thin clients (such as browsers) which offload computation to cloud-hosted servers (and services).
  • Additionally, cloud computing vendors providing IaaS and PaaS solutions may internally use distributed computing to provide highly scalable, cost-effective infrastructure and platforms.

Cluster Computing


  • A computer cluster is a group of loosely or tightly coupled computers that work together closely so that in many respects it can be viewed as though it were a single computer
  • Better performance, availability and cost-effectiveness than a single computer with the same capabilities
Characteristics:
  • Loosely / tightly coupled computers
  • Centralized Job management & scheduling
  • Coined in 1987

Grid Computing:

  • Grid is a collection of a large number of loosely coupled, heterogeneous, and geographically dispersed systems in different administrative domains
  • Generally owned by multiple organizations and coordinated to allow them to solve a common problem
Characteristics
  • Loosely coupled computers
  • Distributed Job management & scheduling
  • Originated (early 1990s)
Vision: To enable computing to be delivered as a utility
This vision is most often presented with an analogy to electrical power grids, from which it derives the name “grid”
  • The key emphasis of grid computing was to enable sharing of computing resources or forming a pool of shared resources that can then be delivered to users.
  • Focus of grid computing was limited to enabling shared use of resources with common protocols for access
  • A particular emphasis was given to handling heterogeneous infrastructure, typically a university data center.
  • "coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations.” -Ian Foster & Steve Tucker -> "Anatomy of Grid”
  • There are also very specific differences between a grid computing infrastructure and the features one should expect from a cloud computing infrastructure
  • Grid (three-point checklist):
    • Coordinates resources that are not subject to centralized control
    • Using standard, open, general-purpose protocols and interfaces
    • To deliver nontrivial quality of service

Advantages -Distributed computing

Inherently Distributed applications:
  • Several applications are inherently distributed in nature and require distributed computing system for their realization
Information Sharing among Distributed Users:
  • In a distributed computing system, information generated by one of the users can be easily and efficiently shared by the users working at other nodes of the system .
Resource Sharing:
  • Sharing of software resources such as software libraries and databases, as well as hardware resources such as printers and hard disks, can also be done in a very effective way among all the computers and the users of a single distributed computing system.
Extensibility and Incremental Growth:
  • It is possible to gradually extend the power and functionality of a distributed computing system by simply adding additional resources (both hardware and software) to the system as and when the need arises .
  • Incremental growth is a very attractive feature because for most existing and proposed applications it is practically impossible to predict future demands of the system.
  • Addition of new resources to an existing system can be performed without significant disruption of the normal functioning of the system.
Shorter Response Times and Higher Throughput:
  • The multiple processors of the distributed computing system can be utilized properly for providing shorter response times and higher throughput than a single processor centralized system.
  • Another method often used in distributed computing systems for achieving better overall performance is to distribute the load more evenly among the multiple processors by moving the jobs from currently overloaded processors to lightly loaded ones
Higher Reliability:
  • Reliability refers to the degree of tolerance against errors and component failures in a system.
  • A reliable system prevents loss of information even in the event of component failures
  • An important aspect of reliability is availability, which refers to the fraction of time for which a system is available for use.
Better Flexibility in Meeting User’s Needs:
  • A distributed computing system may have a pool of different types of computers, in which case the most appropriate one can be selected for processing a user’s job depending on the nature of the job.
Better Price-Performance Ratio:
  • With the rapidly increasing power and falling prices of microprocessors, combined with the increasing speed of communication networks, distributed computing systems potentially have a much better price-performance ratio than a single large centralized system.

Ubiquitous/ Pervasive computing


  • It is an advanced computing concept where computing is made to appear everywhere and anywhere.
  • Ubiquitous computing can occur using any device, in any location, and in any format. A user interacts with the computer, which can exist in many different forms -laptop, tablets, terminals, phones, etc.
  • Move beyond desktop machine
  • Ex: digital audio players, radio-frequency identification tags, PDAs, smartphones, GPS, and interactive whiteboards
  • Present or noticeable in every part of a thing or place
  • Information processing is embedded in everyday activities and objects

Monday 5 May 2014

Installing bedtools on centOS

1. Download bedtools from
https://github.com/arq5x/bedtools2/releases

2. Save it to a local folder (eg: softwares folder)
3. Untar it using below command
    
# tar xvzf bedtools-2.19.1.tar.gz

4. You can find all the bedtools functions in the bedtools-2.19.1/bin folder
5. You have to set the PATH environment variable for bedtools so that the OS can locate the bedtools programs, even if they are not in the current directory

Setting PATH environmental variable for bedtools

1. Open /etc/profile.d
2. Create a document and name it as bedtools.sh
3. Add the below line in bedtools.sh
    export PATH=$PATH:/softwares/bedtools-2.19.1/bin
4. Save it and close bedtools.sh
5. If you want to load the environment variables within bedtools.sh without having to restart the machine, you can use the source command as in
     # source bedtools.sh

Install SRA toolkit on CentOS

1. Download SRA toolkit from
http://eutils.ncbi.nih.gov/Traces/sra/?view=software

2. For centos 64 bit, we have to download sratoolkit.2.3.5-2-centos_linux64.tar.gz
3. Save it to a local folder (eg: softwares folder)
4. Untar it using below command
    
# tar xvzf sratoolkit.2.3.5-2-centos_linux64.tar.gz

5. You can find all the toolkit functions in the sratoolkit.2.3.5-2-centos_linux64/bin folder
6. You have to set the PATH environment variable for the toolkit so that the OS can locate the fastq-dump program, even if it is not in the current directory

Setting PATH environmental variable for SRA Toolkit

1. Open /etc/profile.d
2. Create a document and name it as fastq-dump.sh
3. Add the below line in fastq-dump.sh
    export PATH=$PATH:/softwares/sratoolkit.2.3.5-2-centos_linux64/bin
4. Save it and close fastq-dump.sh
5. If you want to load the environment variables within fastq-dump.sh without having to restart the machine, you can use the source command as in
     # source fastq-dump.sh

Thursday 24 April 2014

Add image/logo in browser tab for website

How to add image/logo in browser tab for website?

Place the below code under <head> tag in html

<link rel="shortcut icon" href="path to your logo">

Monday 24 February 2014

Installing xampp 1.8.3 on cent os 6.4

1. Download Xampp 1.8.3 for linux
2. Copy "xampp-linux-x64-1.8.3-3-installer.run" into Computer/Filesystem/opt folder
3. Open terminal from opt folder
4. Type below commands

    [root@localhost opt]# chmod +x xampp-linux-x64-1.8.3-3-installer.run
    [root@localhost opt]# ./xampp-linux-x64-1.8.3-3-installer.run

5. Window installation wizard will appear

6. Complete the installation process

Friday 21 February 2014

Fatal server error: Cannot establish any listening sockets - Make sure an X server isn't already running

Issue
I have installed vncserver, and when I tried to start it I got the error below:
"WARNING: The first attempt to start Xvnc failed, possibly because the font catalog is not properly configured. Attempting to determine an appropriate font path for this system and restart Xvnc using that font path ... Could not start Xvnc.
_XSERVTransSocketUNIXCreateListener: ...SocketCreateListener() failed _XSERVTransMakeAllCOTSServerListeners: server already running Warning: Xalloc: requesting unpleasantly large amount of memory: 0 bytes.
Fatal server error: Cannot establish any listening sockets - Make sure an X server isn't already running _XSERVTransSocketUNIXCreateListener: ...SocketCreateListener() failed _XSERVTransMakeAllCOTSServerListeners: server already running Warning: Xalloc: requesting unpleasantly large amount of memory: 0 bytes.
Fatal server error: Cannot establish any listening sockets - Make sure an X server isn't already running"
Solution
Terminate all VNC processes. You can find the process number by running the following command:
#ps -e | grep vnc
11028   ?        00:00:00 Xvnc
kill the process using
#skill -p <pid>
#skill -p 11028 

Reinstall vncserver

Tiger VNC on Centos 6.3 fails, font catalog not properly configured

Issue
When I try to set up the Tiger VNC server on CentOS 6.3, I get the following error message when I run the command vncserver:
WARNING: The first attempt to start Xvnc failed, possibly because the font catalog is not properly configured. Attempting to determine an appropriate font path for this system and restart Xvnc using that font path ... Could not start Xvnc.
/usr/bin/Xvnc: symbol lookup error: /usr/bin/Xvnc: undefined symbol: pixman_composite_trapezoids
/usr/bin/Xvnc: symbol lookup error: /usr/bin/Xvnc: undefined symbol: pixman_composite_trapezoids

Solution
To resolve the above issue, run the below command in terminal

#yum install pixman pixman-devel libXfont

Installing VNCserver in centOS 6.4


1.    Installing GNOME Desktop
Install a window manager in order to get a full-featured GUI desktop
[root@localhost ~] #yum groupinstall "X Window System" "Desktop"
2.    Installing TightVNC Server
For CentOS 6, the server is: tigervnc-server not:  vnc-server
[root@localhost ~] #yum install tigervnc-server
3.    Create a user and a vnc login
[root@localhost ~]# useradd tom 
[root@ localhost ~]# passwd tom 
Changing password for user tom  . 
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully. 
4.    Login to new user and set vnc password
[root@localhost ~]# su - tom
[tom@localhost ~]$ vncpasswd 
Password: 
Verify: 
[tom@localhost ~]$ ls .vnc
passwd
[tom@localhost ~]$ exit
logout
5.    Start vnc
Create Xstartup Script by starting the vncserver
[root@localhost ~]#  service vncserver start
Starting VNC server: 1:tom
New 'localhost.localdomain:1 (tom)' desktop is localhost.localdomain:1

Starting applications specified in /home/tom/.vnc/xstartup
Log file is /home/tom/.vnc/localhost.localdomain:1.log 
[root@localhost ~]# su -l tom
[tom@localhost ~]$ ls .vnc
localhost.localdomain:1.log  localhost.localdomain:1.pid  passwd  xstartup
[tom@localhost ~]$ exit
logout
6.    Edit the server configuration file
Edit /etc/sysconfig/vncservers, and add the following to the end of the file.
VNCSERVERS="1:tom"
VNCSERVERARGS[1]="-geometry 1024x768"
For multiple users repeat the steps 3 to 5 and add the user to the VNCSERVERS list and add a VNCSERVERARGS[x] entry.
So for two users:
VNCSERVERS="1:tom 2:john" 
VNCSERVERARGS[1]="-geometry 1024x768" 
VNCSERVERARGS[2]="-geometry 640x480" 
7.    Open the port by editing the iptables
stop iptables

[root@localhost ~]# service iptables stop

open /etc/sysconfig/iptables

add the below lines before COMMIT

-A INPUT -p tcp -m state --state NEW -m tcp -m multiport --dports 5901:5910,6001:6010 -j ACCEPT

start iptables

[root@localhost ~]# service iptables start

Install Vnc Viewer for remote accessing the desktop

1.    Download VNC Client

Now go to your Windows or Linux machine, download the VNC Viewer client, and install it on your system to access the CentOS server desktop.

2.    Connect to Remote Desktop(centos server) Using Client

After you have installed the VNC Viewer client, open it and you will get a screen similar to the one below. Enter the VNC server IP address (the IP address of the CentOS server on which vncserver is installed) along with the VNC display ID (i.e. 1) for user tom.

Uninstalling vncserver

[root@localhost ~]# service vncserver stop
[root@localhost~]#yum remove tigervnc-server