Wednesday, December 16, 2009

EDI - Schema Validation error while developing

Today I was working with a few EDI schemas. On trying to validate one in VS 2005 for a BizTalk project, I got this error:
..\X12_BatchSchema.xsd: error BEC2004: Object reference not set to an instance of an object.
..\X12_BatchSchema.xsd: error BEC2004: Validate Schema failed for file: .
..\X12_BatchSchema.xsd: error BEC2004: Validate Instance failed for schema X12_BatchSchema.xsd, file: .
Component invocation succeeded.


I tried to debug the issue and, after a lot of experimenting, removed the property schema that the EDI schema referenced. The error disappeared and validation succeeded. I am not sure why the error occurred; it looks like the validation component cannot recognise the schema properly when a property schema is present in the project.

Tip
So during development, do your property promotion only after validating your modified EDI schema.

Monday, November 30, 2009

Adding our own Linux startup scripts

Do we need to start something when a Linux system boots?
It's not a service... but we need this command to run at every startup...

Here is a small trick that astonished me: I had not learnt it for years and missed it exactly when I needed it.

Let us take a sample case: we might need to start an SVN daemon on the system.
#svnserve -d -r /srv/repositories

We need to run the above command automatically on every start-up, so we don't have to start the daemon manually.

The simple way is to add it alongside the other startup scripts. First find which runlevel the system normally runs in.

[root@sf03 ~]# runlevel
N 3
[root@sf03 ~]#

Our server runs in runlevel 3, so let's take that as the example.
The server runs Fedora Linux 10.

The startup scripts for runlevel 3 reside in the directory /etc/rc.d/rc3.d/
The scripts for runlevel 5 will be at /etc/rc.d/rc5.d/

The directory contains shell scripts that run one by one in ascending (lexical) order.

The last script that runs is S99local, which has content similar to this:
[root@sf03 ~]# cat /etc/rc.d/rc3.d/S99local
#!/bin/sh

#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local
[root@sf03 ~]#


Use the vi editor to append the startup command we need to this script.

Example:

[root@sf03 ~]# cat /etc/rc.d/rc3.d/S99local
#!/bin/sh

#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.


touch /var/lock/subsys/local

# Start SVN Server at startup


svnserve -d -r /srv/repositories


[root@sf03 ~]#

Restart the server and verify that the script ran.
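Since the rc scripts are executed in plain lexical order, the two-digit prefix is what controls the sequencing; a quick sanity check of that ordering (the script names here are illustrative):

```shell
#!/bin/sh
# init runs the S* scripts sorted by name, so S10 runs before S55 before S99;
# S99local therefore runs last.
printf '%s\n' S99local S10network S55sshd | sort
```

The output lists S10network, then S55sshd, then S99local, confirming why the "99" prefix makes the local script run after everything else.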

The Canonical Data Model

We have used the Canonical Data Model (http://www.eaipatterns.com/CanonicalDataModel.html) pattern for my client. It minimises the dependencies between the integration applications that share a data format.

I love the pattern of creating canonical schemas, since you end up with a single definition of each entity throughout the enterprise.
Take an Order as an example: you define a single XML schema that says everything about an order with respect to the company. The billing system is interested in the customer and financial information from the order to execute its business process; the inventory system is interested in the item information and item counts to check availability; and so on. All these details can be obtained from a single canonical Order schema.
Designing the canonical schema
Since BizTalk and other EAI tools work well with XML, the schema is an XML schema. It contains two parts: a Header and a Body.
The Header holds all the information about the business document (e.g. an order). We at EMI call this the enterprise header.

  1. Subject - Represents the action or type of the business document (for example Create.Order, Update.Order or Highvalue.Order)
  2. Version - The document version, which changes whenever you add, modify or delete an element or attribute in the XML schema
  3. Source system - Which system provided the data for the canonical message
  4. Unique ID - To identify the message
  5. Batch - Batch information, if you are handling messages in batches

What more? Add as much information as might be of interest to your organisation, for example a unique document number based on the source system. The enterprise header should be the same across all your canonical schemas: there is one enterprise header for the whole organisation.

The Body part contains the common business document (say, Order). Some companies have a separate department/team that controls what these documents look like; if you are lucky enough to have such a department in place, your canonical schema's body is ready. Otherwise you should define it yourself. Some good ways to identify what should go into a canonical schema:

  1. Identify the information for your schema from the source systems (note: more than one system may provide the same business document) and consolidate the structure.
  2. An easier route is master data. Identify the Master Data Management system in your enterprise; you can quickly create a lot of canonical schemas from it (e.g. customer, employee, product).
  3. Follow international standards: you can refer to the EDI standards to create the body of your canonical schema.

Give meaningful names in your canonical schemas.

Always populate the header only in the middleware, and remove it before sending to the target system, since most target systems expect their own format.
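A minimal sketch of what such an envelope could look like (the element names, namespace and values here are purely illustrative, not a standard):

```xml
<!-- Illustrative canonical Order: one enterprise-wide Header plus a Body -->
<Order xmlns="http://example.org/canonical/order">
  <Header>
    <Subject>Create.Order</Subject>
    <Version>1.0</Version>
    <SourceSystem>WebShop</SourceSystem>
    <UniqueID>ORD-2009-000123</UniqueID>
    <Batch>42</Batch>
  </Header>
  <Body>
    <!-- the common Order definition agreed across the enterprise -->
  </Body>
</Order>
```

The Header block stays identical across every canonical schema; only the Body changes per business document.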

So what's the advantage?
1. Transformation is simple.
2. It helps developers understand the business entities easily.
3. Adding a new subscriber takes less time to commission.
4. Meaningful names in the schema help align IT with the business.

Keep the php-pear up-to-date

PHP has evolved a lot, and when we need add-on libraries we opt for PEAR packages or PECL extensions to serve the purpose.

Recently, on one of our CentOS 5.2 servers, we were about to install PHPUnit to run unit tests.

Unfortunately PHPUnit was not installed on it.

The website gave the following commands to install it:

pear channel-discover pear.phpunit.de


and

pear install phpunit/PHPUnit


But the installation failed... it was odd to understand why.

The real cause was that the PEAR module had not been upgraded to the latest version, so the new standard packages could not be installed with it.

It would be better to run
pear upgrade pear
before starting any PEAR installations. Keep PEAR up-to-date so it works with the latest library packages.

Friday, October 30, 2009

Integrated Project Tracking Tools

Project management is an art in software development. Too many requirements, bugs and more fall into a project, with parallel releases on top; things become hectic to manage with separate tools.

  1. Commits go in after code freeze
  2. Commits land in crucial areas
  3. Bugs are added and the count jumps past the accepted limit
When we have tools like SVN, CVS or any other version control system, we can track the changes in them. We have tools like WebSVN, FishEye and more to see the SVN commits, and Bugzilla and others to track the bugs.

But things are tough when we need to look into too many tools to get the final data for management.

Integrating these tools and getting all the data in one place is the better solution.

Let us look into the existing tools for this:
1. Trac - a cool Python-based lightweight project management tool
2. Redmine - a RoR-based project management tool
3. Indefero - a PHP-based project management tool

All of them integrate well with version control systems, provide RSS/Atom feeds, and support multiple projects. They help to track bugs and manage releases with features and sprints.

The above are self-hosted applications that can be downloaded and configured to integrate with our environment's tools and serve our network.

We also have hosted providers:
sourceforge.net - Open source projects support
code.google.com - Open source projects support
indefero.net - Open source and private projects support
kenai.com - Open source projects support
and more............

Each tool has its own pros and cons, but all of them help in keeping control of the project from planning to delivery and on to post-delivery support.

Thursday, October 22, 2009

Web SVN Repository Browser

The demands on a source control system in a development environment keep increasing: they are no longer just versioning systems, they need to do more...

A developer using SVN has many options in the IDE to work with SVN, like diffing between revisions, comparing and browsing history, etc. But will the IDE cover the complete usage requirements of SVN?

How about a configuration manager or a project manager looking into the code base for some information: do they need to check out the code and use an IDE?

Our previously covered USVN gives a few options to browse through the code, but it supports viewing only the latest revision. In scenarios like this we need more tools.

Now we are about to explore WebSVN, a tool to browse the repository at different revisions, get an RSS feed notification when a new check-in happens, and also tar and archive any branch of the repository we need.

Lets look into it.

WebSVN is provided by Tigris, the well-known SVN tool provider.

What do we need to install WebSVN?
1. A PHP-capable web server (I prefer Fedora Linux as it can install all dependencies)
2. PECL and PEAR support to install few modules required by PHP
3. SVN

Installing WebSVN
Log in as root or use sudo to perform the yum installation

#yum install websvn

It installs all dependencies with WebSVN.

Making WebSVN accessible for external world.

Make the WebSVN installation directory a subdirectory of the existing web server's document root.
#ln -s /usr/share/websvn /var/www/html

Edit the config.php

add the following line before the LOOK AND FEEL Section in the config file.

$config->addRepository('NameToDisplay', 'URL (e.g. http://path/to/rep)', 'group', 'username', 'password');

For Example
$config->addRepository('HR App', 'http://svnserver.local/repository/hrm/', NULL, 'admin', 'admin');
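If all repositories live under one parent directory, WebSVN's config.php can also register them in one go with parentPath (the path below is illustrative):

```php
// Registers every repository found under /srv/svn automatically,
// so new repositories appear without further config edits.
$config->parentPath('/srv/svn');
```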

Access the matching URL and we should see WebSVN working.



We can add more projects/repositories by adding similar config lines as explained above. Each repository then gets its own project details view.


The RSS feeds can be subscribed to, so new check-ins and repository changes arrive as RSS updates.

WebSVN also allows making tarballs and downloading repositories, with a few more settings in config.php.


WebSVN solves problems like version comparison, change notifications and more.

To conclude, it is a good utility for SVN, with a few drawbacks:
1. Adding a repository requires a config file change; it would be better if we could do it from the front end.
2. It has no authentication system, so once a repository is added, everyone who has access to WebSVN can see all repositories.

Soon we will look into more tools similar to this.

Wednesday, September 23, 2009

Zip - pipeline component

Recently I got a requirement from my client to integrate BizTalk with the Amazon cloud; basically we have to send a notification message to an FTP location in Amazon.
As Amazon charges on a pay-per-usage basis, we had a requirement to compress the file before sending it to the Amazon location, to reduce the cost.

I searched for a pipeline component, but most of them come at a cost (http://www.nsoftware.com/products/biztalk/default.aspx). So I decided to create my own, with the help of the open-source .NET library available at http://www.icsharpcode.net/OpenSource/SharpZipLib/
About the component

ZipFile is a pipeline component that archives the file in the encode stage of the send pipeline. It is a generalized component which can be used wherever required.
The zip file name is based on the adapter configuration.
File name inside the archive: if the "OutboundTransportLocation" context property (http://schemas.microsoft.com/BizTalk/2003/system-properties) has a value, the file name is extracted from it and the entry gets the same name as the zip file; otherwise the MessageID property is used as the file name inside the zip.
The file extension for the file inside the zip is configurable via "FileExtension".
Steps

On receiving the message, the component will:

  • Pass the body of the message (which is a stream) and the message name to the zip function
  • Zip the stream, set the name of the zipped entry, and return it as a memory stream
  • Use the memory stream as the new message body and return the message

Code snippet




public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    try
    {
        // Name of the file inside the zip starts here
        String MessageId = "";
        String Extension = "." + _FileExtension;

        // Context.Read may return null when the property is absent,
        // so guard before calling ToString()
        object transportLocation = pInMsg.Context.Read("OutboundTransportLocation",
            "http://schemas.microsoft.com/BizTalk/2003/system-properties");
        MessageId = (transportLocation == null) ? "" : transportLocation.ToString();

        if (MessageId == "")
        {
            MessageId = pInMsg.MessageID + Extension;
        }
        else
        {
            try
            {
                // Derive the file name from OutboundTransportLocation
                MessageId = MessageId.Remove(0, MessageId.LastIndexOf(@"/") + 1);
                MessageId = MessageId.Remove(MessageId.LastIndexOf(@"."));
                MessageId = MessageId + Extension;
            }
            catch
            {
                MessageId = pInMsg.MessageID + Extension;
            }
        }

        // Name of the file inside the zip ends here

        MemoryStream outmsg = new MemoryStream();

        IBaseMessagePart bodyPart = pInMsg.BodyPart;
        if (bodyPart != null)
        {
            ZipingTheStream(bodyPart.Data, outmsg, MessageId);
        }

        return CreateNewMessage(pContext, pInMsg, outmsg);
    }
    catch (Exception ex)
    {
        // Preserve the original exception as the inner exception
        throw new System.ApplicationException(ex.Message, ex);
    }
}

/// <summary>
/// Helper function that creates a new message and copies all parts
/// and their properties. Copies the new stream to the current message.
/// </summary>
/// <param name="pipelineContext">Context</param>
/// <param name="message">Original BTS message</param>
/// <param name="streamOut">Contains contents of the message to create</param>
/// <returns>The new message</returns>
public static IBaseMessage CreateNewMessage(IPipelineContext pipelineContext,
    IBaseMessage message, Stream streamOut)
{
    if (null == pipelineContext)
        throw new ArgumentNullException("pipelineContext");
    if (null == message)
        throw new ArgumentNullException("message");
    if (null == streamOut)
        throw new ArgumentNullException("streamOut");

    try
    {
        ICloneable c = (ICloneable)message;
        IBaseMessage outMsg = (IBaseMessage)c.Clone();
        streamOut.Position = 0;
        outMsg.BodyPart.Data = streamOut;

        // Let the pipeline dispose of the stream when processing completes
        pipelineContext.ResourceTracker.AddResource(outMsg.BodyPart.Data);

        return outMsg;
    }
    catch (Exception)
    {
        //EventLog.WriteEntry(ex.Source, ex.Message);
        throw;   // rethrow without resetting the stack trace
    }
}

public static void ZipingTheStream(Stream InMsgBody, Stream ZipStream, String FileName)
{
    try
    {
        ZipOutputStream ZipOutStream = new ZipOutputStream(ZipStream);
        ZipOutStream.SetLevel(9);   // maximum compression

        ZipEntry InMsgBodyEntry = new ZipEntry(ZipEntry.CleanName(FileName));
        InMsgBodyEntry.DateTime = DateTime.Now;
        InMsgBodyEntry.Size = InMsgBody.Length;
        ZipOutStream.PutNextEntry(InMsgBodyEntry);

        // Copy the message body to the zip stream, block by block
        byte[] BufferTransfer = new byte[1024 * 1024];
        int Filebyte;
        for (; ; )
        {
            Filebyte = InMsgBody.Read(BufferTransfer, 0, BufferTransfer.Length);
            if (Filebyte == 0) break;
            ZipOutStream.Write(BufferTransfer, 0, Filebyte);
        }

        // Finish() writes the zip trailer without closing the underlying
        // stream, which the pipeline still needs
        ZipOutStream.Finish();
    }
    catch (Exception)
    {
        throw;
    }
}

Friday, September 18, 2009

Apache Proxy Security Issue

Recently we were deploying a RoR (Ruby on Rails) application, to be specific Redmine. Since our web server had too many virtual hosts running on Apache, we couldn't run the WEBrick web server directly on port 80. We decided to run it on port 8000 and let the Apache virtual host for Redmine proxy to port 8000.

What's the configuration?

-------------- Configuration Begins ---------------
<VirtualHost ...>
ProxyRequests On
ProxyVia On


....

ProxyPass / http://localhost:8000/
ProxyPassReverse / http://localhost:8000/

.....
</VirtualHost>
-------------- Configuration Ends -----------------

A month later we observed that our web server had become too slow; responses were taking too much time. Looking at the performance, Apache was consuming more memory and CPU load.

A simple top command showed the change in Apache's behaviour.

Looking into Apache's access log, we saw too many web requests for domains unrelated to the server. Finally we realised that Apache had become an open proxy: people were accessing their banned sites through Apache's proxy service.

The fix is to remove the following entries:

ProxyRequests On
ProxyVia On

The RoR application was still proxied because of the remaining entries in the VirtualHost, so the application still worked while the open-proxy behaviour stopped.
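Put differently, a reverse-proxy-only virtual host needs nothing more than the pass-through directives, since ProxyRequests defaults to Off (the server name below is illustrative):

```apache
<VirtualHost *:80>
ServerName redmine.example.com
# ProxyRequests stays Off (the default); enabling it turns Apache
# into an open forward proxy for anyone who can reach it.
ProxyPass / http://localhost:8000/
ProxyPassReverse / http://localhost:8000/
</VirtualHost>
```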

Once the fix was done, all the proxy requests in the access logs were denied with 404, and thus the server was saved ;-)

Tuesday, September 01, 2009

SQUID Load balancing for web applications

Squid Load Balancer Configurations

A short note on what we did to get the Squid load balancer working.
The servers run CentOS 5.x.

Total machines: 2 (don't ask me why it's 2; this is what I had to test and play with in my lab)
Machine 1 - IPs 192.168.5.50 and 192.168.5.51 (2 LAN cards)
Machine 2 - IPs 192.168.5.100 and 192.168.5.101 (2 LAN cards)

Machines 1 and 2 will run the Apache application which needs to be load balanced.

Problem: We need a separate machine to act as a load balancer for both, but we don't have any machine other than these two.
Solution: Machine 1 will act as the load balancer and also as an application server (not advisable, but we can go ahead when we run out of resources and have no other option).

DNS Names
Domain name for public access: webapp.office.lan

webapp.office.lan - 192.168.5.50
server_1_a.office.lan - 192.168.5.50
server_1_b.office.lan - 192.168.5.51

server_2_a.office.lan - 192.168.5.100
server_2_b.office.lan - 192.168.5.101

The web application is configured in both the server with apache vhosts.


The default site is webapp.office.lan, and Squid listens for the queries on it.

So the first part is making the application work on:

server_1_b.office.lan - 192.168.5.51
server_2_b.office.lan - 192.168.5.101

Configuring Apache correctly to make this work:
Since Squid and Apache will run on the same machine (1) and both use port 80, we need to make Apache listen only for requests on IP 192.168.5.51.

To do the above
Change the httpd.conf
Modify the line
Listen 80
to
Listen 192.168.5.51:80

So Apache listens only on IP 192.168.5.51, port 80.

Now configure the vhost of the web app accordingly in Apache on IP 192.168.5.51.
Verify the site works by accessing server_1_b.office.lan.

Configuring the app on server_2_b.office.lan - 192.168.5.101
Since the app is on a different machine here, we don't need to change the Apache Listen property,
but once configured, check that the site works with the server_2_b.office.lan URL.

Squid in Action
Installing Squid on CentOS is the same as installing Apache, with the yum installer.

Configuring Squid
Add the following lines to /etc/squid/squid.conf:

------------------------------ Lines to be added in squid.conf -------------------------------
# Make Squid listen on port 80
http_port 192.168.5.50:80 defaultsite=webapp.office.lan vhost

# Map 192.168.5.51 as server_1 and 192.168.5.101 as server_2,
# both as origin servers balanced round-robin
cache_peer server_1_b.office.lan parent 80 0 round-robin no-query originserver name=server_1 login=PASS
cache_peer server_2_b.office.lan parent 80 0 round-robin no-query originserver name=server_2 login=PASS
----------------------------- End of lines to be added in squid.conf ----------------------


Now Squid can be restarted.
The server will be listening as webapp.office.lan; each request is diverted to a different server in round-robin order, and if one fails the other continues to serve the requests. We can add 'n' servers similarly to raise the count.
Note: all the vhosts in Apache should listen for the domain name "webapp.office.lan".

Hope this gave useful information on Squid load balancing. This is not only for Apache; any web server can be served similarly.

Wednesday, August 19, 2009

SQUID Load Balancing For HTTP-AUTH Applications

Recently I was working with a Squid load-balancing server for a PHP-based web application.

The app uses HTTP auth for one of its protected directories, via Apache .htaccess with .htpasswd. Unfortunately the login was completely failing in the live environment but not in the test environment.

The difference found was that the live environment had Squid load balancing, which was not in TEST (something wrong there: both environments should be similar).

Then it was observed that the username and password sent from the client were not reaching the real application server; they were dropped at Squid.

Why is Squid not passing the information?
Squid can itself do proxying/load balancing with authentication, so it assumes the auth header is for Squid and not for the web application, and it never forwards the header.

How to forward the AUTH Header?
Looking into squid.conf:

cache_peer IP.ADDRESS parent 80 0 no-query originserver login=PASS

Adding the suffix login=PASS fixed the problem.

login=PASS forwards the HTTP auth credentials on to the destination server.

Saturday, August 15, 2009

SQUID Clearing cache in a load balancer / caching server

Are you running a SQUID Caching server before your web server to boost up the performance?
If yes, and you face issues when the content of the site changes, it might be due to Squid's cache still holding the old content. The following steps will help to refresh the cache.

Why we need to clear the cache?
In most cases the content in the cache is outdated compared to the live data.

What is the normal way to clear the cache?
We can clear the cache by removing the files in the cache directory.

Where is the cache directory?
The cache directory varies from system to system based on the configuration.
We can find it by looking for the cache_dir directive in the /etc/squid/squid.conf file.
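The same lookup can be scripted; a small sketch (it parses a sample line here, but pointing CONF at /etc/squid/squid.conf works the same way):

```shell
#!/bin/sh
# Extract the cache directory path (third field) from a cache_dir line.
CONF=$(mktemp)
echo "cache_dir ufs /var/spool/squid 100 16 256" > "$CONF"
awk '/^cache_dir/ {print $3}' "$CONF"   # prints /var/spool/squid
rm -f "$CONF"
```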
Steps to clear the cache:
1. Login as privileged user.
2. Shutdown squid.
Eg: Fedora / Redhat / CentOS
# service squid stop
3. Remove the cache files.
The directory is the path specified in cache_dir
#rm -rf /var/spool/squid/*
4. Start the squid again
# service squid start
5. We should see a message that the cache was created in the cache_dir directory.

Friday, August 14, 2009

Make Dynamic VirtualHost in Apache

Are you working with Apache? Are you configuring virtual hosts often?
Here is a cool solution that saves us from adding a VirtualHost directive to Apache every time.
Any name-based Apache virtual host is automatically mapped to a predefined directory path, which reduces the time spent configuring vhosts.

The example config change below makes Apache do the following:
  1. Serve all virtual hosts from the /srv/www directory
  2. Expect each virtual host to have a directory named after the domain. Ex: vhost test.example.com will have the directory /srv/www/test.example.com
  3. Use htdocs as the document root directory by default
  4. Optionally write the error log to a per-domain file, while all access logs go to a common file

Example Configuration to add in httpd.conf
# this log format can be split per-virtual-host based on the first field
LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon
CustomLog logs/access_log vcommon
#ErrorLog logs/%0_error_log

# include the server name in the filenames used to satisfy requests
VirtualDocumentRoot /srv/www/%0/htdocs
VirtualScriptAlias /srv/www/%0/cgi-bin
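The directory layout that VirtualDocumentRoot expects can be sketched like this (a scratch directory is used so it runs unprivileged; on the real server the root would be /srv/www):

```shell
#!/bin/sh
# A request for test.example.com is served from <root>/test.example.com/htdocs.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/test.example.com/htdocs"
echo "It works" > "$ROOT/test.example.com/htdocs/index.html"
ls "$ROOT/test.example.com"   # prints htdocs
rm -rf "$ROOT"
```

Adding a new vhost then becomes nothing more than creating such a directory; no Apache reload required.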

Friday, August 07, 2009

Easy SVN Web Administration

In last month's blog we looked into how to install and configure an SVN server.
That post gives a basic SVN server configuration managed through the command line. When we end up with more projects/users we need more repositories, and managing the users' authentication details becomes a nightmare. To simplify user creation and repository management we can go for a web-based SVN solution.

;-) Oh, don't think we are going to do the Apache setup for each repository and more; we have a better solution.

Here comes a better SVN with USVN (User-friendly SVN).

What do we need for this?
Requirements:
  • PHP 5 (version >= 5.1.2)
  • Apache 2
  • mod_dav_svn enabled
  • mod_rewrite enabled
  • subversion
  • mod_svn enabled
  • mod_authz_svn enabled
Steps to setup.
1. Download USVN from http://www.usvn.info/download
2. Extract the zip into the root of the Apache web directory
3. Access the page via the web to start the installation and step through it.

Sample configuration followed at Success Factory:
1. Apache Config

###########################################
# Vhost: svn.successfactory.local #
# Note: we need proper DNS setup to #
# make the URL work #
###########################################
<VirtualHost *:80>
DocumentRoot /srv/www/svn.successfactory.local/htdocs
ServerName svn.successfactory.local
<Directory />
AllowOverride All
</Directory>
<Location /repository/>
ErrorDocument 404 default
DAV svn
Require valid-user
SVNParentPath /srv/svn
SVNListParentPath off
AuthType Basic
AuthName "USVN"
AuthUserFile /srv/svn/htpasswd
AuthzSVNAccessFile /srv/svn/authz
</Location>
</VirtualHost>

The SVN repositories reside in /srv/svn as explained in the SVN installation post.
An SQLite DB was selected to maintain the SVN management information.

After installation we can manage SVN through the web at
http://svn.successfactory.local

You will be greeted by a login page.




We can manage new projects/users/groups through the simple web panel.





Hope USVN brings peace of mind in administering multiple SVN repositories.

Tuesday, August 04, 2009

Apache RewriteMap with RewriteLock

Recently I was working on an image gallery site, developed in PHP on Apache, which stores images and renders them out.

The photos were stored in a path similar to the one below:
/images/photo_id/photos_style/photo_id.jpg

Example:
/images/200/portrait/200.jpg

The requirement was not to show the original path in the URL; it should instead resemble:
/<photo_id>/<photo_id>_<style>.jpg

Example:
/200/200_portrait.jpg

The logic had more complexity than shown here and required a calculation to derive the complete path. To perform the calculation, a Perl rewrite map was introduced:

RewriteMap photomap prg:/path/to/rewrite_rule.pl

The Perl script was something similar to the below, with more logic:
#!/usr/bin/perl
$| = 1;   # unbuffered output
while (<STDIN>) {
    # ...put here any transformations or lookups...
    print $_;
}
The script worked well, redirecting (internally) to the original directory, with output like:
/images/200/portrait/200.jpg

But when the server got heavily loaded with more requests, the output got scrambled, like:
/mages/200/portit/200.jpg
/ramages/200/portrait/200.jpg

etc...

This was due to the Perl script's stdin/stdout not being synchronised across Apache's concurrent requests. The problem was solved when a RewriteLock was introduced: the lock file serialises access to the external rewrite program, so only one request talks to it at a time. Still, it was a surprise how immediately this solved it... ;-)

RewriteLock "/path/to/empty/lock/file"
in the global section of httpd.conf
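For completeness, a rule consuming such a prg map might look like this (the map name photomap and the URL pattern are illustrative; the script receives the captured key on stdin and prints the internal path):

```apache
RewriteEngine On
RewriteMap photomap prg:/path/to/rewrite_rule.pl
RewriteLock "/path/to/empty/lock/file"

# Rewrite /200/200_portrait.jpg to whatever path the script computes;
# fall back to a placeholder if the map returns nothing.
RewriteRule ^/(.+\.jpg)$ ${photomap:$1|/images/notfound.jpg} [L]
```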

Wednesday, July 29, 2009

Linux: Setting UP DNS Cache server

1. The DNS servers in a network may be under heavy traffic, or may have frequent downtime, resulting in failures when resolving domain names.

2. Maybe a dial-up machine has a very slow internet connection, where resolving a DNS query takes more time.

The solution for both problems is to have a caching DNS server. Installing dnsmasq and running it as a service on localhost resolves the issue.

Steps to Setup DNS Cache Server
(The following lines work well on Fedora / Redhat / CentOS)

Install dnsmasq
# yum install dnsmasq

Make dnsmasq start on boot
# chkconfig dnsmasq on

Start dnsmasq immediately
# service dnsmasq start

Change the network settings to use this cache server:


Open the network settings.

Add the primary DNS as localhost by adding 127.0.0.1.
Move the old primary and secondary DNS entries down to secondary and tertiary.
Click File->Save.
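On systems without the GUI tool, the same change can be made directly in /etc/resolv.conf (the upstream addresses below are illustrative):

```
# /etc/resolv.conf -- local cache first, old servers as fallback
nameserver 127.0.0.1
nameserver 192.168.1.10
nameserver 192.168.1.11
```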

Restart the network
# service network restart

Test the DNS resolving speed: after the first access to a site, subsequent lookups will be much faster as they come from the local cache.

Saturday, July 25, 2009

Install & Configure SVN Server

There are many tutorials available on working with SVN, and the best of all is svnbook.red-bean.com. This article is about setting up a quick SVN server in very few steps without much trouble.

Server Environment: Redhat / CentOS / Fedora

Install SVN
# yum install subversion

Create SVN Directory


# mkdir -p /srv/svn

Start SVN Server
# svnserve -d -r /srv/svn

Create SVN Repository
# svnadmin create /srv/svn/myproject

Create users in /srv/svn/myproject/conf/passwd (add the following lines)

[users]
admin = adM!nPassw0rd
developer = password


Grant user permissions in /srv/svn/myproject/conf/authz (add the following lines)

[/]
admin = rw
developer = rw

Test the setup
Check out from a different machine, or on the same machine into a different directory.

192.168.1.1 is the assumed IP of the SVN server

# svn co svn://192.168.1.1/myproject/ myproject

Test SVN Commit

# cd myproject
# echo "Hello World" > test.txt
# svn add test.txt
# svn commit -m "Test Commit"


You can check out the same project from any machine, and now you will find test.txt in it.

More details on how to back up and restore SVN, and how to structure it for use, are explained step by step in TechysPage (SVN-Revision-Control). It would be good to go through that article.

The recent post on User-friendly SVN will help you configure SVN with an admin panel.

My First Blog for Killer Configurations

After years of dreaming to share what I want to...................
After months of dreaming to write a blog on what I want to...................

Finally it happened.

To be short and to the point: this is my first blog post. This blog is going to be mostly about configuration management.

The blog is named Killer Configurations because configuring any server/application kills our time: we seek answers day and night, killing ourselves, while the answer is always a simple configuration change.

This blog is for saving people, by providing tips and solutions on configuration and reducing the time spent problem-solving.

In more detail, it will cover:
  • Server Setup
  • Application configuration
  • Installing Tools
  • Optimizing the servers
  • Tricky problem solving in server configurations
  • Task Automations
  • Ease up server / service administration
  • etc........
The list doesn't stop here; it will keep growing......................
Hope this blog solves the needs of its readers ;-)

Wait for upcoming posts.