
Load, Execute and Unload Application classes in JVM

Summary:

We usually experience some delay when running a Java program in any environment: it takes a good amount of time to create a process for the JVM and, on top of that, to load the application classes and execute them. In some documents this is described as the cold-start problem.

What if we keep a JVM process running continuously and dynamically load, execute and unload classes in it? Then we no longer need to create a JVM process per run; we just need to load and fire those classes.

How can this work?

Every time we want to execute a piece of Java code, we load the compiled class files using a new class loader instance in the running JVM. Each class loader instance acts as a container for the application classes it loads: each instance has its own set of classes, and those classes belong to the class loader that loaded them.

(The same class can be loaded multiple times if it is loaded by multiple class loader instances, i.e. one copy per class loader instance.)

A URLClassLoader can be used to load classes from a jar file, and then load and execute the required classes. We can unload those classes by simply letting the newly created class loader instance be garbage collected. This, in turn, makes all the classes loaded by that class loader eligible for GC from metaspace, and metaspace can then be cleaned by the garbage collector to make room for new classes.

This is analogous to AWS Lambda, but we don't need to create a new JVM process each time we want to run a piece of Java code.

Code

Load the “fatjar” file into the running JVM

Path jarFilePath = FileUtil.saveFile(fileName, multipartFile);

Load the classes using a custom class loader

URL url = jarFilePath.toUri().toURL();

URLClassLoader child = new URLClassLoader(new URL[] { url });

Load the class definition and invoke using following code

Class classToLoad = Class.forName(fullClassName, true, child);

Method method = classToLoad.getDeclaredMethod(methodName);

Object instance = classToLoad.getDeclaredConstructor().newInstance();

Object result = method.invoke(instance);
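
To unload, we just drop every reference to the class loader (and close it); the loader and all classes it defined then become eligible for garbage collection. A minimal sketch of that step, continuing from the snippet above (exception handling omitted):

// Close the URLClassLoader so the underlying jar file handle is released (Java 7+)
child.close();

// Drop all references to the loader, its classes and any instances created from them
result = null;
instance = null;
classToLoad = null;
child = null;

// The classes can now be unloaded from metaspace; System.gc() is only a hint to the JVM
System.gc();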

Prototype:

The application has been developed using Spring Boot and is located at https://bitbucket.org/prabhukvn/tinlet/src/develop/

The above application can be tested with a sample application,

https://bitbucket.org/fastcartservices/tinlet-ts/src

How to test?

Run the tinlet application as a Spring Boot app and use the following command to test it:

curl --location --request POST 'localhost:8080/tinlet/run' \
--form 'file=@"/C:/prabhukvn/projects/tinlet-ts/target/tinlet-ts-0.0.1-SNAPSHOT-jar-with-dependencies.jar"' \
--form 'fullClassName="com.kvn.tinletts.TinletOne"' \
--form 'methodName="hello"'

You can import this command into Postman too.


A Quick Reference To HTTP

HTTP 0.9 Protocol:

  • Defined in 1991
  • Latency was not a consideration in HTTP 0.9
  • To perform an HTTP request, the client has to open a new TCP connection, which is closed by the server after responding
  • HTTP 0.9 uses the TCP 3-way handshake mechanism to establish a connection
  • It is a simple text-based protocol
  • Only the GET method is allowed and the response is HTML
  • No headers and no status codes
  • Invented by Tim Berners-Lee at CERN
  • Needs a TCP connection for each request

TCP 3 way Handshake:

HTTP 0.9 using telnet
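
For reference, an HTTP 0.9 exchange over telnet looks roughly like this (hypothetical host and path; the request is a single line and the server closes the connection after sending the raw HTML):

telnet example.com 80
GET /index.html

<html>…the document…</html>
(connection closed by the server)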

HTTP 1.0 Protocol:

  • Introduced in 1996
  • HEAD and POST methods were added
  • The concept of header fields was introduced
  • User-Agent was introduced in the request (for debugging purposes)
  • Content-Length was introduced in the response (to indicate the length of the response body)
  • Caching, authorization, etc. were added
  • Still needs a TCP connection for each request
  • Status codes were introduced
  • Non-HTML content can also be transmitted over this protocol

Telnet Using HTTP 1.0:
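
A sketch of an HTTP/1.0 exchange over telnet (hypothetical host; the headers and values are illustrative):

GET /index.html HTTP/1.0
User-Agent: telnet

HTTP/1.0 200 OK
Content-Type: text/html
Content-Length: 137

<html>…the document…</html>
(connection closed by the server)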

HTTP 1.1 Protocol:

  • Published in 1997
  • Persistent connections were introduced
  • The same connection can be used for consecutive requests, e.g. for a page and its embedded documents such as images
  • Chunked responses were supported
  • The Host header was introduced, which allows hosting different domains on the same IP address
  • Content negotiation, encoding and type were introduced
  • GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS

Telnet Using HTTP 1.1
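
A sketch of an HTTP/1.1 exchange over telnet; note the now-mandatory Host header and that the connection stays open by default (hypothetical host; values are illustrative):

GET /index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 137

<html>…the document…</html>
(connection stays open for the next request)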

Keep-alive Header:

This header allowed the connection to be kept open for some time:

Keep-Alive: timeout=600, max=100

Connection: Keep-Alive

 

Telnet Using Keep Alive HTTP 1.1

HTTP/2 protocol:

  • Standardized in 2015
  • HTTP/2 is a binary, multiplexed network protocol
  • Frames and streams are used to communicate over this protocol
  • Frames are exchanged over the TCP connection instead of text-based messages
  • HEADERS, DATA, SETTINGS and GOAWAY are some of the available frame types

There are two ways to negotiate the HTTP/2 protocol:

  1. Using the Upgrade header (h2c)
    1. Send an HTTP/1.1 request with the Upgrade header:

GET /index.html HTTP/1.1

Host: server.example.com

Connection: Upgrade, HTTP2-Settings

Upgrade: h2c

HTTP2-Settings: <base64url encoding of HTTP/2 SETTINGS payload>

    2. An HTTP/2-compatible server accepts the connection by switching protocols:

HTTP/1.1 101 Switching Protocols

Connection: Upgrade

Upgrade: h2c

[ HTTP/2 connection … ]

ALPN: Application-Level Protocol Negotiation

The second way is ALPN: it allows a TLS connection to negotiate which application-level protocol will run across it.

After establishing an HTTP/2 connection, each endpoint has to send a connection preface as a final confirmation and to establish the initial settings for the HTTP/2 connection.

CURL Using HTTP 2.0
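
A sketch of such a request with curl (the URL and port are assumptions; -k skips certificate validation for a self-signed local certificate):

curl -vk --http2 https://localhost:8443/

The verbose output shows the ALPN negotiation and whether HTTP/2 was actually used for the exchange.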

Here is the code to run a Jetty server with the HTTP/2 protocol:

https://github.com/prabhukvn/springboot-jetty-http2


How to check runtime log implementer in spring boot

Spring / Spring Boot comes with a default logger configuration for application logging. Spring uses the logging facade library SLF4J, so we can easily change the underlying logger implementation by simply adding the required jar files to the classpath. Spring Boot picks the Logback implementation by default, but this can easily be changed by adding another implementation such as Log4j and removing Logback. If you want to check which logging library is actually used at runtime in a Spring application, simply add the following controller and fetch the runtime implementation.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/logger")
public class SimpleController {

	private static final Logger LOGGER = LoggerFactory.getLogger(SimpleController.class);

	@GetMapping("/test")
	public Class<? extends Logger> checkLogger() {
		return LOGGER.getClass();
	}
}

You can even log the runtime class if you want to.
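
For example, with the default Spring Boot setup the endpoint reports the Logback logger class (the port is an assumption and the exact class depends on your logging configuration):

curl http://localhost:8080/logger/test

"ch.qos.logback.classic.Logger"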


Custom Endpoints Under /actuator

Spring Boot Actuator is an add-on feature which helps you monitor and manage applications through various endpoints. These endpoints can be invoked over JMX as well as HTTP, and they can be enabled or disabled depending on the environment or requirements. All of these endpoints are available under the /actuator base path in a Spring Boot application that has Actuator as a dependency.

We can even add custom endpoints under the /actuator base path if we want to expose additional endpoints beyond what is already provided. The operations on these custom endpoints can also be invoked via JMX as well as HTTP.

Actuator provides the following annotations to expose custom endpoints under the /actuator base path. There are also technology-specific annotations such as @JmxEndpoint and @WebEndpoint that restrict an endpoint to JMX or the web.

import org.springframework.boot.actuate.endpoint.annotation.DeleteOperation;
import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.boot.actuate.endpoint.annotation.Selector;
import org.springframework.boot.actuate.endpoint.annotation.WriteOperation;

Writing some custom endpoints:

import org.springframework.boot.actuate.endpoint.annotation.DeleteOperation;
import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.boot.actuate.endpoint.annotation.Selector;
import org.springframework.boot.actuate.endpoint.annotation.WriteOperation;
import org.springframework.http.MediaType;
import org.springframework.stereotype.Component;

import com.google.gson.Gson;

@Component
@Endpoint(id = "custom")
public class CustomBean {

	@ReadOperation(produces = MediaType.APPLICATION_JSON_VALUE)
	public String getCustomData() {
		Gson gson = new Gson();
		return gson.toJson(new CustomData("test", 5));
	}

	@ReadOperation(produces = MediaType.APPLICATION_JSON_VALUE)
	public String getCustomDataByName(@Selector String name) {
		Gson gson = new Gson();
		return gson.toJson(new CustomData(name, 5));
	}

	@WriteOperation(produces = MediaType.APPLICATION_JSON_VALUE)
	public String updateCustomData(String name, int counter) {
		CustomData customData = new CustomData(name, counter);
		return "custom data updated";
	}
	
	@WriteOperation(produces = MediaType.APPLICATION_JSON_VALUE)
	public String updateCustomDataName(@Selector String name) {
		CustomData customData = new CustomData(name,5);
		return "custom data updated";
	}
	
	@DeleteOperation
	public String deleteCustomData(@Selector String name) {
		
		return "success";
	}
}
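
The CustomData class used above is not shown in the post; a minimal sketch of what it might look like (the field names are assumptions based on the constructor calls):

public class CustomData {

	// Gson serializes these fields directly via reflection
	private String name;
	private int counter;

	public CustomData(String name, int counter) {
		this.name = name;
		this.counter = counter;
	}
}

Note that over HTTP only a few actuator endpoints are exposed by default; a custom endpoint usually has to be included explicitly, e.g. management.endpoints.web.exposure.include=custom (or *) in application.properties.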

These endpoints can be found under /actuator as,

And in the JMX console as

This is an excellent way of adding additional application-management endpoints to Spring applications.


Kafka Consumer with Retry Mechanism

As we know, Kafka is a high-throughput, low-latency message broker platform with almost real-time message processing.

But there are some use cases where we need to retry a message in case of failures, especially network failures in a microservices environment. Since there is no “delay queue” mechanism available in Kafka, we can’t really process a given message after a certain amount of delay.

But we can still implement a retry mechanism by disabling auto-commit and introducing the delay in the application code instead of on the Kafka side.

The retry flow is described below.

The above diagram is almost self-explanatory.

Here is the code for the consumer in Java:

package com.fastcart.kafkamessaging.standalones;

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.Random;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import com.google.common.primitives.Ints;

public class Consumer {

	private long WAIT_TIME = 5000;
	private int MAX_RETRY_COUNT = 6;
	private int current_retry_count = 0;
	private DelayQueue<DelayMessage> delayQueue = new DelayQueue<>();

	public static void main(String[] args) throws InterruptedException {
		Consumer c = new Consumer();

		c.startConsumer();

	}

	private void startConsumer() {

		
		KafkaConsumer<String, String> consumer = createConsumer();
		// resume any partitions that are still paused from a previous run
		if (!consumer.paused().isEmpty()) {
			consumer.resume(consumer.paused());
		}
		System.out.println("Consumer started...");
		try {
			while (true) {

				try {
					ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(1000));
				//	System.out.println("---------- Batch Of records consumer 1 :" + consumerRecords.count());
					for (ConsumerRecord<String, String> record : consumerRecords) {
						System.out.println("key: " + record.key() + " and Value :" + record.value());
						boolean doRetry = doSomeProcess(record);
						if (doRetry && current_retry_count < MAX_RETRY_COUNT) {
							// if retry is true
							System.out.println("retry in progress..."+current_retry_count);
							current_retry_count++;
							// derive the partition of this record and seek back to the failed offset
							TopicPartition topicPartition = new TopicPartition(record.topic(), record.partition());
							consumer.seek(topicPartition, record.offset());
							List<TopicPartition> topicPartitionList = new ArrayList<>();
							topicPartitionList
									.add(topicPartition);
							delayQueue.put(new DelayMessage("Delay Process", WAIT_TIME));
							consumer.pause(topicPartitionList);
							delayQueue.take();
							consumer.resume(topicPartitionList);

						} else {
							consumer.commitSync();
						}
					}
				} catch (InterruptedException e) {
					e.printStackTrace();
				}

			}
		} finally {
			consumer.close();
			// TODO: we can also re initiate consumer in case of failures
		}
	}

	private boolean doSomeProcess(ConsumerRecord<String, String> record) {
		System.out.println("Print the record" + record);
		int number = new Random().nextInt(1000) ;
		return number % 2 == 0;

	}

	/**
	 * Create Consumer
	 * 
	 * @return
	 */
	private KafkaConsumer<String, String> createConsumer() {
		Properties properties = new Properties();
		properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
		properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
		properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
		properties.put(ConsumerConfig.GROUP_ID_CONFIG, "dev-fastcart-group");
		properties.put(ConsumerConfig.CLIENT_ID_CONFIG, "fastcart-client" + new Random().nextInt());
		// start reading from last committed messages instead of current messages.
		properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
		properties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);
		// disable automatic commits
		properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
		properties.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, this.getMaxPollIntervalInMs());
		properties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, this.getSessionTimeOutInMs());

		KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
		List<String> topics = new ArrayList<>();
		topics.add("fastcart-topic");
		consumer.subscribe(topics);
		return consumer;
	}

	private int getSessionTimeOutInMs() {
		return (int) Math.addExact(WAIT_TIME, 2 * 60 * 1000);
	}

	private int getMaxPollIntervalInMs() {
		return (int) Math.addExact(WAIT_TIME, 1 * 60 * 1000);
	}

}

class DelayMessage implements Delayed {
	private String message = null;
	private long delayTime;

	public DelayMessage(String message, long delayTimeInMills) {

		this.message = message;
		this.delayTime = System.currentTimeMillis() + delayTimeInMills;
	}

	@Override
	public long getDelay(TimeUnit unit) {
		long diff = delayTime - System.currentTimeMillis();
		return unit.convert(diff, TimeUnit.MILLISECONDS);
	}

	@Override
	public int compareTo(Delayed o) {
		return Ints.saturatedCast(this.delayTime - ((DelayMessage) o).delayTime);
	}

	@Override
	public String toString() {
		// TODO Auto-generated method stub
		return "Message:" + message;
	}
}

This can be tested with the following producer:

package com.fastcart.kafkamessaging.standalones;

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class Producer {

	public static void main(String[] args) {

		Properties properties = new Properties();
		properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
		properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
		properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
		
		KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
		Future<RecordMetadata> response;
		for (int i = 0; i < 2; i++) {
			ProducerRecord<String, String> record = new ProducerRecord<>("fastcart-topic", "message_key",
					"Hello World-" + System.currentTimeMillis());

			response = producer.send(record);
			producer.flush();
			try {
				RecordMetadata recordMetadata = response.get();
				System.out.println("Partition number: " + recordMetadata.partition());
			} catch (InterruptedException | ExecutionException e) {
				e.printStackTrace();
			}
		}
		producer.close();
		
	}

}


Solr Search Engine for Desktop

It takes a good amount of time to find a file on a Windows machine. Here is a simple setup where we index all the desktop files in Solr, a well-known open-source search engine, and can then search them in seconds.

Solr is an open-source search engine from Apache: https://lucene.apache.org/solr/.

Step 1: Download Solr and run it in cloud mode. Here is the guide to get started with Solr: https://lucene.apache.org/solr/guide/7_7/solr-tutorial.html#solr-tutorial

Once you have downloaded and extracted Solr into a folder, start it with the following command:

>./bin/solr start -c -p 8983 -s ../example/cloud/node1/solr

This will start the first node. It may take approximately 30 seconds to start, and it also starts a ZooKeeper ensemble to manage the nodes. Once the first node is up, here is the command to start the second node:

>./bin/solr start -c -p 7574 -s ../example/cloud/node2/solr -z localhost:9983

This will start Solr as a two-node cluster, which can be accessed at http://localhost:8983/solr

Step 2: Create a collection for indexing files

>solr create -c latest -s 2 -rf 2

Step 3: Write a program to extract and index the file names into Solr.

package com.kvn.web.solrClients;

import java.io.File;
import java.io.IOException;
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;
import java.util.stream.Collectors;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class ListFilesOfType {

	static String baseFoldersToIndex[] = { "C:/docs", "C:/Java Majestic", "C:/JavaMajestic",
			"C:/Users/prabhukvn/Documents", "c:/Users/prabhukvn/Downloads", "C:/POCS", "C:/softwares" };

	Predicate<File> isFile = (f1) -> f1.isFile();
	String baseUrl = "http://localhost:8983/solr/latest";
	HttpSolrClient client = null;

	static List<String> fileTypes = new ArrayList<>();
	static List<String> filePaths = new ArrayList<>();

	static {
		fileTypes.add(".pdf");
		fileTypes.add(".doc");
		fileTypes.add(".docx");

	}
	static {
		filePaths.addAll(Arrays.asList(baseFoldersToIndex));

	}

	public static void main(String[] args) {
		ListFilesOfType obj = new ListFilesOfType();

		long startTime = System.currentTimeMillis();
		filePaths.stream().map(path -> new File(path)).forEach(obj::processFile);
		System.out.println("Total Time:"+(System.currentTimeMillis()-startTime));

	}

	public void processFile(File file) {
		//System.out.println("----" + file.getAbsolutePath() + "--------");
		File listOfFiles[] = file.listFiles();

		if (listOfFiles != null) {
			List<FileDoc> fileDocs = Arrays.stream(listOfFiles).parallel().filter(isFile.and(this::filterFiles))
					.map(this::createFileDoc).collect(Collectors.toList());
			if (fileDocs != null && fileDocs.size() > 0) {
				this.sendToSolar(fileDocs);
			}
			Arrays.stream(listOfFiles).filter(f2 -> f2.isDirectory()).forEach(this::processFile);
		}	

	}

	private void sendToSolar(List<FileDoc> fileDocs) {
		try {
			// System.out.println("--------------------------------------");
			fileDocs.parallelStream().forEach(f3 -> System.out.println(f3.getName()));
			HttpSolrClient solrClient = this.connect();
			solrClient.addBeans(fileDocs);
			solrClient.commit();
		} catch (SolrServerException e) {
			e.printStackTrace();
		} catch (IOException e) {
			e.printStackTrace();
		}

	}

	FileDoc createFileDoc(File file) {
		FileDoc fileDoc = new FileDoc(file.getName(), file.getAbsolutePath(), new Timestamp(file.lastModified()));
		return fileDoc;
	}

	public boolean filterFiles(File f) {
		
		Optional<String> f4 = fileTypes.parallelStream().filter(fileType -> f.getName().endsWith(fileType)).findFirst();
		return f4.isPresent();
	}

	/**
	 * Connect to solr using solr client.
	 * 
	 * @return
	 */
	private HttpSolrClient connect() {
		if (client == null) {
			client = new HttpSolrClient.Builder(baseUrl).withConnectionTimeout(10000).withSocketTimeout(60000).build();
		}
		return client;

	}

}
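
The FileDoc bean used by addBeans() is not shown in the post; a minimal sketch of what it could look like, assuming SolrJ’s @Field annotation is used to map the bean to index fields (the field names are assumptions based on the constructor call above):

package com.kvn.web.solrClients;

import java.sql.Timestamp;

import org.apache.solr.client.solrj.beans.Field;

public class FileDoc {

	@Field
	private String name;

	@Field
	private String path;

	@Field
	private Timestamp lastModified;

	public FileDoc(String name, String path, Timestamp lastModified) {
		this.name = name;
		this.path = path;
		this.lastModified = lastModified;
	}

	// used when printing the documents being indexed
	public String getName() {
		return name;
	}
}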


And the Maven dependency:
		<dependency>
			<groupId>org.apache.solr</groupId>
			<artifactId>solr-solrj</artifactId>
			<version>7.7.0</version>
		</dependency>

And use http://localhost:8983/solr/#/latest/query to search the data.

Or http://localhost:8983/solr to open the dashboard


Java Server to Handle More Concurrent Connections

The following program can handle almost 65,000 (65K) requests with a single instance.

/**
 * This server socket program has been created to test the program with -client and -server JVM parameters.
 */
package sockets;

import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * @author prabhu kvn
 *
 */
public class ServerSocketDemoWithBlockingQueue {

	Logger logger = LoggerFactory.getLogger(this.getClass().getName());

	/**
	 * A blocking queue to hold accepted connections, bounded by queueLength.
	 * Default values: one processing thread and a queue of up to 10,000 sockets.
	 */
	private int queueLength = 10000;
	int serverThreadCount = 1;
	BlockingQueue<Socket> queue = new LinkedBlockingQueue<>(queueLength);

	/**
	 * Default Constructor
	 */
	public ServerSocketDemoWithBlockingQueue() {

	}

	/**
	 * Constructor with configurable thread count and queue length
	 */
	public ServerSocketDemoWithBlockingQueue(int serverThreadCount, int queueLength) {

		this.serverThreadCount = serverThreadCount;
		this.queueLength = queueLength;
		this.queue = new LinkedBlockingQueue<>(queueLength);

	}

	/**
	 * Entry point for server start up
	 * 
	 * @param args
	 */
	public static void main(String[] args) {

		ServerSocketDemoWithBlockingQueue demo = new ServerSocketDemoWithBlockingQueue();
		demo.startServer();
	}

	/**
	 * Start the server and handle incoming connections asynchronously
	 */
	private void startServer() {

		logger.info("Starting the server with {} threads and queue size {}", serverThreadCount, queueLength);
		ExecutorService service = Executors.newFixedThreadPool(serverThreadCount);
		for (int i = 0; i < serverThreadCount; i++) {
			service.submit(new ServerProcessing(queue));
		}
		ServerSocket serverSocket = null;
		try {

			// create the server socket
			serverSocket = new ServerSocket(8181, 100000);

			while (true) {
				Socket socket = serverSocket.accept();
				try {
					queue.put(socket);
				} catch (InterruptedException e) {
					logger.error("Problem in accepting new connections : {}", e);
					e.printStackTrace();
				}

			}

		} catch (IOException e) {
			logger.error("Problem in creating server socket {}", e);
			e.printStackTrace();
		} finally {
			if (null != serverSocket) {
				try {
					serverSocket.close();
					logger.info("Server Socket closed Successfully.");
				} catch (IOException e) {
					logger.error("Problem in closing server socket.");
					e.printStackTrace();
				}
			}
		}

	}

}

/**
 * Processing class
 * 
 * @author prabhukvn
 *
 */
class ServerProcessing extends Thread {
	BlockingQueue<Socket> queue;
	volatile int counter = 0;

	public ServerProcessing(BlockingQueue<Socket> queue) {
		this.queue = queue;
	}

	@Override
	public void run() {
		while (true) {
			try {

				process(queue.take());
			} catch (InterruptedException e) {
				// TODO Auto-generated catch block
				e.printStackTrace();
			} catch (SocketException e) {
				// TODO Auto-generated catch block
				e.printStackTrace();
			} catch (IOException e) {
				// TODO Auto-generated catch block
				e.printStackTrace();
			}
		}
	}

	/**
	 * Execute the actual work.
	 * @param socket
	 * @return
	 * @throws IOException
	 * @throws SocketException
	 * @throws InterruptedException
	 */
	private int process(Socket socket) throws IOException, SocketException, InterruptedException {
		System.out.println("Server Socket:" + socket);
		OutputStream out = socket.getOutputStream();
		out.write(("Request received" + counter).getBytes());
		counter++;
		socket.setKeepAlive(true);
		// Thread.sleep(1000);
		// allow client to close the connection
		socket.close();
		return counter;
	}
}
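
To exercise the server, a simple client can open a series of connections against it and read the response from each. A minimal sketch (host, port and loop count are assumptions matching the server above):

package sockets;

import java.io.InputStream;
import java.net.Socket;

public class ClientDemo {

	public static void main(String[] args) throws Exception {
		// open a number of sequential connections against the server on port 8181
		for (int i = 0; i < 1000; i++) {
			try (Socket socket = new Socket("localhost", 8181)) {
				InputStream in = socket.getInputStream();
				byte[] buffer = new byte[128];
				int read = in.read(buffer);
				if (read > 0) {
					System.out.println("Response " + i + ": " + new String(buffer, 0, read));
				}
			}
		}
	}
}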


Selection Sort Java Program

Selection sort is a sorting mechanism where each element is compared with every other remaining element to sort the array.
The following program performs a selection sort on integers, counts the number of iterations required to sort the entire array, and shows how the array changes after each swap.

 


package sorting;

import java.util.Arrays;

public class SelectionSort {

	public static void main(String[] rawInput) {

		int numberOfIterations = 0;
		int length = rawInput.length;
		int input[] = new int[length];
		for (int k = 0; k < length; k++) {
			input[k] = Integer.parseInt(rawInput[k]);
		}

		System.out.println("Raw input" + Arrays.toString(input));

		if (length == 0) {
			System.out.println("Invalid input ");
		} else {
			for (int i = 0; i < length; i++) {
				numberOfIterations++;
				// iterate the rest of the array
				for (int j = i; j < length; j++) {
					numberOfIterations++;
					if (input[i] > input[j]) {
						int temp = input[i];
						input[i] = input[j];
						input[j] = temp;
						System.out.println(Arrays.toString(input));
					}
				}
			}
		}

		System.out.println("after sorting" + Arrays.toString(input));
		System.out.println("Total Number of iterations:" + numberOfIterations);
	}

}

If we run the above program with the input 9 8 7 6 5 4 3 2 1 0 (passed as command-line arguments), the output is:
Raw input[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
[8, 9, 7, 6, 5, 4, 3, 2, 1, 0]
[7, 9, 8, 6, 5, 4, 3, 2, 1, 0]
[6, 9, 8, 7, 5, 4, 3, 2, 1, 0]
[5, 9, 8, 7, 6, 4, 3, 2, 1, 0]
[4, 9, 8, 7, 6, 5, 3, 2, 1, 0]
[3, 9, 8, 7, 6, 5, 4, 2, 1, 0]
[2, 9, 8, 7, 6, 5, 4, 3, 1, 0]
[1, 9, 8, 7, 6, 5, 4, 3, 2, 0]
[0, 9, 8, 7, 6, 5, 4, 3, 2, 1]
[0, 8, 9, 7, 6, 5, 4, 3, 2, 1]
[0, 7, 9, 8, 6, 5, 4, 3, 2, 1]
[0, 6, 9, 8, 7, 5, 4, 3, 2, 1]
[0, 5, 9, 8, 7, 6, 4, 3, 2, 1]
[0, 4, 9, 8, 7, 6, 5, 3, 2, 1]
[0, 3, 9, 8, 7, 6, 5, 4, 2, 1]
[0, 2, 9, 8, 7, 6, 5, 4, 3, 1]
[0, 1, 9, 8, 7, 6, 5, 4, 3, 2]
[0, 1, 8, 9, 7, 6, 5, 4, 3, 2]
[0, 1, 7, 9, 8, 6, 5, 4, 3, 2]
[0, 1, 6, 9, 8, 7, 5, 4, 3, 2]
[0, 1, 5, 9, 8, 7, 6, 4, 3, 2]
[0, 1, 4, 9, 8, 7, 6, 5, 3, 2]
[0, 1, 3, 9, 8, 7, 6, 5, 4, 2]
[0, 1, 2, 9, 8, 7, 6, 5, 4, 3]
[0, 1, 2, 8, 9, 7, 6, 5, 4, 3]
[0, 1, 2, 7, 9, 8, 6, 5, 4, 3]
[0, 1, 2, 6, 9, 8, 7, 5, 4, 3]
[0, 1, 2, 5, 9, 8, 7, 6, 4, 3]
[0, 1, 2, 4, 9, 8, 7, 6, 5, 3]
[0, 1, 2, 3, 9, 8, 7, 6, 5, 4]
[0, 1, 2, 3, 8, 9, 7, 6, 5, 4]
[0, 1, 2, 3, 7, 9, 8, 6, 5, 4]
[0, 1, 2, 3, 6, 9, 8, 7, 5, 4]
[0, 1, 2, 3, 5, 9, 8, 7, 6, 4]
[0, 1, 2, 3, 4, 9, 8, 7, 6, 5]
[0, 1, 2, 3, 4, 8, 9, 7, 6, 5]
[0, 1, 2, 3, 4, 7, 9, 8, 6, 5]
[0, 1, 2, 3, 4, 6, 9, 8, 7, 5]
[0, 1, 2, 3, 4, 5, 9, 8, 7, 6]
[0, 1, 2, 3, 4, 5, 8, 9, 7, 6]
[0, 1, 2, 3, 4, 5, 7, 9, 8, 6]
[0, 1, 2, 3, 4, 5, 6, 9, 8, 7]
[0, 1, 2, 3, 4, 5, 6, 8, 9, 7]
[0, 1, 2, 3, 4, 5, 6, 7, 9, 8]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
after sorting[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Total Number of iterations:65

Queue with O(1) complexity and unlimited size

A queue works on a First In, First Out (FIFO) basis. The following is a linked-list-based queue in Java with O(1) insertion complexity and no fixed capacity.

package java8pract.datastructures;

import java.util.ArrayList;
import java.util.List;

/**
 * A simple linked-list backed FIFO queue.
 *
 * @author prabhukvn
 */
public class CustomQueue<T extends Comparable> {

	Node<T> firstNode;
	Node<T> lastNode;
	Node<T> tempNode;
	int stackSize = 0;

	public void push(T element) {
		Node<T> node = null;
		if (null == firstNode) {
			// first element: it becomes the head of the queue
			node = new Node<>(null, element, null);
			firstNode = node;
			stackSize++;
		} else {
			// append the new node after the current tail
			Node<T> middleNode = tempNode;
			node = new Node<>(middleNode, element, null);
			middleNode.next = node;
			lastNode = node;
			stackSize++;
		}
		tempNode = node;
	}

	public int getSize() {
		return this.stackSize;
	}

	public T pop() {
		T element = firstNode.element;
		firstNode = firstNode.next;
		if (firstNode != null) {
			// detach the removed node from the list
			firstNode.previous = null;
		}
		stackSize--;
		return element;
	}

	/**
	 * Get all the elements in the linked list
	 *
	 * @return
	 */
	public List<Node<T>> getAll() {
		List<Node<T>> list = new ArrayList<>();
		Node<T> temp = firstNode;
		while (temp != null) {
			list.add(temp);
			System.out.println(temp.element);
			temp = temp.next;
		}
		return list;
	}

}
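
The Node class is not shown in the original post. A minimal sketch that matches the usage in CustomQueue above and CustomStack below (the generic parameter and field names are inferred, so the original definition may differ):

package java8pract.datastructures;

/**
 * A doubly linked node holding one element, used by CustomQueue and CustomStack.
 */
class Node<T extends Comparable> {

	Node<T> previous;
	T element;
	Node<T> next;

	Node(Node<T> previous, T element, Node<T> next) {
		this.previous = previous;
		this.element = element;
		this.next = next;
	}
}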


Stack with O(1) push complexity

This is a stack program where the algorithm works on the Last In, First Out (LIFO) principle. It is a typical stack implementation with O(1), i.e. constant, insertion complexity.


package java8pract.datastructures;

import java.util.ArrayList;
import java.util.List;

/**
 * A simple linked-list backed LIFO stack.
 *
 * @author prabhukvn
 */
public class CustomStack<T extends Comparable> {

	Node<T> firstNode;
	Node<T> lastNode;
	Node<T> tempNode;
	int stackSize = 0;

	public void push(T element) {
		Node<T> node = null;
		if (null == firstNode) {
			node = new Node<>(null, element, null);
			firstNode = node;
			// keep lastNode in sync so a single-element stack can be popped
			lastNode = node;
			stackSize++;
		} else {
			// link the new node after the current top
			Node<T> middleNode = tempNode;
			node = new Node<>(middleNode, element, null);
			middleNode.next = node;
			lastNode = node;
			stackSize++;
		}
		tempNode = node;
	}

	public int getSize() {
		return this.stackSize;
	}

	public T pop() {
		T element = lastNode.element;
		lastNode = lastNode.previous;
		if (lastNode != null) {
			// detach the removed node from the list
			lastNode.next = null;
		} else {
			// the stack is now empty
			firstNode = null;
		}
		// keep the insertion pointer on the new top of the stack
		tempNode = lastNode;
		stackSize--;
		return element;
	}

	/**
	 * Get all the elements in the linked list
	 *
	 * @return
	 */
	public List<Node<T>> getAll() {
		List<Node<T>> list = new ArrayList<>();
		Node<T> temp = firstNode;
		while (temp != null) {
			list.add(temp);
			System.out.println(temp.element);
			temp = temp.next;
		}
		return list;
	}

}
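
A quick usage sketch for both data structures, using the classes defined above (the sample values are arbitrary):

package java8pract.datastructures;

public class QueueStackDemo {

	public static void main(String[] args) {
		CustomQueue<String> queue = new CustomQueue<>();
		queue.push("first");
		queue.push("second");
		// FIFO: prints "first"
		System.out.println(queue.pop());

		CustomStack<String> stack = new CustomStack<>();
		stack.push("first");
		stack.push("second");
		// LIFO: prints "second"
		System.out.println(stack.pop());
	}
}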