Learn to write CSV data using FlatFileItemWriter. It is an item writer that writes data to a file or stream. The location of the output file is defined by a Resource and must represent a writable file.
Table of Contents
- Project Structure
- Write data to CSV files with FlatFileItemWriter
- Maven Dependency
- Demo
Project Structure
In this project, we will learn to –
- Read 3 CSV files from input/*.csv using MultiResourceItemReader.
- Write all the data to the output/outputData.csv file using FlatFileItemWriter.

Write data to CSV files with FlatFileItemWriter
Use FlatFileItemWriter to write the lines that were read from the CSV files. It writes the content to any Resource passed to the writer.setResource() method.
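For example, the resource wiring on its own looks like this (a minimal sketch using the same output path as this project):

FlatFileItemWriter<Employee> writer = new FlatFileItemWriter<>();
//The resource must point to a writable file location
writer.setResource(new FileSystemResource("output/outputData.csv"));

The complete batch configuration is shown below.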
package com.howtodoinjava.demo.config;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.support.RunIdIncrementer;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.MultiResourceItemReader;
import org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper;
import org.springframework.batch.item.file.mapping.DefaultLineMapper;
import org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor;
import org.springframework.batch.item.file.transform.DelimitedLineAggregator;
import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.Resource;

import com.howtodoinjava.demo.model.Employee;

@Configuration
@EnableBatchProcessing
public class BatchConfig {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Value("input/inputData*.csv")
    private Resource[] inputResources;

    private Resource outputResource = new FileSystemResource("output/outputData.csv");

    @Bean
    public FlatFileItemWriter<Employee> writer() {
        //Create writer instance
        FlatFileItemWriter<Employee> writer = new FlatFileItemWriter<>();

        //Set output file location
        writer.setResource(outputResource);

        //All job repetitions should "append" to the same output file
        writer.setAppendAllowed(true);

        //Name field values sequence based on object properties
        writer.setLineAggregator(new DelimitedLineAggregator<Employee>() {
            {
                setDelimiter(",");
                setFieldExtractor(new BeanWrapperFieldExtractor<Employee>() {
                    {
                        setNames(new String[] { "id", "firstName", "lastName" });
                    }
                });
            }
        });
        return writer;
    }

    @Bean
    public Job readCSVFilesJob() {
        return jobBuilderFactory
                .get("readCSVFilesJob")
                .incrementer(new RunIdIncrementer())
                .start(step1())
                .build();
    }

    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1").<Employee, Employee>chunk(5)
                .reader(multiResourceItemReader())
                .writer(writer())
                .build();
    }

    @Bean
    public MultiResourceItemReader<Employee> multiResourceItemReader() {
        MultiResourceItemReader<Employee> resourceItemReader = new MultiResourceItemReader<Employee>();
        resourceItemReader.setResources(inputResources);
        resourceItemReader.setDelegate(reader());
        return resourceItemReader;
    }

    @SuppressWarnings({ "rawtypes", "unchecked" })
    @Bean
    public FlatFileItemReader<Employee> reader() {
        //Create reader instance
        FlatFileItemReader<Employee> reader = new FlatFileItemReader<Employee>();

        //Set number of lines to skip. Use it if the file has header rows.
        reader.setLinesToSkip(1);

        //Configure how each line will be parsed and mapped to different values
        reader.setLineMapper(new DefaultLineMapper() {
            {
                //3 columns in each row
                setLineTokenizer(new DelimitedLineTokenizer() {
                    {
                        setNames(new String[] { "id", "firstName", "lastName" });
                    }
                });
                //Set values in Employee class
                setFieldSetMapper(new BeanWrapperFieldSetMapper<Employee>() {
                    {
                        setTargetType(Employee.class);
                    }
                });
            }
        });
        return reader;
    }
}
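The writer above produces an output file without a header row. If you need one, FlatFileItemWriter also accepts a FlatFileHeaderCallback; a minimal sketch (the header text is illustrative, and header handling can differ when appending to an existing file):

//Sketch: add inside the writer() bean.
//Needs imports: org.springframework.batch.item.file.FlatFileHeaderCallback, java.io.Writer, java.io.IOException
writer.setHeaderCallback(new FlatFileHeaderCallback() {
    @Override
    public void writeHeader(Writer headerWriter) throws IOException {
        //Write a header line before the first item
        headerWriter.write("id,firstName,lastName");
    }
});

The Employee model class mapped from and written to the CSV files: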
public class Employee {

    String id;
    String firstName;
    String lastName;

    //public setter and getter methods
}
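Both BeanWrapperFieldSetMapper and BeanWrapperFieldExtractor work through standard JavaBean accessors, so the property names must match the "id", "firstName" and "lastName" names configured above. For completeness, a sketch of the bean with the accessors written out:

public class Employee {

    private String id;
    private String firstName;
    private String lastName;

    //Accessors used by BeanWrapperFieldSetMapper (setters) and BeanWrapperFieldExtractor (getters)
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}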
The three input CSV files under the input folder contain the following records:

id,firstName,lastName
1,Lokesh,Gupta
2,Amit,Mishra
3,Pankaj,Kumar
4,David,Miller

id,firstName,lastName
5,Ramesh,Gupta
6,Vineet,Mishra
7,Amit,Kumar
8,Dav,Miller

id,firstName,lastName
9,Vikas,Kumar
10,Pratek,Mishra
11,Brian,Kumar
12,David,Cena
Maven Dependency
Look at the project dependencies.
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>
    <groupId>com.howtodoinjava</groupId>
    <artifactId>App</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>App</name>
    <url>http://maven.apache.org</url>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.0.3.RELEASE</version>
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-batch</artifactId>
        </dependency>
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
            <scope>runtime</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>

    <repositories>
        <repository>
            <id>repository.spring.release</id>
            <name>Spring GA Repository</name>
            <url>http://repo.spring.io/release</url>
        </repository>
    </repositories>
</project>
Demo
Before running the application, look at the complete code of App.java, which runs the application as a Spring Boot application.
package com.howtodoinjava.demo;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@SpringBootApplication
@EnableScheduling
public class App {

    @Autowired
    JobLauncher jobLauncher;

    @Autowired
    Job job;

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }

    @Scheduled(cron = "0 */1 * * * ?")
    public void perform() throws Exception {
        JobParameters params = new JobParametersBuilder()
                .addString("JobID", String.valueOf(System.currentTimeMillis()))
                .toJobParameters();
        jobLauncher.run(job, params);
    }
}
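If you do not want the job to run on a schedule, one alternative (a sketch, not part of the original setup) is to launch it once at startup with a CommandLineRunner bean declared in App or any configuration class:

//Hypothetical alternative to the @Scheduled method: run the job once when the application starts.
//Needs import: org.springframework.boot.CommandLineRunner
@Bean
public CommandLineRunner runJobOnce(JobLauncher jobLauncher, Job job) {
    return args -> {
        //Unique parameters so each launch creates a new job instance
        JobParameters params = new JobParametersBuilder()
                .addString("JobID", String.valueOf(System.currentTimeMillis()))
                .toJobParameters();
        jobLauncher.run(job, params);
    };
}

The properties file below disables the batch job's auto start at application startup, so only the scheduled method (or a manual launch) triggers job runs.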
#Disable batch job's auto start
spring.batch.job.enabled=false

spring.main.banner-mode=off
Run the application
Run the application as a Spring Boot application and watch the console. The batch job will start at the beginning of every minute. It will read the three input files and write the combined records to output/outputData.csv, which will contain:
1,Lokesh,Gupta
2,Amit,Mishra
3,Pankaj,Kumar
4,David,Miller
5,Ramesh,Gupta
6,Vineet,Mishra
7,Amit,Kumar
8,Dav,Miller
9,Vikas,Kumar
10,Pratek,Mishra
11,Brian,Kumar
12,David,Cena
Drop me your questions in the comments section.
Happy Learning !!