How to Read CSV File in Bash
A CSV file represents data in plain text, where each row is a record and the columns are separated by commas. Because CSV files are simple and widely compatible, they are used across many applications, programming languages, and platforms. In this tutorial, we'll learn how to parse values from a Comma-Separated Values (CSV) file in Bash.
There are two popular approaches: using a loop to process each line, or using IFS (the Internal Field Separator) to split the fields of each line and store them in an array. By mastering these techniques, you will be able to read and process CSV files efficiently in Bash.
One method for reading a CSV file is a loop. It is very simple: a while loop iterates through each line of the CSV file and stores the lines in an array.
Let us use the following CSV file named “sample.csv”.
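The contents of the original sample file are not shown here, so for illustration assume a small, made-up three-record file (the names and places are hypothetical):

```shell
# Create a hypothetical sample.csv with three records: name, city, country.
cat > sample.csv <<'EOF'
Alice,London,UK
Bob,Paris,France
Carol,Madrid,Spain
EOF
```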
Now, let's write a Bash script with a loop that stores each line of the file in an array.
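A minimal sketch of such a script follows; the `printf` line recreates a hypothetical `sample.csv` so the snippet is self-contained:

```shell
#!/usr/bin/env bash
# Hypothetical sample data so the example runs on its own.
printf 'Alice,London,UK\nBob,Paris,France\nCarol,Madrid,Spain\n' > sample.csv

lines=()
# IFS= and -r keep leading/trailing whitespace and backslashes intact.
while IFS= read -r line; do
  lines+=("$line")
done < sample.csv

# Print every stored record.
for line in "${lines[@]}"; do
  echo "$line"
done
```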
IFS stands for Internal Field Separator, a special Bash variable that controls how a string is split into fields based on a specific delimiter. With IFS we can work with the individual values inside each line, which is the main difference between the two methods of processing a CSV file. This approach is useful when we want to access specific values within each line of the file.
Now, let's consider an example Bash script that uses IFS:
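A sketch of the IFS variant, again using hypothetical sample data so it is self-contained:

```shell
#!/usr/bin/env bash
printf 'Alice,London,UK\nBob,Paris,France\nCarol,Madrid,Spain\n' > sample.csv

first_column=()
# IFS=',' splits each line at the first comma: "name" gets the first field,
# "rest" receives everything after it.
while IFS=',' read -r name rest; do
  first_column+=("$name")
done < sample.csv

for value in "${first_column[@]}"; do
  echo "$value"
done
```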
This script is almost the same as the previous one, except that, as you can see, the IFS variable is now set to a comma.
The read command splits the input at the comma separator, so the second loop prints only the values from the first column. If we want to extract the whole line instead, we can simply remove the comma.
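With the comma removed (an empty IFS), no splitting happens and each whole line is kept, as in the first example:

```shell
#!/usr/bin/env bash
printf 'Alice,London,UK\nBob,Paris,France\n' > sample.csv

# Empty IFS: no field splitting, the whole line is preserved.
while IFS= read -r line; do
  echo "$line"
done < sample.csv
```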
And we again get the same output as without IFS.
The loop method is straightforward and allows us to store each line of the file in an array, whereas the IFS method provides more flexibility by allowing us to interact with individual values within each line, based on a specific delimiter.
Let's look at a Bash script that reads a CSV file containing server data into arrays.
The CSV file contains the following server metrics:
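The original server data is not shown; the sketch below invents a small `servers.csv` (hostname, CPU %, memory %) and reads each column into its own array, skipping the header row:

```shell
#!/usr/bin/env bash
# Hypothetical server metrics; the real file's columns are assumptions.
cat > servers.csv <<'EOF'
hostname,cpu,memory
web01,45,62
db01,78,81
cache01,23,40
EOF

hostnames=(); cpu=(); memory=()
{
  read -r header   # discard the header line
  # Split every remaining record into three fields on commas.
  while IFS=',' read -r host c m; do
    hostnames+=("$host")
    cpu+=("$c")
    memory+=("$m")
  done
} < servers.csv

echo "Read ${#hostnames[@]} servers"
```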
Special characters (commas, quotes, newlines) can cause various issues when working with CSV files. To read and process such files correctly, we need to handle these characters properly.
Now we will consider two approaches to handling special characters. For instance, suppose we have a CSV file whose values are wrapped in double quotes:
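The original quoted file is not shown, so assume a hypothetical `quoted.csv` like this:

```shell
# Hypothetical quoted.csv: every field is wrapped in double quotes.
cat > quoted.csv <<'EOF'
"Alice","London","UK"
"Bob","Paris","France"
EOF
```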
To remove the double quotes from the output, we can use one of two text-processing tools: sed or awk.
First, we add a sed command to the script that strips the double quotes from the output.
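A sketch of the sed variant, assuming a `quoted.csv` like the one above:

```shell
#!/usr/bin/env bash
printf '"Alice","London","UK"\n"Bob","Paris","France"\n' > quoted.csv

while IFS= read -r line; do
  # sed deletes every double-quote character in the record.
  echo "$line" | sed 's/"//g'
done < quoted.csv
```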
After adding the sed command, the quotes no longer appear in the output.
In the following script, we use the awk command to remove the double quotes instead.
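With awk, a single command does the same job; `gsub` replaces every double quote with an empty string before printing each record:

```shell
#!/usr/bin/env bash
printf '"Alice","London","UK"\n"Bob","Paris","France"\n' > quoted.csv

# gsub(/"/, "") removes all double quotes from the current record.
awk '{ gsub(/"/, ""); print }' quoted.csv
```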
This section covers how to handle columns when reading CSV files. Working with columns is one of the most common tasks in data processing, so a solid understanding of how to extract and manipulate specific columns is important.
The cut command is a utility that selects specific columns from a CSV file. It takes a delimiter and a list of field numbers.
For instance, to extract the first column, the script looks like this:
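A sketch using a hypothetical `contacts.csv` with name, email, and phone columns (the real file's layout is an assumption):

```shell
#!/usr/bin/env bash
# Hypothetical contacts.csv: name, email, phone.
printf 'Alice,alice@example.com,555-0101\nBob,bob@example.com,555-0102\n' > contacts.csv

# -d sets the delimiter, -f selects field 1 (the name column).
cut -d',' -f1 contacts.csv
```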
The script above is flexible, since it can extract different columns from the file: to get, for example, the last two columns, just change the list of field numbers.
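Using the same hypothetical three-column `contacts.csv`, selecting fields 2 and 3 yields the last two columns:

```shell
#!/usr/bin/env bash
printf 'Alice,alice@example.com,555-0101\nBob,bob@example.com,555-0102\n' > contacts.csv

# -f2,3 selects the second and third fields (email and phone).
cut -d',' -f2,3 contacts.csv
```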
Now suppose the task is to extract the name and phone columns. Fortunately, awk is a powerful text-processing tool that can handle this; the script looks like this:
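Assuming the same hypothetical name/email/phone layout, awk can print non-adjacent columns, which cut cannot reorder as freely:

```shell
#!/usr/bin/env bash
printf 'Alice,alice@example.com,555-0101\nBob,bob@example.com,555-0102\n' > contacts.csv

# -F sets the field separator; $1 is the name, $3 the phone.
awk -F',' '{ print $1, $3 }' contacts.csv
# prints:
# Alice 555-0101
# Bob 555-0102
```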
Another method is perhaps the simplest, since it is straightforward and requires no tool-specific syntax.
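A sketch using only the read builtin: IFS splits each line into one variable per column, and we echo just the ones we need (the column names are assumptions about the file layout):

```shell
#!/usr/bin/env bash
printf 'Alice,alice@example.com,555-0101\nBob,bob@example.com,555-0102\n' > contacts.csv

# One variable per column; only name and phone are printed.
while IFS=',' read -r name email phone; do
  echo "$name $phone"
done < contacts.csv
```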
In this example, the task was again to extract the name and phone columns. We set the comma delimiter in the IFS variable and listed one variable for each column in the file.
As you can see, the result is correct: the script extracted the specified columns.
To summarize, there are two main methods of processing CSV files in Bash: using a loop or using IFS (Internal Field Separator).
The loop approach is straightforward and stores each whole line.
The IFS approach lets us work with each value within a line separately.
Tools such as sed, awk, and cut handle special characters and columns, which makes file processing more flexible.
The examples above showed real-world applications of Bash scripts for working with CSV files.