Q1. What is Hive?
Ans: Hive is an ETL and data warehousing tool built on top of the Hadoop Distributed File System (HDFS). It is a data warehouse framework for querying and analyzing data stored in HDFS. Hive is open-source software that lets programmers analyze large data sets on Hadoop.
Q2. When should you use Hive?
Ans: Use Hive when the data to be analyzed is large, relatively static, and stored in HDFS; when batch-style analytical queries with a SQL-like syntax are sufficient; and when fast, row-level transactional access is not required.
Q3. What are the different modes of Hive?
Ans: Depending on the size of the data and the number of data nodes in Hadoop, Hive can operate in two modes.
These modes are:
- Local mode, used when Hadoop has a single data node or the data set is small
- MapReduce mode, used when Hadoop has multiple data nodes and the data is distributed across them
Q4. When should MapReduce mode be used?
Ans: MapReduce mode is used when:
- Hadoop has multiple data nodes and the data is distributed across them
- Large data sets must be processed, and the query should execute in parallel
Q5. What are the key components of the Hive architecture?
Ans: The key components of the Hive architecture include:
- User Interface (CLI, web UI, JDBC/ODBC clients)
- Driver
- Compiler
- Metastore
- Execution Engine
Q6. What are the different types of tables available in Hive?
Ans: There are two types of tables available in Hive:
- Managed (internal) tables, for which Hive manages both the data and the metadata
- External tables, for which Hive manages only the metadata while the data stays at its external location
Q7. Explain what the Metastore in Hive is.
Ans: The Metastore is a central repository in Hive. It stores schema information and other metadata in an external relational database.
Q8. What is Hive composed of?
Ans: Hive consists of three main parts:
- Hive Clients
- Hive Services
- Hive Storage and Computing
Q9. Which types of database does Hive support?
Ans: For single-user metadata storage, Hive uses the Derby database; for multi-user or shared metadata, Hive uses MySQL.
Q10. What are Hive's default read and write classes?
Hive's default read and write classes are:
- TextInputFormat / HiveIgnoreKeyTextOutputFormat
- SequenceFileInputFormat / SequenceFileOutputFormat
Q11. What are the different modes of Hive?
Ans: The mode of Hive depends on the size of the data and the number of data nodes in Hadoop.
These modes are Local mode and MapReduce mode (see Q3).
Q12. Why is Hive not suitable for OLTP systems?
Ans: Hive is not suitable for OLTP systems because it does not provide row-level insert and update operations.
Q13. Differentiate between Hive and HBase.
Hive | HBase
Enables most SQL queries | Does not allow SQL queries
Does not support record-level insert, update, and delete operations on tables | Supports record-level insert, update, and delete operations
A data warehouse framework | A NoSQL database
Runs on top of MapReduce | Runs on top of HDFS
Q14. Explain what a Hive variable is. What do we use it for?
Ans: A Hive variable is created in the Hive environment and can be referenced by Hive scripts. It is used to pass values to Hive queries when the query starts executing.
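For illustration, a minimal sketch; the variable region and the table sales are hypothetical:
hive> SET hivevar:region=US;
hive> SELECT * FROM sales WHERE country = '${hivevar:region}';  -- expands to 'US'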
Q15. What is the ObjectInspector functionality in Hive?
Ans: The ObjectInspector functionality in Hive is used to analyze the internal structure of columns, rows, and complex objects. It allows access to the internal fields inside those objects.
Q16. What is HiveServer2 (HS2)?
Ans: It is a server interface that performs the following functions:
- Enables remote clients to execute queries against Hive
- Retrieves the results of those queries
Some advanced features in its latest version, based on Thrift RPC, include:
- Multi-client concurrency
- Authentication
Q17. What does the Hive query processor do?
Ans: The Hive query processor converts a query into a graph of MapReduce jobs along with the execution-time framework, so that the jobs can be executed in the order of their dependencies.
Q18. What are the components of a Hive query processor?
Ans: The components of a Hive query processor include:
- Parser
- Semantic Analyzer
- Type Checking
- Logical Plan Generation
- Optimizer
- Physical Plan Generation
- Execution Engine
- Operators
- UDFs and UDAFs
Q19. What are partitions in Hive?
Ans: Hive organizes tables into partitions: subdivisions of a table based on the values of one or more partition columns (for example, a date column), so that queries can read only the relevant slice of the data.
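As a minimal sketch (the table logs and its columns are hypothetical), a partitioned table and a query that prunes to a single partition could look like:
CREATE TABLE logs (msg STRING)
PARTITIONED BY (dt STRING);
-- Only the directory for dt='2015-01-01' is scanned:
SELECT msg FROM logs WHERE dt = '2015-01-01';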
Q20. When should you choose an “Internal Table” and when an “External Table” in Hive?
Ans: In Hive, choose an internal table:
- when the data is temporary, or Hive should fully manage the lifecycle of both the table and its data
Choose an external table:
- when the data is also used outside of Hive, or the data files should remain in place even after the table is dropped
Q21. Can a view have the same name as a Hive table?
Ans: No. The name of a view must be unique compared to all other tables and views present in the same database.
Q22. What are views in Hive?
Ans: In Hive, views are similar to tables. They are generated from queries based on requirements, and they do not store data themselves.
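A minimal sketch, reusing the employee table from examples later in this article (the view name is hypothetical):
CREATE VIEW high_salary AS
SELECT name, salary FROM employee WHERE salary > 50000;
-- A view is queried like a table:
SELECT * FROM high_salary;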
Q23. Explain how Hive deserializes and serializes data.
Ans: When reading data, Hive first uses the InputFormat, which connects to a RecordReader to read records. Each record is then passed to the SerDe, which deserializes it into a row object; the SerDe uses an ObjectInspector to expose the individual fields inside that object. Serialization is the reverse process: the row object is converted back into a record that the OutputFormat can write.
Q24. What are buckets in Hive?
Ans: The data present in a table or partition can be subdivided into buckets. The division is based on a hash of particular table columns, which makes sampling and joins more efficient (see the sketch below).
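As a sketch (table and column names are hypothetical), a bucketed table is declared with CLUSTERED BY:
CREATE TABLE emp_bucketed (id INT, name STRING)
CLUSTERED BY (id) INTO 4 BUCKETS;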
Q25. In Hive, how can you enable buckets?
Ans: In Hive, you can enable buckets by using the following command:
set hive.enforce.bucketing=true;
Q26. Can you overwrite the Hadoop MapReduce configuration in Hive?
Ans: Yes, you can overwrite the Hadoop MapReduce configuration in Hive by using the SET command in a Hive session, for example as shown below.
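A sketch; the value shown is only illustrative:
hive> SET mapreduce.job.reduces=10;  -- older Hadoop versions use mapred.reduce.tasks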
Q27. Explain how you can change a column's data type in Hive.
Ans: You can change a column's data type in Hive by using the command:
ALTER TABLE table_name CHANGE column_name column_name new_datatype;
Q28. What is the difference between ORDER BY and SORT BY in Hive?
Ans: ORDER BY performs a total ordering of the result set, which forces all data through a single reducer and can be slow for large outputs. SORT BY sorts the rows within each reducer, so the output is only partially ordered, but the work runs in parallel; see the sketch below.
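A sketch of the difference, using the employee table from later examples:
-- Total ordering; all rows pass through one reducer:
SELECT * FROM employee ORDER BY salary;
-- Ordering within each reducer only, but fully parallel:
SELECT * FROM employee SORT BY salary;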
Q29. Explain when to use explode in Hive.
Ans: Hadoop developers sometimes take an array as input and need to convert it into separate table rows. To convert such complex data types into the desired table format, Hive uses explode().
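A minimal sketch, assuming a hypothetical table orders with an ARRAY&lt;STRING&gt; column items:
SELECT order_id, item
FROM orders
LATERAL VIEW explode(items) item_view AS item;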
Q30. How can you stop a partition from being queried?
Ans: You can stop a partition from being queried by using the ENABLE OFFLINE clause with the ALTER TABLE statement.
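A sketch, reusing the hypothetical partitioned logs table from Q19 (note that ENABLE OFFLINE was removed in later Hive releases, so this applies to older versions):
ALTER TABLE logs PARTITION (dt = '2015-01-01') ENABLE OFFLINE;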
Q31. Compare Pig and Hive.
Criteria | Pig | Hive
Architecture | Procedural data-flow language | SQL-like declarative language
Application | Programming purposes | Report creation
Operational field | Client side | Server side
Support for Avro files | Yes | Yes (via the AvroSerDe)
Q32. What is the definition of Hive? What is the present version of Hive, and explain ACID transactions in Hive.
Ans: Hive is an open-source data warehouse system. We can use Hive for analyzing and querying large data sets stored in Hadoop files, with a query language similar to SQL. The present version of Hive is 0.13.1. Hive supports ACID (Atomicity, Consistency, Isolation, Durability) transactions: they are provided at the row level, so insert, delete, and update operations become possible.
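As a sketch, row-level insert, update, and delete require a transactional table: stored as ORC, bucketed, and with the transactional property set (the table name here is hypothetical):
CREATE TABLE tx_demo (id INT, name STRING)
CLUSTERED BY (id) INTO 2 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
UPDATE tx_demo SET name = 'updated' WHERE id = 1;  -- UPDATE/DELETE require Hive 0.14 or later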
Q33. Explain what a Hive variable is. What do we use it for?
Ans: A Hive variable is created in the Hive environment and is referenced by Hive scripting languages. It is used to pass values to Hive queries when the query starts executing, for example in scripts run with the source command.
Q34. What kind of data warehouse application is suitable for Hive? What are the types of tables in Hive?
Ans: Hive is not a full database; the design rules and limitations of Hadoop and HDFS restrict what Hive can do. Hive is most suitable for data warehouse applications
where:
- The data is relatively static
- Fast response times are not required
- The data is not changing rapidly
Hive has two types of tables: managed (internal) tables and external tables.
Q35. Can we change settings within a Hive session? If yes, how?
Ans: Yes, we can change settings within a Hive session by using the SET command. It helps to change Hive job settings for a particular query.
Example: The following command ensures that buckets are populated according to the table definition:
hive> SET hive.enforce.bucketing=true;
We can see the current value of any property by using SET with the property name. SET alone will list all the properties whose values have been set by Hive.
hive> SET hive.enforce.bucketing;
hive.enforce.bucketing=true
This list will not include the Hadoop defaults, so we should use:
SET -v
It will list all the properties including the Hadoop defaults in the system.
Q36. Is it possible to add 100 nodes when we already have 100 nodes in Hive? How?
Ans: Yes, we can add nodes by following the steps below (this is standard Hadoop cluster administration):
- Prepare the new machines and set up passwordless SSH from the master
- Install Hadoop on each new node with the same configuration as the existing cluster
- Add the new nodes' hostnames to the master's slaves (workers) file
- Start the DataNode and NodeManager/TaskTracker daemons on the new nodes so they join the cluster
Q37. Explain the concatenation function in Hive with an example.
Ans: The CONCAT function joins the input strings. We can specify any
number of strings separated by commas.
Example:
CONCAT('Intellipaat','-','is','-','a','-','eLearning','-','provider');
Output:
Intellipaat-is-a-eLearning-provider
Here we had to specify the separator '-' between every pair of strings. If the separator is the same for all strings, Hive provides another function,
CONCAT_WS. In this case, we have to specify the separator first.
CONCAT_WS('-','Intellipaat','is','a','eLearning','provider');
Output: Intellipaat-is-a-eLearning-provider.
Q38. Explain the TRIM and REVERSE functions in Hive with examples.
Ans: The TRIM function removes the leading and trailing spaces from a string.
Example:
TRIM(' INTELLIPAAT ');
Output:
INTELLIPAAT
To remove only the leading spaces:
LTRIM(' INTELLIPAAT');
To remove only the trailing spaces:
RTRIM('INTELLIPAAT ');
The REVERSE function reverses the characters in a string.
Example:
REVERSE('INTELLIPAAT');
Output:
TAAPILLETNI
Q39. How do you change a column's data type in Hive? Explain RLIKE in Hive.
Ans: We can change a column's data type by using ALTER and CHANGE.
The syntax is:
ALTER TABLE table_name CHANGE column_name column_name new_datatype;
Example: to change the data type of the salary column in the employee table from integer to bigint:
ALTER TABLE employee CHANGE salary salary BIGINT;
RLIKE: a special function in Hive that performs regular-expression matching. A RLIKE B evaluates to true if string A, or any substring of A, matches the Java regular expression B.
Example:
'Intellipaat' RLIKE 'tell' → true
'Intellipaat' RLIKE '^I.*' → true (this is a regular expression)
Q40. What are the components used in the Hive query processor?
Ans: The components of the Hive query processor include:
- Parser
- Semantic Analyzer
- Type Checking
- Logical Plan Generation
- Optimizer
- Physical Plan Generation
- Execution Engine
- Operators
- UDFs and UDAFs
Q41. What are buckets in Hive?
Ans: The data present in a table or partition is divided into different buckets, on the basis of a hash of particular table columns.
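Bucketing also enables efficient sampling; a sketch using the hypothetical emp_bucketed table from Q24:
SELECT * FROM emp_bucketed TABLESAMPLE (BUCKET 1 OUT OF 4 ON id);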
Q42. Explain the process to access subdirectories recursively in Hive queries.
Ans: We can access subdirectories recursively in Hive by using the commands below:
hive> SET mapred.input.dir.recursive=true;
hive> SET hive.mapred.supports.subdirectories=true;
Hive tables can then be pointed at the higher-level directory; this suits a directory structure like /data/country/state/city/.
Q43. What are the components used in the Hive query processor?
Ans: As listed under Q40, the components include the Parser, Semantic Analyzer, Type Checking, Logical Plan Generation, Optimizer, Physical Plan Generation, Execution Engine, Operators, and UDFs and UDAFs.
Q44. How do you skip header rows from a table in Hive?
Ans: Suppose a log file begins with header records such as:
System=….
Version=…
Sub-version=….
We do not want these three header lines to appear in our Hive query results. To skip header lines from our tables in Hive, set a table property that allows us to skip them:
CREATE EXTERNAL TABLE employee (
name STRING,
job STRING,
dob STRING,
id INT,
salary INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ' ' STORED AS TEXTFILE
LOCATION '/user/data'
TBLPROPERTIES ("skip.header.line.count"="3");
Q45. What is the maximum size of the string data type supported by Hive? Mention the binary formats that Hive supports.
Ans: The maximum size of the string data type supported by Hive is 2 GB.
Hive supports the text file format by default, and it supports these binary formats: sequence files, ORC files, Avro data files, and Parquet files.
Sequence files: a general-purpose binary format that is splittable, compressible, and row-oriented.
ORC files: ORC stands for Optimized Row Columnar; it is a record-columnar, column-oriented storage format. The table is divided into row splits, and within each split the values of a column are stored together, column by column.
Avro data files: like sequence files, they are splittable, compressible, and row-oriented, but they additionally support schema evolution and multi-language bindings.
Q46. What is the precedence order of Hive configuration?
Ans: Hive uses a precedence hierarchy for setting properties (highest priority first):
- The Hive SET command
- The command-line -hiveconf option
- hive-site.xml
- hive-default.xml
- hadoop-site.xml
- hadoop-default.xml
Q47. If you run a SELECT * query in Hive, why does it not run MapReduce?
Ans: The hive.fetch.task.conversion property lets Hive execute simple queries — plain SELECTs with filters and LIMIT, for example — as fetch tasks, skipping MapReduce entirely and thereby lowering latency.
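A sketch of enabling this behavior:
hive> SET hive.fetch.task.conversion=more;
hive> SELECT * FROM employee LIMIT 10;  -- runs as a fetch task, with no MapReduce job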
Q48. How can Hive improve performance with ORC format tables?
Ans: We can store Hive data in a highly efficient manner using the Optimized Row Columnar (ORC) file format, which overcomes many limitations of other Hive file formats. Using ORC files improves performance when reading, writing, and processing data.
SET hive.compute.query.using.stats=true;
SET hive.stats.dbclass=fs;
CREATE TABLE orc_table (
id INT,
name STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ':'
LINES TERMINATED BY '\n'
STORED AS ORC;
Q49. Explain the functionality of ObjectInspector.
Ans: It helps to analyze the internal structure of a row object and the individual structure of columns in Hive. It also provides a uniform way to access complex objects that can be stored in multiple formats in memory, including:
- An instance of a Java class (Thrift or native Java)
- A standard Java object
- A lazily initialized object
The ObjectInspector tells us the structure of the object and also the ways to access its internal fields.
Q50. Whenever we run a Hive query, a new metastore_db is created. Why?
Ans: A local metastore is created when we run Hive in embedded mode. Before creating it, Hive checks whether a metastore already exists; this is controlled by the hive-site.xml property "javax.jdo.option.ConnectionURL", whose default value is "jdbc:derby:;databaseName=metastore_db;create=true". Because the default Derby URL is a relative path, a new metastore_db directory is created under whichever working directory Hive is started from. To change this behavior, set the location to an absolute path, so that the same metastore is used from every location.
Q51. How can we access subdirectories recursively?
Ans: We can access subdirectories recursively in Hive by using the commands below:
hive> SET mapred.input.dir.recursive=true;
hive> SET hive.mapred.supports.subdirectories=true;
Hive tables can then be pointed at the higher-level directory; this suits a directory structure like /data/country/state/city/.
Q52. What are the uses of explode in Hive?
Ans: Hadoop developers sometimes take an array as input and need to convert it into separate table rows. To convert such complicated data types into the desired table format, Hive essentially uses explode() (see the example under Q29).
Q53. What mechanisms are available for connecting from applications when we run Hive as a server?
Ans:
- Thrift Client: allows calling Hive commands from any programming language that supports Thrift
- JDBC Driver: Hive supports a Type 4 (pure Java) JDBC driver
- ODBC Driver: supports connections over the ODBC protocol
Q54. How do we write our own custom SerDe?
Ans: In most cases, end users want to read their own data format rather than write it, so the user needs to implement only a Deserializer instead of a full SerDe.
Example: The RegexDeserializer deserializes data using the configuration parameter 'regex' and a list of column names.
If our SerDe supports DDL (that is, parameterized columns and column types), we probably want to implement a protocol based on DynamicSerDe, since writing a "thrift DDL" parser from scratch is non-trivial.
Q55. Mention the date data type in Hive. Name the Hive collection data types.
Ans: The TIMESTAMP data type stores dates in the java.sql.Timestamp format.
The three collection data types in Hive are:
- ARRAY
- MAP
- STRUCT
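A sketch of a table that uses all three collection types (the table and column names are hypothetical):
CREATE TABLE employee_details (
name STRING,
phones ARRAY&lt;STRING&gt;,
grades MAP&lt;STRING, INT&gt;,
address STRUCT&lt;city:STRING, zip:STRING&gt;
);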
Q56. Can we run UNIX shell commands from Hive? Can Hive queries be executed from script files? How? Give an example.
Ans: Yes, we can run UNIX shell commands from Hive by using the ! mark before the command. For example, !pwd at the Hive prompt will list the current directory.
We can execute Hive queries from the script files by using the source command.
Example:
hive> source /path/to/file/file_with_query.hql;