If you are loading dynamic or static partitions into the final table from another temp table with a Hive statement (for example, INSERT INTO final_table PARTITION(...) SELECT * FROM temp_table), you don't need any of the methods described below: because Hive itself performs the load, it updates the metadata of the final table for you. MSCK only matters when data reaches the table location by some other route.

Question: what does this exception mean?

hive> msck repair table testsb.xxx_bk1;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

Resolution for one known case: the error occurs when hive.mv.files.thread=0, and increasing the value of the parameter (to 15, for example) fixes the issue; this is a known bug. The exception itself is very generic, though, so the rest of this article covers what MSCK REPAIR actually does and the other reasons it fails.

MSCK repair: when an external table is created in Hive, metadata such as the table schema and the partition information is stored in the Hive metastore. (For Hive CLI, Pig, and MapReduce users, access to Hive tables can be controlled using storage-based authorization enabled on the metastore server.) If new partitions are later added directly to HDFS instead of through Hive, the metastore, and hence Hive, will not be aware of them unless you either add each partition with an ALTER TABLE ... ADD PARTITION statement or run a metastore check with the repair table option:

msck repair table user;

The SYNC PARTITIONS option is equivalent to calling both ADD and DROP PARTITIONS. The command has to traverse all subdirectories of the table location, so by limiting the number of partitions created per run you prevent the Hive metastore from timing out or hitting an out-of-memory error. If the partition data itself sits in nested subdirectories, also SET mapred.input.dir.recursive=true; so that queries can read it. Finally, if you submit the MSCK REPAIR query from a Lambda function or another AWS service, add the required permissions to the execution role, because if the policy doesn't allow the action, Athena can't add partitions to the metastore.
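To make the first point concrete, here is a minimal sketch of a load that goes through Hive itself; the table and column names are invented for illustration, and the two SET lines are only needed for dynamic partitioning. Because the INSERT is a Hive statement, the partition metadata is written as part of the load and no repair is needed afterwards.

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- assumed schema: sales_final(order_id INT, amount DOUBLE) partitioned by (year INT, month INT)
INSERT INTO TABLE sales_final PARTITION (year, month)
SELECT order_id, amount, year, month
FROM sales_staging;

SHOW PARTITIONS sales_final;  -- the new partitions are already registered, no MSCK required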
Hive stores a list of partitions for each table in its metastore, and MSCK REPAIR is the bulk way of keeping that list in sync. It is a useful command and it has saved a lot of time for me, but like most things in life it is not perfect, and we should not use it when we need to add only one or two partitions: scanning the whole table location is overkill for an occasional partition, and we should use an ALTER TABLE ... ADD PARTITION query in such cases. Whichever way you register it, the data has to sit in a directory named after the partition specification, for example a directory named 'region=eastregio' inside the table location directory. On large tables Azure Databricks helps by using multiple threads for a single MSCK REPAIR by default, which splits createPartitions() into batches. MSCK also works in the other direction: when the list of partitions is stale, say it still includes dept=sales although that directory is gone from the file system, the DROP or SYNC options remove the stale entry (see HIVE-17824).

Back to the failed repair above. One report came from a team that had taken a backup of a production database, moved it to a development machine's local filesystem, and then copied the data from the local mount point into the Hive database's HDFS location (if the data paths differ during such a migration, you can manually edit the generated alltables.sql file to reflect the changes). With a copied layout like that, a likely cause is a directory that does not comply with the partition format; the HiveServer log file, for example /var/log/Bigdata/hive/hiveserver/hive.log, records which directory was rejected, and you should look at the HS2 logs to see whether the msck command ignored such partitions. It also helps to note which version of Hive you are running, since the handling of bad directories changed in 1.3, as described below. Athena-specific failures are covered in https://docs.aws.amazon.com/athena/latest/ug/msckrepair-table.html#msck-repair-table-troubleshooting.
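As a sketch of the stale-partition case, assume a recent Hive release (the ADD/DROP/SYNC options are not available in old versions) and a table staff_by_dept whose dept=sales directory has been deleted from HDFS; the table name is made up for illustration.

MSCK REPAIR TABLE staff_by_dept SYNC PARTITIONS;  -- registers new directories AND drops dept=sales
SHOW PARTITIONS staff_by_dept;                    -- the stale entry is gone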
Run MSCK REPAIR TABLE to register the partitions: in other words, it will add any partitions that exist on HDFS but not in the metastore to the metastore. So should you keep repairing an existing table, or is running it just once at table creation enough? You only run MSCK REPAIR TABLE when the structure or the partitions of the external table have changed outside of Hive; a table whose partitions only ever arrive through Hive statements never needs it. Two platform notes while we are here: underscores (_) are the only special characters that Athena supports in database, table, view, and column names, so a repair can fail on a name alone, and if the table cannot be found, Azure Databricks raises a TABLE_OR_VIEW_NOT_FOUND error rather than attempting a repair.
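Here is an end-to-end sketch of that workflow from the Hive CLI; the table name, path and file are assumptions, and the dfs lines simply shell out to HDFS from within Hive.

CREATE EXTERNAL TABLE web_logs (line STRING)
PARTITIONED BY (log_date STRING)
LOCATION '/data/web_logs';

dfs -mkdir -p /data/web_logs/log_date=2020-01-01;                  -- directory created outside Hive
dfs -put /tmp/2020-01-01.log /data/web_logs/log_date=2020-01-01/;

SHOW PARTITIONS web_logs;    -- empty: the metastore has not seen the new directory
MSCK REPAIR TABLE web_logs;  -- scans the table location and registers log_date=2020-01-01
SHOW PARTITIONS web_logs;    -- now lists the partition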
In plain Hive, the full syntax is MSCK [REPAIR] TABLE table_name [ADD/DROP/SYNC PARTITIONS]; the default option is ADD PARTITIONS, and MSCK without the REPAIR keyword only reports the metadata mismatches instead of fixing them. The command scans a file system such as Amazon S3 or HDFS for Hive-compatible partitions that were added to or removed from the file system after the table was created but are not present in the Hive metastore, and synchronizes the metastore with the file system. This saves a lot of time because we do not need to add each partition manually (another way to recover partitions is ALTER TABLE ... RECOVER PARTITIONS, more on that at the end). What if the partition directories are empty? That may or may not work, so verify with SHOW PARTITIONS afterwards.

On Databricks the statement has a few extra behaviours worth knowing (applies to Databricks SQL and Databricks Runtime 10.0 and above): it updates Delta table metadata to the Unity Catalog service, and for non-Delta tables it repairs the table's partitions and updates the Hive metastore; on Databricks Runtime 12.1 and above the MSCK keyword is optional. With the Unity Catalog argument the command fails if the target table is not stored in Unity Catalog. If the table is cached, the command clears the table's cached data and all dependents that refer to it, and the cache fills the next time the table or its dependents are accessed. There is also a separate MSCK REPAIR PRIVILEGES statement, which removes all the privileges from all the users associated with the object; you use it to clean up residual access control left behind after objects have been dropped from the Hive metastore outside of Databricks SQL or Databricks Runtime.

When there is a large number of untracked partitions, there is a provision to run MSCK REPAIR TABLE batch-wise to avoid an OOME (out-of-memory error): set hive.msck.repair.batch.size, whose default value of zero means all partitions are processed at once, and do not attempt to run multiple MSCK REPAIR TABLE commands in parallel. Starting with Hive 1.3, MSCK will throw exceptions if directories with disallowed characters in partition values are found on HDFS; use the hive.msck.path.validation setting on the client to alter this behavior — "skip" will simply skip those directories, while "ignore" will try to create the partitions anyway (the old behavior).
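A short sketch of those two knobs in a session; the values are arbitrary and the table name is the one assumed earlier.

SET hive.msck.repair.batch.size=3000;  -- register partitions in batches of 3000 (0 = all at once)
SET hive.msck.path.validation=skip;    -- silently skip directories that do not match the partition format
MSCK REPAIR TABLE web_logs;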
Athena is where most of the reported failures come from. A typical question reads: "Hi, I am trying to execute MSCK REPAIR TABLE but then it returns FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. The query ID is 956b38ae-9f7e-4a4e-b0ac-eea63fd2e2e4." In the console the message usually looks like: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. null. This query ran against the "costfubar" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 856e146a-8b13-4175-8cd8-692eef6d3fa5. Sometimes Athena even returns the list of partitions it found but then fails to add them to the table in the AWS Glue Data Catalog. Several people report the failure as very intermittent, and that MSCK REPAIR TABLE behaves differently when executed via a Spark context versus the Athena console or boto3.

Here are some common causes of this behavior. It is mostly permission issues: review the IAM policies attached to the user or role that you're using to run MSCK REPAIR TABLE, because when you use the AWS Glue Data Catalog with Athena the policy must allow the glue:BatchCreatePartition action, and missing KMS permissions or s3:GetObject fail the same way; one user resolved it by attaching s3:ListBucket for the underlying bucket to the execution role. The Amazon S3 path name must also be in lower case — if the path is in camel case, such as s3://awsdoc-example-bucket/path/userId=1/, the partitions aren't added to the Glue Data Catalog, and the fix is to use lower case instead, s3://awsdoc-example-bucket/path/userid=1/. In one case the table was created correctly in Glue and the partitions showed up there, but the error turned out to be that the prefix in the S3 bucket was empty; in another, the repair surfaced HIVE_METASTORE_ERROR: com.facebook.presto.spi.PrestoException: Required Table Storage Descriptor is not populated. AWS's own guidance is at https://aws.amazon.com/premiumsupport/knowledge-center/athena-aws-glue-msck-repair-table/, and if running the MSCK REPAIR TABLE command doesn't resolve the issue, drop the table and recreate it. Theo Tolv's verdict is blunter: many guides, including the official Athena documentation, suggest using MSCK REPAIR TABLE to load partitions into a partitioned table, but the main problem is that this command is very, very inefficient, and on Athena you should almost never use it.
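A common alternative, which that critique points towards, is registering each partition explicitly. A hedged sketch for Athena, with an invented bucket, prefix and partition column, and an all-lower-case S3 path:

ALTER TABLE access_logs ADD IF NOT EXISTS
  PARTITION (dt = '2020-01-01')
  LOCATION 's3://example-bucket/access_logs/dt=2020-01-01/';

Each statement touches exactly one prefix, so it stays fast no matter how many partitions the table already has.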
Seeing it work makes the behaviour clearer. If the root directory of the table contains day-wise directories, then when you run msck repair table the partitions for day=20200101 and day=20200102 will be added automatically. The same applies when we point an external table at data that is already partitioned in HDFS: we have already partitioned data in year and month for orders under '/apps/hive/warehouse/maheshmogal.db/order_partition', and we can easily create a new table on that data — but would we see partitions directly in our new table? No; let us run the MSCK query and see if it adds the entries to our table, and afterwards we have all of our partitions showing up in our table.

For an unpartitioned table the question never arises: all the data of the table is stored in a single directory in HDFS. For example, a table T1 in the default database with no partitions has all its data under the HDFS path /user/hive/warehouse/T1/ (by default, managed tables store their data under /user/hive/warehouse/<database_name>/). Even when MSCK is never executed, queries against such a table work, since the metadata already has the HDFS location from which the files need to be read; a non-partitioned table simply has multiple files in the table location, and select * fetches them. On the other hand, a partitioned table will have multiple directories, one for each and every partition, and only the registered ones are visible to queries.

That brings us to a frequently asked thread, "Hive msck repair not working managed partition table": "I ran MSCK REPAIR TABLE factory; but the table is not giving the new partition content of the factory3 file, whereas if I run the alter command then it is showing the new partition data. Can I know where I am making a mistake while adding the partition for table factory? Can you please confirm why it did not work on a managed table?" The answers ask for the complete error message that was seen on the terminal upon running MSCK, note that there can be several reasons and that when the table was created as an external table the MSCK REPAIR worked as expected, and advise maintaining the partition directory structure, then checking the table metadata to see whether the partition is already present and adding only the new one. All of this only matters if you are adding directories to HDFS directly, or by other means, instead of through Hive.
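One plausible explanation for that symptom, sketched here with assumed names (the partition column plant is an invention), is the directory layout: MSCK only recognises directories in key=value form under the table location, while ALTER TABLE lets you attach any directory explicitly.

-- /user/hive/warehouse/factory/factory3/        -> ignored by MSCK (not key=value)
-- /user/hive/warehouse/factory/plant=factory3/  -> picked up by MSCK
ALTER TABLE factory ADD PARTITION (plant = 'factory3')
  LOCATION '/user/hive/warehouse/factory/factory3';  -- works regardless of the directory name
MSCK REPAIR TABLE factory;  -- only finds partitions laid out as plant=<value>

If the directories are renamed to the plant=<value> convention, MSCK picks them up like any other partition.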
But what if there is a genuine need to add hundreds of partitions? That is exactly what the command was designed for: MSCK REPAIR TABLE bulk-adds partitions that already exist on the filesystem but are not present in the metastore. It goes to the directory where the table is pointing, walks the tree of directories and subdirectories, checks the table metadata, and adds all the missing partitions. If a run aborts because of badly named directories, the client-side workaround discussed earlier can be applied before retrying: set hive.msck.path.validation=ignore; followed by msck repair table <table_name>;

One unrelated error that often gets mixed into these threads is worth separating out: creating a table and getting FAILED: SemanticException [Error 10035]: Column repeated in partitioning columns. That has nothing to do with MSCK — partition-by columns should not be repeated in the CREATE TABLE column definition, because partition columns are automatically added to the table's columns.
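A minimal sketch of that fix (the table and column names are invented):

-- Wrong: sale_date appears both as a regular column and as a partition column,
-- which raises SemanticException [Error 10035]: Column repeated in partitioning columns
-- CREATE TABLE sales (id INT, sale_date STRING) PARTITIONED BY (sale_date STRING);

-- Right: declare it only under PARTITIONED BY; Hive appends it to the table schema automatically
CREATE TABLE sales (id INT, amount DOUBLE)
PARTITIONED BY (sale_date STRING);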
A related scheduling question comes up constantly: "I have a daily ingestion of data into HDFS, and from the data in HDFS I generate Hive external tables partitioned by date — typically a field dt which represents a date. So if I add a new partition for a new day's ingestion, I have to run this command; can you confirm?" Yes, you need to run msck repair table daily once you have loaded the new partition into the HDFS location — you should run it whenever you have a new partition, a new date in this case. It is useful precisely in situations where new data has been added to a partitioned table but the metadata about those partitions has not. We also know we can add extra partitions using the ALTER TABLE command, one per day, so which is the better choice and why? The ALTER statement can simply be placed in the job where each day's logs are getting dumped, so it points the logs table at the new directory the moment it exists, while a daily MSCK is easier to bolt on but rescans everything. The same applies to a partitioned external table such as emp_part that stores its partitions outside the warehouse.

Not that the daily run is guaranteed to succeed — a minimal reproduction from one report:

hive> create external table foo (a int) partitioned by (date_key bigint) location 'hdfs:/tmp/foo';
OK
Time taken: 3.359 seconds
hive> msck repair table foo;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

When that happens, the diagnosis is the same as earlier in this article: pull the full error from the logs and check permissions and directory names.
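For the scheduled-ingestion pattern, a hedged sketch of the two options side by side (table, column and path are assumptions); IF NOT EXISTS keeps the ALTER idempotent, so the ingestion script or a cron job can re-run it safely.

-- Option 1: register only the new day's directory right after the files land
ALTER TABLE daily_logs ADD IF NOT EXISTS
  PARTITION (dt = '2020-01-02')
  LOCATION '/data/daily_logs/dt=2020-01-02';

-- Option 2: run a full repair once per day after ingestion (rescans every partition)
MSCK REPAIR TABLE daily_logs;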
The equivalent command on Amazon Elastic MapReduce (EMR)'s version of Hive is ALTER TABLE table_name RECOVER PARTITIONS, which is simply another way to recover partitions. The name msck itself stands for Hive's MetaStore Consistency checK: when files are written to HDFS directly, through the hadoop fs CLI or the HDFS API rather than through the Hive CLI, the metastore database (Derby or MySQL) and the file system drift apart, and you repair the discrepancy either with msck repair table table_name; or by registering the directory yourself with ALTER TABLE table_name ADD PARTITION (partCol = 'value1') LOCATION 'loc1';

As for the backup table we started with, the repair eventually worked successfully:

hive> use testsb;
OK
Time taken: 0.032 seconds
hive> msck repair table xxx_bk1;
xxx_bk1:payloc=YYYY/client_key=MISSDC/trxdate=20140109
..
Repair: Added partition to metastore xxx_bk1:payloc=0002/client_key=MISSDC/trxdate=20110105
..
Time taken: 16347.793 seconds, Fetched: 94156 row(s)

Over 16,000 seconds and 94,156 rows of repair output in a single pass — which is exactly why the earlier advice stands: use MSCK REPAIR for genuine bulk catch-ups, and plain ALTER TABLE ... ADD PARTITION for the day-to-day case. I hope this helps. See you in the next article.