MapReduce is a distributed data processing algorithm introduced by Google, and a programming model used for parallel computation of large data sets (often larger than 1 TB). Its strength comes from its ability to store and distribute huge data across plenty of inexpensive servers that operate in parallel. The model has two main functions, map and reduce, and this chapter looks at the MapReduce model in detail, in particular at how data in various formats, from simple text to structured binary objects, can be used with it.

A census makes a useful running example. The Mapper is the individual in charge of calculating the population of one region; the Input Splits are the states, or the divisions of a state; a Key-Value Pair is the output from each individual mapper, such as the key Rajasthan with the value 2; and the Reducers are the individuals who aggregate the actual result. This mapping of people to cities in parallel, followed by combining the results (reducing), is much more efficient than sending a single person to count every person in the empire serially.

Suppose a word-count query is packaged in the file wordcount.jar. Using the MapReduce framework, you can break the work into five map tasks, where each mapper works on one of five input files. If we are using Java to process the data on HDFS, we initiate a Driver class with a Job object; this Driver code is what we submit as the Job. The tasktracker then obtains a RecordReader for each split by invoking the getRecordReader() method on the InputFormat. Note that the number of splits requested is only a hint: the actual number of splits may differ. For every record the record reader produces one (key, value) pair, and the mapper runs once for each of these pairs.

Adding a combiner to a Map-Reduce program is optional, and a job must not depend on the combiner running, because there is no guarantee of its execution. When a combiner is used, the Reducer reduces the output obtained from the combiners and produces the final output, which is stored on HDFS (the Hadoop Distributed File System). The Reduce Phase is the phase where you aggregate your result: if there are n (key, value) pairs after the shuffling and sorting phase, the reducer runs n times and produces the final processed output.

MongoDB MapReduce is a data processing technique used for large data in MongoDB that yields useful aggregated results. After iterating over each document, the Emit function gives back data such as {A: [80, 90]}, {B: [99, 90]}, {C: [90]}. The third parameter of the call defines the output, that is, the collection where the result will be saved.
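To make the Driver/Job idea concrete, here is a minimal word-count driver sketch. It is an illustration rather than the article's own code: the class names WordCountDriver, WordCountMapper, and WordCountReducer are assumptions (the mapper and reducer are sketched later in this article), and the input and output paths are taken from the command line.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// The Driver: builds the Job object, wires in the mapper and reducer, and submits the job.
public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");

        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);     // hypothetical mapper, sketched below
        job.setReducerClass(WordCountReducer.class);   // hypothetical reducer, sketched below

        // Types of the final (key, value) pairs written to HDFS.
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. the directory holding sample.txt
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist before the run

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

A run would then look roughly like hadoop jar wordcount.jar WordCountDriver /input /output, which is what submitting the query packaged in wordcount.jar to the cluster amounts to.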
MapReduce is a programming model, or pattern, within the Hadoop framework that is used to access big data stored in the Hadoop Distributed File System (HDFS). It is a framework with which we can write applications that process huge amounts of data in parallel, on large clusters of commodity hardware, in a reliable manner; put simply, it helps Java programs perform parallel computation on data expressed as key-value pairs. The developer writes the logic that fulfills the business requirement. As per the MongoDB documentation, map-reduce is likewise a data processing paradigm for condensing large volumes of data into useful aggregated results.

Map-Reduce is a programming model suited to processing large data sets over distributed systems in Hadoop. To perform analysis on logs that are bulky, with millions of records, MapReduce is an apt programming model, and it also fits tasks such as finding the maximum marks in each section of an exam. To process such data we use the Map-Reduce framework, and we initiate the Driver code to take advantage of it. MapReduce jobs can take anywhere from tens of seconds to hours to run, which is why they are treated as long-running batch jobs.

The Map task takes input data and converts it into a data set that can be computed as key-value pairs; the map task is done by means of the Mapper class and the reduce task by means of the Reducer class. For simplification, let's assume that the Hadoop framework runs just four mappers. Note that the second input pair has a byte offset of 26, because there are 25 characters in the first line and the newline character (\n) is also counted. The mapper produces an output for each (key, value) pair provided by the record reader. This intermediate output is stored on the local disk and then shuffled to the reducer; the Shuffle Phase is the phase in which data is copied from the mappers to the reducers. Moving such a large dataset over a 1 Gbps link would take far too long, which is why the computation is moved to the data rather than the other way around. The Reducer performs reducing tasks such as aggregation and other compositional operations, and the final output is stored on HDFS in a part-r-00000 file (created by default). The Reporter lets the Map-Reduce application report progress and update counters and status information. This, in short, is the key essence of MapReduce types.

Tooling can simplify this work. Talend Studio, for example, makes it easier to create jobs that run on the Hadoop cluster and to set parameters such as the mapper and reducer classes and the input and output formats. To experiment with use cases like the ones listed above, you can download a trial version of Talend Studio.
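The map side can be made concrete with a minimal word-count style mapper. This is a sketch under the usual assumptions, not code from the article: the input key is the byte offset supplied by the record reader, the input value is one line of the file, and the class name is the hypothetical WordCountMapper referenced by the driver above.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Input:  (byte offset of the line, line text) pairs from the record reader.
// Output: (word, 1) intermediate pairs, which are later shuffled to the reducers.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);   // one intermediate (key, value) pair per word
        }
    }
}
```

The mapper is called once per record, so a 100-line split produces 100 map() calls, each of which may emit many intermediate pairs.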
MapReduce is a software framework and programming model used for processing huge amounts of data. When we process big data, the data is located on multiple commodity machines with the help of HDFS, and this is where Map-Reduce comes into the picture: the job is split into parts that are then made available to the map and reduce tasks, and there can be any number of map and reduce tasks, depending on the requirement. As an analogy, you can think of map and reduce tasks as the way a census was conducted in Roman times: the census bureau dispatched its people to each city in the empire, each census taker counted the number of people in that city, and the results were returned to the capital. Up to the counting step, that is what the map() function does; combining the returned results is the reduce step.

The output of the mapper acts as the input for the Reducer, which performs sorting and aggregation on the data and produces the final output. Often, the combiner class is set to the reducer class itself, because the functions applied in the reduce step are cumulative and associative. After combining, the partitioner allocates the data from the combiners to the reducers, and each key derives its partition through a typical hash function.

The Java API mirrors this structure: an InputSplit represents the data to be processed by a single Mapper, and many intricate details of the Java APIs become clear only once one dives into programming. A few operational details are worth noting. Estimating progress for reduce tasks is a little more complex than for map tasks, but the system can still estimate the proportion of the reduce input that has been processed. When speculative execution is enabled, the commit protocol ensures that only one of the duplicate tasks is committed and the other is aborted. Hadoop Streaming runs a special map or reduce task whose purpose is to launch a user-supplied executable and communicate with it. To perform map-reduce operations, MongoDB provides the mapReduce database command. Finally, tooling such as Talend provides a ready framework for bringing together the various tools used in the Hadoop ecosystem, such as Hive, Pig, Flume, Kafka, and HBase.
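The point that the combiner class is often the reducer class itself can be shown with a word-count style reducer. The class name and summing logic below are illustrative assumptions; because integer addition is cumulative and associative, the same class can safely be registered as both the combiner and the reducer.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sums the counts for one key. Safe to reuse as a combiner as well,
// because addition is cumulative and associative.
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
```

In the driver this would typically be wired up with job.setCombinerClass(WordCountReducer.class) alongside job.setReducerClass(WordCountReducer.class), while remembering that the framework may run the combiner zero, one, or several times.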
MapReduce is a programming paradigm that enables massive scalability across hundreds or thousands of servers in a Hadoop cluster, and this chapter takes you through its operation in the Hadoop framework using Java. A Map-Reduce program has three pieces: the Driver code, the Mapper (for transformation), and the Reducer (for aggregation). The framework also ships predefined Mapper and Reducer classes that developers extend or modify to suit their organization's requirements, and the types of keys and values differ based on the use case. The MapReduce task itself is mainly divided into two phases, Map and Reduce: the output of the map task is consumed by the reduce task, and the output of the reducer gives the desired result.

The value input to the mapper is one record of the log file. When we process very large datasets with Hadoop, a combiner becomes very useful because it cuts the volume of intermediate data and so enhances overall performance; to minimize network congestion we place the combiner between the Mapper and the Reducer. The number of partitioners is equal to the number of reducers. As an example, the input to three reducers might look like this: Reducer 1: {3, 2, 3, 1}, Reducer 2: {1, 2, 1, 1}, Reducer 3: {1, 1, 2}. Output formats for relational databases and for HBase are handled by DBOutputFormat.

On the cluster side, the Job Tracker traps our request and keeps track of it; if the job cannot be submitted, an error is thrown back to the MapReduce program. Once the resource manager's scheduler assigns resources for a container on a particular node, the container is started up by the application master, which contacts the node manager. A running task keeps track of its own progress, for example how much of the task has completed, and these statuses change over the course of the job. Crucially, instead of bringing sample.txt to the local computer, we send the query (the code) to the data. Now, let us move back to our sample.txt file with the same content. www.mapreduce.org has some great resources on state-of-the-art MapReduce research questions, as well as a good introductory "What is MapReduce" page.
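Since the number of partitions equals the number of reducers, and each key is routed by a hash of the key, the partitioning step can be sketched as a small custom Partitioner. The class below is an assumption for illustration; it simply mirrors what Hadoop's default HashPartitioner does.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Routes each intermediate (key, value) pair to one of numPartitions reducers
// using a hash of the key, mirroring Hadoop's default HashPartitioner.
public class WordPartitioner extends Partitioner<Text, IntWritable> {

    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        // Mask off the sign bit so the modulo result is never negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```

It would be registered with job.setPartitionerClass(WordPartitioner.class), and numPartitions is whatever was set with job.setNumReduceTasks(), which is why the reducer-input example above splits the keys across exactly three reducers.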
When you are dealing with big data, serial processing is no longer of any use, and when the data is stored on multiple nodes we need a processing framework that can copy the program to wherever the data is present. Hadoop does exactly that: it copies the program to all the machines that hold the data and processes the pieces in parallel. Hadoop allows data to be stored in a distributed form, and to keep track of a request we use the Job Tracker, a master service. If a task tracker fails, the Job Tracker communicates with the task tracker holding another copy of the same data and directs it to run the desired code there. For example, a Hadoop cluster with 20,000 inexpensive commodity servers and a 256 MB block of data on each can process around 5 TB of data at the same time.

So let's break MapReduce into its two main components. In both steps, individual elements are broken down into tuples of key-value pairs, and mapping is the core technique of processing a list of data elements that come in such pairs. Assume a sample.txt file containing a few lines of text, say 8 lines in total; the file will then be divided into four equal input splits, each containing 2 lines. The input format presents a byte-oriented view of the input, and it is the responsibility of the job's RecordReader to process this and present a record-oriented view. The partition phase takes place after the Map phase and before the Reduce phase: it controls the partitioning of the keys of the intermediate map outputs, deciding how the data is presented to the reducers and assigning each key to a particular reducer. After the completion of the shuffling and sorting phase, the resultant output is sent to the reducer. In log analysis, the objective is to isolate the use cases that are most prone to errors and to take appropriate action. On the MongoDB side, MongoDB uses the mapReduce command for map-reduce operations; in the query discussed above, the map and reduce functions have already been defined, and there we calculate the sum of the ranks present inside each particular age group.
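Because the article keeps returning to MongoDB's mapReduce command, here is a small sketch using the synchronous MongoDB Java driver. The database, collection, and field names are made up for illustration, the JavaScript map and reduce bodies mirror the student/score emit example above, and the exact driver calls should be read as an assumption rather than the article's own code (recent drivers mark this API as legacy).

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class StudentScoresMapReduce {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> scores =
                    client.getDatabase("school").getCollection("scores");

            // Map: emit (student, score) for every document in the collection.
            String map = "function() { emit(this.student, this.score); }";
            // Reduce: combine all scores emitted for one student.
            String reduce = "function(key, values) { return Array.sum(values); }";

            // The third piece of a mapReduce call is where the output goes;
            // here the results are simply iterated inline instead of being
            // written to a named output collection.
            for (Document result : scores.mapReduce(map, reduce)) {
                System.out.println(result.toJson());
            }
        }
    }
}
```

The first two arguments are the map and reduce functions, exactly as in the query discussed in the text; the output target is what the article refers to as the third parameter.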
MapReduce was developed at Google and described in the 2004 paper "MapReduce: Simplified Data Processing on Large Clusters." Initially used by Google for analyzing its search results, it gained massive popularity due to its ability to split and process terabytes of data in parallel, achieving quicker results. Within Hadoop, HDFS stores the data and the second component, MapReduce, is responsible for processing it: Hadoop uses the MapReduce programming model for the input and output of the map and reduce functions, represented as key-value pairs, and the term Map Reduce covers both the Map Phase and the Reducer Phase. Minimally, applications specify the input/output locations and supply map and reduce functions via implementations of the appropriate interfaces and/or abstract classes; in the older API, for example, the Reducer class extends MapReduceBase and implements the Reducer interface. In the end, the framework aggregates the data from multiple servers and returns a consolidated output to the application.

Consider two motivating examples. First, log analysis: if an "out of inventory" exception is thrown often, does that mean the inventory calculation service has to be improved, or do the inventory stocks need to be increased for certain products? Multiple mappers can process such logs simultaneously; one mapper could handle a day's log, or a subset of it, based on the log size and the memory block available on the mapper server. Second, aggregation across files: assume you have five files, each containing two columns that represent, in Hadoop terms, a key and a value, namely a city and the temperature recorded in that city on various measurement days. A 10 TB input like this is first distributed across multiple nodes on Hadoop with HDFS, the intermediate records associated with a given output key are passed to the Reducer, and the final output in our earlier query example is stored in the directory result.output, as specified in the query code. Calculating the population of a large country is, similarly, not an easy task for a single person, which is exactly why the work is split. The examples here are simple ones, but when terabytes of data are involved, the combiner's improvement to bandwidth is significant.

Operationally, the jobtracker schedules map tasks for the tasktrackers using storage location, that is, data locality, and the master is responsible for scheduling the jobs' component tasks on the slaves, monitoring them, and re-executing failed tasks. Each submitted job includes the job configuration, any files from the distributed cache, and the JAR file. While reading its input the mapper does not concern itself with the underlying format of the file, and this style of processing generally pays off on large data sets only. When Hadoop Streaming is used, the framework communicates with the user-supplied process through standard input and output streams. Vendors such as IBM and Cloudera offer enterprise-grade Hadoop distributions with an integrated ecosystem of products and services for analytics at scale.
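For the five-file city/temperature example, the reduce function only has to keep the largest value seen for each city. The sketch below is an illustrative assumption of how that reducer could look; the class name and the integer temperature type are not taken from the text.

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// For each city key, emit the maximum temperature seen across all input files.
public class MaxTemperatureReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text city, Iterable<IntWritable> temperatures, Context context)
            throws IOException, InterruptedException {
        int max = Integer.MIN_VALUE;
        for (IntWritable t : temperatures) {
            max = Math.max(max, t.get());
        }
        context.write(city, new IntWritable(max));
    }
}
```

Like the summing reducer, this one can also serve as its own combiner, because taking a maximum is commutative and associative, which is exactly where the bandwidth savings mentioned above come from.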
In MongoDB, the mapReduce command is used when a large volume of collected data has to be processed; MapReduce in MongoDB, like its Hadoop counterpart, is intended for large data sets and provides analytical capabilities for analyzing huge volumes of complex data. Hadoop also handles unstructured data, which often comes in textual format.

In MapReduce, the role of the Mapper class is to map the input key-value pairs to a set of intermediate key-value pairs, and the combiner always works between the Mapper and the Reducer; the outputs of the other combiners (Combiner 2, Combiner 3, and Combiner 4 in our four-mapper example) follow the same pattern as the first. When a job is submitted, its output specification is checked, and the framework then runs the map and reduce tasks. The general idea of the map and reduce functions in Hadoop is that the input key and value types, represented by K1 and V1, differ from the output pair types K2 and V2; therefore, the Mapper and Reducer must be parameterized with their types. This is, in short, the crux of MapReduce types and formats.
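The K1/V1 versus K2/V2 remark can be made concrete with the generic signatures of the new-API classes. The concrete types below follow the word-count sketches earlier in the article and are illustrative; the one real constraint shown is that the mapper's output types must match the reducer's input types.

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountTypes {

    // Mapper<K1, V1, K2, V2>:
    //   K1 = LongWritable (byte offset of the line), V1 = Text (the line itself)
    //   K2 = Text (the word),                        V2 = IntWritable (the count 1)
    public static class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> { }

    // Reducer<K2, V2, K3, V3>: its input types must equal the mapper's output types.
    //   K2 = Text, V2 = IntWritable, K3 = Text, V3 = IntWritable
    public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> { }
}
```

Changing any of these type parameters means updating the corresponding setMapOutputKeyClass/setMapOutputValueClass and setOutputKeyClass/setOutputValueClass calls in the driver so that the declared types and the configured types stay in agreement.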