title: Map-Reduce-Filter Pattern for Processing Collection Elements
date: 2022-07-04 10:37:00
toc: false
index_img: http://api.btstu.cn/sjbz/?lx=m_dongman&cid=9
category:
- Go
tags:
- Users
- Traversal
- Computation
- Statistics
- Strings
- Functions
In day-to-day development, it is common to process collection types such as arrays, slices, and maps by iterating over them — for example, extracting every age value from a slice of user maps and then summing them. The conventional approach is to loop over the slice, pull the age field out of each user map's key-value pairs, and accumulate the values one by one.
For a simple one-off scenario this implementation is fine, but it is typical procedural thinking and the code is barely reusable: every similar problem forces you to write the same boilerplate again. If you need to sum a different field, or change the type-conversion logic, you have to rewrite the whole implementation.
In functional programming, we can make this functionality more elegant and reusable through the Map-Reduce technique.
Map-Reduce is not a single operation; it is carried out in two steps: Map and Reduce. Applied to our example, the model works like this: first convert the slice of user maps into a slice of strings (Map — literally a one-to-one mapping of each element), then convert each converted element to an integer and accumulate them (Reduce — literally reducing many collection elements, through iterative processing, down to one value).
Sometimes, to make the Map-Reduce code more robust (by excluding invalid field values) or to run statistics only on a specified range of the data, we can add a Filter in front of Map-Reduce to screen the collection elements first.
However, calling the Map, Reduce, and Filter functions separately is not very elegant. We can nest them layer by layer in the style of the decorator pattern, or use the pipeline pattern to make the call chain more readable and elegant.