title: 【Reprint】The @Async Annotation Is Just Like That.
date: 2021-09-09 09:46:00
comment: false
toc: true
category:
- Java
tags:
- Reprint
- Java
- Async
- Annotation
- Thread Pool
- Thread
- Asynchronous
- Logic
This article is reprinted from: The @Async Annotation Is Just Like That. - Juejin
Hello, I am why.
I previously wrote some articles about thread pools, and then some classmates dug around and found that I had never written an article about the @Async annotation, so they came to ask me:
Yes, I admit it.
The reason I don't like this annotation is that I have never used it.
I am used to using custom thread pools to do some asynchronous logic, and I have been doing it this way for many years.
So if it's a project I lead, you definitely won't see the @Async annotation in the project.
Have I seen the @Async annotation before?
Of course, I have seen it. Some friends like to use this annotation.
One annotation solves asynchronous development, how nice.
I don’t know if those who use this annotation know its principles; I certainly don’t.
Recently, when developing, I introduced a component and found that in the methods called, some places used this annotation.
Since it was used this time, let’s study it.
First, it should be noted that this article will not cover knowledge related to thread pools.
It only describes how I learned about this annotation that I previously knew nothing about.
Create a Demo#
I wonder how everyone would approach this situation.
But I believe that no matter what angle you approach it from, you will eventually end up in the source code.
So, I usually start by creating a Demo.
The Demo is very simple, just three classes.
First is the startup class, which needs no explanation:
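The original post only shows a screenshot, so here is a minimal sketch of what that startup class might look like (class and package names are my own assumptions):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```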
Then create a service:
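Again the original code was a screenshot; below is a minimal sketch of such a service, with the method name taken from the article and everything else assumed:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class AsyncService {

    private static final Logger log = LoggerFactory.getLogger(AsyncService.class);

    @Async
    public void syncSay() {
        // the default log pattern prints the thread name, which is how we can
        // check whether this call really runs asynchronously
        log.info("syncSay");
    }
}
```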
The syncSay method in this service is annotated with @Async.
Finally, create a Controller to call it, and that's it:
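A minimal sketch of that Controller (URL path and class name are assumptions):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AsyncController {

    private final AsyncService asyncService;

    public AsyncController(AsyncService asyncService) {
        this.asyncService = asyncService;
    }

    @GetMapping("/say")
    public String say() {
        asyncService.syncSay();
        return "ok";
    }
}
```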
The Demo is set up, and if it takes you more than 5 minutes, I’ll admit defeat.
Then, start the project, call the interface, and check the logs:
Wow, from the thread name, this doesn’t seem asynchronous at all?
Why is it still a Tomcat thread?
Thus, I encountered the first problem on my research journey: the @Async annotation did not take effect.
Why Is It Not Effective?#
Why is it not effective?
I’m also confused. I said I knew nothing about this annotation before, so how would I know?
So what do you do when you encounter this problem?
Of course, you practice browser-oriented programming: you search for it!
In this case, if I analyze from the source code why it’s not effective, I could definitely find the reason.
However, if I program towards the browser, I can find these two pieces of information in just 30 seconds:
Reasons for failure:

- The @SpringBootApplication startup class does not have the @EnableAsync annotation added.
- The call did not go through Spring's proxy class. Both the @Transactional and @Async annotations are implemented based on Spring AOP, and AOP is implemented with dynamic proxies. So the reason for the annotation's failure is obvious: the method may have been called by the object itself rather than by the proxy object, because it was not managed by the Spring container.
Clearly, my situation fits the first case, as I did not add the @EnableAsync annotation.
I am also very interested in the other reason, but my primary task now is to set up the Demo, so I cannot be distracted by other information.
Many classmates, when searching with a problem, originally asked why the @Async annotation did not take effect, but slowly got sidetracked, and fifteen minutes later, the question gradually evolved into the Spring Boot startup process.
Half an hour later, the webpage displays some must-know interview scripts...
What I mean is, when searching for a problem, focus on the problem. During the process of searching for a problem, there will definitely be other questions that arise from this problem that pique your interest. However, record them down and do not let the questions diverge.
This principle is similar to looking at the source code with a question in mind; as you look, you might not even know what your original question was.
Alright, back to the point.
I added the annotation to the startup class:
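So the startup class now looks roughly like this, the key line being @EnableAsync:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableAsync;

@EnableAsync
@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```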
Call again:
You can see that the thread name has changed, indicating that it really works.
Now my Demo is set up, and I can start looking for angles to explore.
From the logs above, I can also see that by default, there is a thread pool with the thread name prefix task- helping me execute tasks.
Speaking of thread pools, I need to know the relevant configuration of this thread pool to feel at ease.
So how can I find out?
First, Stress Test It#
Actually, a normal person's thought process at this point should be to look through the source code and find the place where the thread pool is injected.
But I am a bit abnormal; I am too lazy to look through the source code, and I want it to expose itself to me.
How can I make it expose itself?
Relying on my understanding of thread pools, my first thought is to stress test this thread pool.
Stress it until it can’t handle the tasks, leading it to the rejection logic; normally, it should throw an exception, right?
So, I slightly modified the program:
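The article only shows screenshots, so the following is my guess at the modification based on the description: the interface takes a num parameter and fires that many async tasks, each of which sleeps for a bit so the pool stays busy (it builds on the Demo sketches above).

```java
// Controller: fire num tasks in a loop (num comes from the request)
@GetMapping("/say")
public String say(int num) {
    for (int i = 0; i < num; i++) {
        asyncService.syncSay(i);
    }
    return "ok";
}

// Service: each task sleeps for a second and then logs
@Async
public void syncSay(int i) {
    try {
        TimeUnit.SECONDS.sleep(1);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    log.info("hi:{}", i);
}
```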
I thought I would just go all out:
The result...
It actually...
Accepted everything without throwing an exception?
The logs printed several lines per second, and it was quite cheerful:
Although the expected rejection exception did not occur, I still noticed a few clues from the logs.
For example, I found that the task- thread number only reached a maximum of 8:
Friends, what does this mean?
Doesn’t it mean that the core thread count configuration of the thread pool I am looking for is 8?
What, you ask me why it can’t be the maximum thread count?
Is that possible?
Of course, it is possible. But if I fire off 10,000 tasks without triggering the thread pool's rejection policy, would that really mean I just happened to hit the maximum thread count exactly?
This means that the thread pool configuration is a queue length of 9992 and a maximum thread count of 8?
That’s too coincidental and unreasonable, right?
So I think the core thread count configuration is 8, and the queue length should be Integer.MAX_VALUE.
To verify my guess, I modified the request like this:
num=ten million.
By observing the heap memory usage through jconsole:
It skyrocketed, and clicking the [Execute GC] button did not alleviate it at all.
This also indirectly proves that the tasks might have all entered the queue, causing the memory to soar.
Although I still do not know its configuration, after the recent black-box testing, I have legitimate reasons to suspect:
The default thread pool has a risk of causing memory overflow.
However, this also means that my idea of making it throw an exception to expose itself has failed.
Dive into the Source Code#
Since the previous thought process didn’t work, let’s honestly dive into the source code.
I started from this annotation:
After clicking into this annotation, I found a few short English sentences, from which I obtained a key piece of information:
Mainly focus on the part I underlined.
In terms of target method signatures, any parameter types are supported.
To add a note: when it mentions the target method, the concept of a proxy object should immediately come to mind.
This sentence is easy to understand and even feels like a bit of a tautology.
However, it is followed by a However:
However, the return type is constrained to either void or Future.
Constrained means restricted, limited.
This sentence means: the return type is limited to void or Future.
What does that mean?
What if I want to return a String?
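A sketch of what I mean, changing the demo method to return a String and printing what the caller actually gets back (names reused from the Demo above):

```java
// Service method now declares a String return type
@Async
public String syncSay(int i) {
    return "hi:" + i;
}

// Caller side
String result = asyncService.syncSay(1);
System.out.println(result);
```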
WTF, the printed result is actually null!?
If I return an object here, wouldn’t it easily lead to a null pointer exception?
After reading the comments on the annotation, I discovered the second hidden pit:
If a method annotated with @Async has a return value, it can only be void or Future.
Void is not a problem, let’s talk about Future.
Look at the other sentence I underlined:
it will have to return a temporary {@code Future} handle that just passes a value through: e.g. Spring's {@link AsyncResult}
There's the word temporary in there, a CET-4 vocabulary word that should be easy to recognize, meaning short-lived or provisional.
So it means that if you want to return a value, you should wrap it in an AsyncResult object, which plays the role of that temporary handle.
Just like this:
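For example, a minimal sketch of wrapping the return value in Spring's AsyncResult (method name reused from the Demo; handle the checked exceptions of get() as appropriate):

```java
import java.util.concurrent.Future;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;

@Async
public Future<String> syncSay(int i) {
    // do the time-consuming work here, then hand the value back through the temporary handle
    return new AsyncResult<>("hi:" + i);
}

// Caller side: block on the Future only when the value is actually needed
// Future<String> future = asyncService.syncSay(1);
// String result = future.get(); // throws InterruptedException / ExecutionException
```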
Next, let’s focus on the value attribute of the annotation:
This attribute, according to the comment above, should be filled with the name of a thread pool bean, which is equivalent to specifying the thread pool.
I don’t know if I understood that correctly; I’ll write a method to verify it later.
Alright, up to now, I have summarized the information.
- I previously knew nothing about this annotation, but now I have a Demo. While setting up the Demo, I found that in addition to the @Async annotation, I also need to add the @EnableAsync annotation, for example on the startup class.
- Then I black-box tested the default thread pool, and I suspect its core thread count is 8 with an unbounded queue. There is a risk of memory overflow.
- By reading the comments on @Async, I found that the return value can only be void or Future; otherwise, no error is thrown, but the returned value will be null, which carries a null pointer risk.
- The @Async annotation has a value attribute, which, according to the comment, should allow a custom thread pool to be specified.
Next, I will prioritize the questions I want to explore, focusing only on issues related to @Async:

- What is the specific configuration of the default thread pool?
- How does the source code ensure that only void and Future are supported?
- What is the purpose of the value attribute?
What Is the Specific Configuration?#
Finding the specific configuration is actually a quick process.
Because the value attribute of this annotation is simply too friendly:
There are five places where it is called, four of which are comments.
The effective call is just this one place, so let’s set a breakpoint first:
org.springframework.scheduling.annotation.AnnotationAsyncExecutionInterceptor#getExecutorQualifier
After initiating the call, I indeed hit the breakpoint:
Following the breakpoint down, I will arrive at this place:
org.springframework.aop.interceptor.AsyncExecutionAspectSupport#determineAsyncExecutor
The code structure here is very clear.
The part numbered ① is where it retrieves the value of the @Async annotation on the corresponding method. This value is actually the bean name; if it is not empty, it retrieves the corresponding bean from the Spring container.
If the value is empty, as in our Demo, it will go to the part numbered ②.
This is the default thread pool I am looking for.
Finally, whether it's the default thread pool or a custom thread pool from the Spring container, a mapping between methods and thread pools is maintained in a map, at the method level.
That is, the executors in the code at step ③ is a map:
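From memory, the field in AsyncExecutionAspectSupport looks roughly like this (paraphrased; check the source of your Spring version for the exact declaration):

```java
// one entry per @Async method: which executor should run it
private final Map<Method, AsyncTaskExecutor> executors = new ConcurrentHashMap<>(16);
```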
So, what I want to find is the logic at step ②.
This mainly involves a defaultExecutor object:
This thing is functional programming, so if you don’t know what it does, debugging it might be a bit confusing:
I suggest you quickly learn it; you can get started in 10 minutes.
Eventually, you will debug to this place:
org.springframework.aop.interceptor.AsyncExecutionAspectSupport#getDefaultExecutor
This code is quite interesting; it retrieves a default thread pool-related Bean from the BeanFactory. The process is simple, and the logs are printed clearly, so I won’t elaborate.
However, I want to point out an interesting aspect: I don't know whether, looking at this code, you notice a hint of the parent-delegation flavor.
It uses exceptions to drive the logic, with exception handling nested inside exception handling.
This “garbage” code directly violates two major points in the Alibaba development specifications:
In the source code, this is considered good code.
In business processes, this violates the specifications.
So, to say a side note.
I personally feel that the Alibaba development specifications are actually a best practice for our colleagues writing business code.
However, when this standard is applied to middleware, foundational components, and framework source code, there are some symptoms of incompatibility. This is subjective, of course. I feel the Alibaba development specification's IDEA plugin is really useful for programmers like me who write CRUD applications.
I've digressed enough; let's return to the thread pool we obtained:
Didn’t I find what I wanted? I can see all the relevant parameters of this thread pool.
This also confirms my previous guess:
I think the core thread count configuration is 8, and the queue length should be Integer.MAX_VALUE.
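For reference, here is a sketch of a ThreadPoolTaskExecutor configured the way I read Spring Boot's defaults (TaskExecutionProperties) to be; treat the exact values as my assumption and verify them against your Spring Boot version:

```java
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

ThreadPoolTaskExecutor defaultLikeExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(8);                  // the "8" seen in the logs
    executor.setMaxPoolSize(Integer.MAX_VALUE);   // effectively unbounded
    executor.setQueueCapacity(Integer.MAX_VALUE); // unbounded queue, hence the memory risk above
    executor.setKeepAliveSeconds(60);
    executor.setAllowCoreThreadTimeOut(true);
    executor.setThreadNamePrefix("task-");
    executor.initialize();
    return executor;
}
```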
But now, I have directly obtained this thread pool's Bean from the BeanFactory. When was this Bean injected?
Friends, isn’t this simple?
I have already obtained the beanName of this Bean, which is applicationTaskExecutor. As long as you are somewhat familiar with the Spring bean retrieval process, you should know to set a breakpoint here and debug slowly:
org.springframework.beans.factory.support.AbstractBeanFactory#getBean(java.lang.String)
What if you don’t know to set a breakpoint here to debug?
Let’s say you want a simple and straightforward method; you can just search for the beanName in the code, and it will come out.
Simple and effective:
org.springframework.boot.autoconfigure.task.TaskExecutionAutoConfiguration
You can find this class, set a breakpoint, and start debugging.
Let’s say I want to do something a bit more clever.
Suppose I don’t even know the beanName, but I know it must be a thread pool managed by Spring.
Then I can retrieve all the thread pools managed by Spring in the project; surely one of them is what I’m looking for, right?
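A minimal sketch of that "dump every Spring-managed pool" idea, assuming the ApplicationContext can be injected somewhere convenient, for example into the test Controller:

```java
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Autowired
private ApplicationContext applicationContext;

public void dumpThreadPools() {
    Map<String, ThreadPoolTaskExecutor> pools =
            applicationContext.getBeansOfType(ThreadPoolTaskExecutor.class);
    pools.forEach((name, pool) -> System.out.println(
            name + " -> core=" + pool.getCorePoolSize()
                 + ", max=" + pool.getMaxPoolSize()));
}
```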
Look at the screenshot below; isn’t the current bean the applicationTaskExecutor I’m looking for?
These are some alternative methods; knowing them is good, as sometimes multiple troubleshooting methods are useful.
Support for Return Types#
We’ve finished the first question about configuration.
Next, let’s look at another question we raised earlier:
How does the source code ensure that only void and Future are supported?
The answer is hidden in this method:
org.springframework.aop.interceptor.AsyncExecutionInterceptor#invoke
The part marked ① is where the thread pool corresponding to the method is looked up from the map we analyzed earlier.
After obtaining the thread pool, we arrive at the part marked ②, which is where a Callable object is encapsulated.
So, what is encapsulated into the Callable object?
Let’s put that question aside for now; let’s continue to focus on our question, or else the questions will keep multiplying.
At the part marked ③, doSubmit, as the name suggests, this is where the task is executed.
org.springframework.aop.interceptor.AsyncExecutionAspectSupport#doSubmit
Actually, this is where I want to find the answer.
You see, the method's returnType parameter is String, because the method being invoked here is the asyncSay method annotated with @Async, which returns a String.
If you don’t believe it, I can show you the previous call stack, where you can see the specific method:
So, now you can see what the doSubmit method does with this method’s return type.
There are four branches in total; the first three check whether it is of Future type.
Both ListenableFuture and CompletableFuture are subtypes of Future.
These two classes are also mentioned in the comments of the @Async annotated method:
Our program reaches the last else, which means the return value is not of Future type.
So, what does it do?
It directly submits the task to the thread pool and returns null.
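To make the four branches concrete, here is a rough paraphrase of the structure described above (not the verbatim Spring source, just my reading of it):

```java
// paraphrase of AsyncExecutionAspectSupport#doSubmit: structure only, not verbatim
protected Object doSubmit(Callable<Object> task, AsyncTaskExecutor executor, Class<?> returnType) {
    if (CompletableFuture.class.isAssignableFrom(returnType)) {
        // branch 1: the method declares CompletableFuture
        return CompletableFuture.supplyAsync(() -> {
            try {
                return task.call();
            } catch (Throwable ex) {
                throw new CompletionException(ex);
            }
        }, executor);
    } else if (ListenableFuture.class.isAssignableFrom(returnType)) {
        // branch 2: the method declares Spring's ListenableFuture
        return ((AsyncListenableTaskExecutor) executor).submitListenable(task);
    } else if (Future.class.isAssignableFrom(returnType)) {
        // branch 3: a plain Future
        return executor.submit(task);
    } else {
        // branch 4: anything else: submit the task anyway, but the caller gets null
        executor.submit(task);
        return null;
    }
}
```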
Doesn’t this lead to a null pointer exception?
At this point, we’ve also solved the question:
How does the source code ensure that only void and Future are supported?
The reasoning is quite simple; when we normally use a thread pool, don’t we only have these two return types?
When submitting using submit, it returns a Future, encapsulating the result inside the Future:
When submitting using execute, there is no return value:
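A tiny sketch of that difference with a plain JDK thread pool:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitVsExecute {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // submit: the result comes back wrapped in a Future
        Future<String> future = pool.submit(() -> "hello");
        System.out.println(future.get()); // blocks until the task finishes

        // execute: fire-and-forget, no return value at all
        pool.execute(() -> System.out.println("hello again"));

        pool.shutdown();
    }
}
```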
And the framework helps us achieve asynchrony through a simple annotation; no matter how fancy it gets, it still has to adhere to the underlying principles of thread pool submission.
So, why does the source code only support void and Future return types?
Because the underlying thread pool only supports these two types of returns.
However, its approach is a bit crude; it simply treats the return value of any other type as null.
You can't complain, though; after all, you didn't read the instructions, that is, the Javadoc comments.
Additionally, I found a small optimization point here:
When it reaches this method, the return value is already clearly null.
Why still use executor.submit(task) to submit the task?
Using execute would suffice.
What's the difference, you ask?
Didn’t I just mention that the submit method has a return value?
Even if you don't use the return value, it still constructs a Future object to return.
However, even if it’s constructed, it’s not used.
So, it’s better to use execute for submission.
By generating one less Future object, can that be considered optimization?
To be honest, it's not a significant optimization, but being able to say you've optimized the Spring source code is enough to show off.
Next, let’s talk about the part we previously set aside; what exactly is encapsulated at the part marked ②?
Actually, you could probably guess this with your toes:
The reason I’m bringing this up separately is to prove to you that the result returned here is the actual value returned by our method.
It just checks that the type is not Future, so it doesn't process it; for example, I actually returned the string hi:1, but it didn't meet the conditions and was discarded:
Moreover, IDEA is quite intelligent; it will warn you that the return value here is problematic:
It even offers the fix for you; you just need to click it, and it modifies the method for you.
Now we are very clear about why this change is necessary.
We understand both the phenomenon and the reason behind it.
The value Attribute of the @Async Annotation#
Next, let’s see what the value attribute of the @Async annotation is for.
Actually, I’ve already subtly mentioned it; I just skimmed over it in one sentence, which is this part:
Earlier, I mentioned that the part marked ① retrieves the value of the @Async annotation on the corresponding method. This value is actually the bean name; if it is not empty, it retrieves the corresponding bean from the Spring container.
Then I directly analyzed the part marked ②.
Now let’s take another look at the part marked ①.
I'll also put together a test case to verify my idea.
After all, the value should be the name of a Spring bean, and this bean must be a thread pool object, no doubt about it.
So, I modified the Demo program like this:
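A minimal sketch of that modification, reusing the bean name whyThreadPool from the article (all other values and names are assumed):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class ThreadPoolConfig {

    @Bean("whyThreadPool")
    public ThreadPoolTaskExecutor whyThreadPool() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);
        executor.setMaxPoolSize(4);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("why-");
        return executor;
    }
}
```

And on the service method, the annotation points at that bean:

```java
@Async("whyThreadPool")
public void syncSay(int i) {
    log.info("hi:{}", i);
}
```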
Run it again, and when it reaches this breakpoint, it’s different from the default case; this time the qualifier has a value:
Next, it will retrieve the bean named whyThreadPool from the beanFactory.
Finally, the thread pool retrieved is this custom thread pool of mine:
This is actually a very simple exploration process, but it embodies a principle.
Previously, some classmates asked me this question:
This question is quite representative; many classmates believe that thread pools should not be overused, and one project should just share one.
Thread pools indeed should not be overused, but a project can have multiple custom thread pools.
It depends on your business scenario.
For example, a simple case: the main business process can use one thread pool, but suppose a certain link in the main process needs to send a warning SMS when something goes wrong. The operation of sending the warning SMS can use another thread pool.
Can they share one thread pool?
Yes, they can.
But what problems might arise?
Suppose a certain business in the project has a problem and keeps frantically sending warning SMS messages, eventually even filling up the thread pool.
At this point, if the main business and the SMS sending use the same thread pool, what beautiful scene might occur?
Isn’t it that as soon as you submit a task, it directly goes to the rejection strategy?
The auxiliary function of sending warning SMS leads to the main business being unable to proceed, which is counterproductive, right?
Therefore, it is recommended to use two different thread pools, each performing its own duties.
This is actually what sounds like a high-end thread pool isolation technique.
So, how does this relate to the @Async annotation?
It’s just like this:
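Something along these lines: three @Async methods, each pointed at its own executor bean via value (the bean names beyond whyThreadPool are hypothetical):

```java
// main business logic runs on the default pool (applicationTaskExecutor)
@Async
public void mainBusiness() {
    log.info("main business");
}

// some side logic gets its own pool
@Async("whyThreadPool")
public void sideBusiness() {
    log.info("side business");
}

// warning SMS gets yet another pool, so it can never starve the main business
@Async("smsThreadPool")
public void sendWarnSms() {
    log.info("send warning sms");
}
```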
And remember the map we mentioned earlier that maintains the relationship between methods and thread pools?
That’s it:
Now, I will run the program to call the above three methods, aiming to populate this map with values:
Do you understand now?
Let me reiterate this sentence:
Maintain the relationship between methods and thread pools at the method level.
Now, I have a bit of understanding of the @Async annotation; I find it quite lovely. Perhaps I will consider using it in projects in the future. After all, it aligns more with Spring Boot's annotation-based development philosophy.
One Last Thing#
Alright, having seen this, feel free to like or follow; I won’t mind if you do both. Writing articles is tiring and needs some positive feedback.
Here’s a bow to all the readers: