Introduction

This article compiles 16 essential coding practices that the author considers indispensable for improving work efficiency.

1. After modifying the code, remember to test it

“Test the code after modification” is a basic requirement for every programmer. Don’t fall into the mentality of taking chances: “I only changed a variable or a line of configuration, so there is no need to test.” After modifying the code, test it yourself whenever possible; this avoids many unnecessary bugs.

2. Validate method parameters whenever possible

Parameter validation is also a basic requirement for every programmer. Any method you write should validate its parameters first: check whether they are allowed to be empty and whether their lengths match what you expect. Try to cultivate this habit, as many “low-level bugs” are caused by skipping parameter validation.
For example, if a database column is defined as varchar(16) and the caller sends a 32-character string, inserting it without validation will cause a database exception.
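
A minimal sketch of what such validation can look like (the createUser method, field names, and length limits are assumptions for illustration, not from the original article):

public void createUser(String name, String phone) {
    // Reject empty input early instead of letting it fail deep inside the call chain
    if (name == null || name.isEmpty()) {
        throw new IllegalArgumentException("name must not be empty");
    }
    // Match the varchar(16) column mentioned above to avoid an insert exception
    if (name.length() > 16) {
        throw new IllegalArgumentException("name must be at most 16 characters");
    }
    // Check the format as well as the length
    if (phone == null || !phone.matches("\\d{6,20}")) {
        throw new IllegalArgumentException("phone must be 6-20 digits");
    }
    // ... proceed with the actual business logic / database insert
}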

3. When modifying an old interface, consider the interface compatibility

Many bugs are caused by modifying an old interface without keeping it compatible. This problem is often quite serious and may directly cause a system release to fail. Novice programmers are especially prone to this mistake.

Therefore, if your requirement is to modify an existing interface, especially one that provides services externally, you must consider interface compatibility. For example, suppose a Dubbo interface originally accepted only parameters A and B, and you now need to add a parameter C. You can handle it like this:

// Old interface: it cannot be deleted yet, so it delegates to the new one for compatibility
void oldService(A a, B b) {
    // Forward to the new interface, passing null (or a sensible default) in place of C
    newService(a, b, null);
}

// New interface with the added parameter C
void newService(A a, B b, C c);

4. For complex code logic, add clear comments

When writing code, you do not need to comment everything; well-named methods and variables are the best comments. However, for code with complex business logic, clear comments are genuinely necessary and make future maintenance much easier.
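
As an illustration (the discount rule below is invented for this example, not taken from the article), a comment that states the business rule is far more useful than one that merely restates the code:

// Business rule (invented for illustration): orders over 1000 get a 10% discount,
// capped at 200, and the discount never applies to pre-sale items.
BigDecimal discount = BigDecimal.ZERO;
if (!order.isPreSale() && order.getAmount().compareTo(new BigDecimal("1000")) > 0) {
    discount = order.getAmount()
                    .multiply(new BigDecimal("0.10"))
                    .min(new BigDecimal("200"));
}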

5. Close IO resources after use

I believe everyone has had this experience: if you open too many files or programs on a Windows desktop, the computer becomes slow. The same applies to our Linux servers. In daily development, when operating on files or database connections, if an IO resource is not closed, it stays occupied and cannot be used by anyone else, which wastes resources.

Therefore, after using an IO stream, close it in a finally block, or better, with try-with-resources:

FileInputStream fdIn = null;
try {
    fdIn = new FileInputStream(new File("/jay.txt"));
} catch (FileNotFoundException e) {
    log.error(e);
} catch (IOException e) {
    log.error(e);
} finally {
    try {
        if (fdIn != null) {
            fdIn.close();
        }
    } catch (IOException e) {
        log.error(e);
    }
}

// JDK 7 introduced a more elegant way to close streams: "try-with-resources"
try (FileInputStream inputStream = new FileInputStream(new File("jay.txt"))) {
    // Use the resource here
} catch (FileNotFoundException e) {
    log.error(e);
} catch (IOException e) {
    log.error(e);
}

6. Take measures in the code to avoid runtime errors (such as array index out of bounds, division by zero, etc.)

In daily development, you need to take measures against runtime errors such as “array index out of bounds, division by zero, null pointer”, and so on. Code like the following is common:

// Counterexample: the list may not have two elements, so this can throw IndexOutOfBoundsException
String name = list.get(1).getName();

// Example: guard against the out-of-bounds access first
if (list != null && list.size() > 1) {
    String name = list.get(1).getName();
}

7. Avoid remote calls or database operations within loops, prefer batch processing

Remote calls and database operations are relatively expensive in terms of network and IO, so try to avoid making remote calls or database operations inside loops. If possible, “fetch the data in batches instead of looping over many single calls” (but do not fetch too much at once; for example, around 500 records per batch).

// Example: one batch query
remoteBatchQuery(param);

// Counterexample: a remote call inside a loop
for (int i = 0; i < n; i++) {
    remoteSingleQuery(param);
}

8. After writing the code, consider what happens if it is executed by multiple threads, and pay attention to concurrency consistency issues

In some common business scenarios, we first check whether a record exists and then perform a corresponding operation (such as a modification). However, the combination of (query + modify) is not atomic. If we imagine it being executed by multiple threads, the problem becomes obvious. Here is a counterexample:

if (isAvailable(ticketId)) {
    // 1. Perform the cash-addition operation
    // 2. Delete the ticket by ID
    deleteTicketById(ticketId);
} else {
    return "No available cash voucher.";
}

Obviously, there is a “concurrency problem”: two threads can both pass the isAvailable check before either deletes the ticket, so the cash addition may run twice. The correct solution is to “use the atomicity of the database delete operation”, as follows:

if (deleteAvailableTicketById(ticketId) == 1) {
    // 1. Perform the cash-addition operation
} else {
    return "No available cash voucher.";
}
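
As a hedged sketch of how deleteAvailableTicketById might be implemented (the table name, column names, and the Spring jdbcTemplate dependency are assumptions for illustration): the availability condition lives inside the DELETE itself, so only one concurrent caller can see an affected-row count of 1.

// Hypothetical implementation: the delete is atomic, and the affected-row count
// (0 or 1) tells the caller whether it "won the race"
int deleteAvailableTicketById(long ticketId) {
    String sql = "DELETE FROM cash_ticket WHERE id = ? AND status = 'AVAILABLE'";
    return jdbcTemplate.update(sql, ticketId);
}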

Therefore, it is also a good habit, after writing the code, to think about how it behaves when executed by multiple threads and whether there are concurrency consistency issues.

9. When accessing object properties, first check if the object is null

This point also falls under “measures taken to avoid runtime exceptions”, but I still want to emphasize it separately because null pointer exceptions are so common. A small slip can lead to a NullPointerException in production. So, when accessing an object's properties, try not to trust that “it should not be null in theory”. Checking for null before reading the property is a good habit. Example:

if (object != null) {
    String name = object.getName();
}

10. When multiple asynchronous threads are needed, prioritize using an appropriate thread pool instead of creating new threads, and consider whether the thread pools should be isolated

Why use thread pools as a priority? Using thread pools has several advantages:

  • It helps us manage threads, avoiding the resource consumption of creating and destroying threads.
  • It improves response speed.
  • It allows for thread reuse.

At the same time, it is advisable not to share one thread pool across all business operations; “thread pool isolation” should be considered. That is, different critical business flows should be given their own thread pools, and the thread pool parameters should be set appropriately.
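
A minimal sketch of per-business thread pool isolation (the pool sizes, queue lengths, and the handleOrder/buildReport tasks are assumptions for illustration, not recommendations from the article):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Separate pools so that slow report tasks cannot exhaust the threads used for orders
ThreadPoolExecutor orderPool = new ThreadPoolExecutor(
        4, 8, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(200),
        new ThreadPoolExecutor.CallerRunsPolicy());

ThreadPoolExecutor reportPool = new ThreadPoolExecutor(
        2, 4, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(100),
        new ThreadPoolExecutor.AbortPolicy());

orderPool.execute(() -> handleOrder());   // order tasks only ever use the order pool
reportPool.execute(() -> buildReport());  // report tasks are isolated in their own pool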

11. After hand-writing the SQL for a business requirement, run it against the database and also explain its execution plan

After writing the SQL for a business requirement by hand, run it against the database to check for syntax errors. Some people have the bad habit of packaging the code and sending it straight to the test server as soon as it is written; actually running the SQL in the database first avoids many errors.

At the same time, use “explain” to check your SQL's execution plan, especially whether it uses indexes.

explain select * from user where userid = 10086 or age = 18;

12. When calling third-party interfaces, consider exception handling, security, timeouts, and retries

When calling third-party services or distributed remote services, it is necessary to consider:

  • Exception handling (when the other party's interface throws an exception, how do you handle it: retry, or treat it as a failure?)
  • Timeout (it is hard to predict how long the other party's interface will take to return, so it is generally recommended to set a timeout so the call is cut off, in order to protect your own interface)
  • Retry count (if the call fails, do you need to retry? This has to be decided from a business perspective)

A simple example: when making an HTTP request to someone else's service, you need to consider setting the connect timeout, the read timeout, and the number of retries.
If it is an important third-party service, such as one involving money transfers, you also need to consider “signature verification” and “encryption”.
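
A minimal sketch using the JDK 11 HttpClient (the URL, timeout values, and retry count are assumptions for illustration; whether a retry is safe also depends on the remote interface being idempotent, as the next point discusses):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Connect timeout protects us if the remote host is slow to accept connections
HttpClient client = HttpClient.newBuilder()
        .connectTimeout(Duration.ofSeconds(2))
        .build();

HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://example.com/api/pay"))   // placeholder URL
        .timeout(Duration.ofSeconds(3))                    // per-request timeout
        .GET()
        .build();

// A naive retry loop: the retry count is a business decision
int maxRetries = 2;
for (int attempt = 0; attempt <= maxRetries; attempt++) {
    try {
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() == 200) {
            break;  // success, stop retrying
        }
    } catch (Exception e) {
        if (attempt == maxRetries) {
            // Treat as a failure after the last attempt (log, alert, or fall back)
            throw new RuntimeException("third-party call failed after retries", e);
        }
    }
}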

13. Interfaces need to consider idempotence

Interfaces need to consider idempotence, especially important interfaces such as grabbing red packets or transferring money. The most intuitive business scenario: if a user clicks the button twice in a row, can your interface handle it correctly?

  • Idempotence is a mathematical and computing concept, commonly seen in abstract algebra.
  • In programming, an idempotent operation is one whose effect after any number of executions is the same as after a single execution. An idempotent function or method can be called repeatedly with the same parameters and will produce the same result.

Generally, the following are several “idempotent technical solutions”:

  • Query operation
  • Unique index
  • Token mechanism to prevent duplicate submissions
  • Delete operation in the database
  • Optimistic locking
  • Pessimistic locking
  • Redis or ZooKeeper distributed locks (a Redis distributed lock was used for a previous red packet requirement)
  • State machine idempotence
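
As a hedged sketch of one of these solutions, the unique-index approach (the table, columns, and the Spring jdbcTemplate/DuplicateKeyException dependencies are assumptions for illustration): the database enforces uniqueness on a business key such as the order number, so a duplicate submission fails instead of creating a second record.

// Assumes a unique index on biz_order_no, e.g.:
//   ALTER TABLE payment ADD UNIQUE KEY uk_biz_order_no (biz_order_no);
public boolean createPayment(String bizOrderNo, long amount) {
    try {
        jdbcTemplate.update(
                "INSERT INTO payment (biz_order_no, amount) VALUES (?, ?)",
                bizOrderNo, amount);
        return true;   // first submission wins
    } catch (DuplicateKeyException e) {
        // A repeated click or retried request hits the unique index;
        // treat it as "already processed" instead of inserting twice
        return false;
    }
}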

14. Consider thread safety in multi-threaded scenarios

In “high-concurrency” scenarios, HashMap is not thread-safe: concurrent writes can lose data, and on older JDKs a concurrent resize could even leave the map spinning in an infinite loop. Consider using ConcurrentHashMap instead. Therefore, do not reach for “new HashMap()” without first considering whether the map will be accessed by multiple threads.

  • HashMap, ArrayList, LinkedList, TreeMap, etc. are not thread-safe.
  • Vector, Hashtable, ConcurrentHashMap, etc. are thread-safe.
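
A minimal sketch of the difference (the page-view counter is an invented illustration): with a plain HashMap, concurrent increments can be lost, whereas ConcurrentHashMap makes compound updates such as merge atomic.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Counterexample: new HashMap<>() here would not be thread-safe,
// and concurrent increments could silently lose updates.
Map<String, Integer> counter = new ConcurrentHashMap<>();

// Example: an atomic, thread-safe increment
counter.merge("pageView", 1, Integer::sum);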

15. Consider the problem of master-slave delay

Logic that inserts data and then immediately queries it back “may” have problems. A database typically has a master and one or more slaves: writes go to the master, while reads usually go to the slaves. If master-slave synchronization lags, you may insert the data successfully but fail to query it back.

  • If it is an important business, consider whether to force reading from the master database or reconsider the design.
  • However, for some business scenarios, it may be acceptable to have a slight delay between the master and slave databases, but it is still a good practice to consider this issue.
  • After writing code that operates the database, think about whether there is a problem of master-slave delay.
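
As a hedged sketch (orderDao and MasterRouteHint are hypothetical names, not real library APIs; real projects typically use the hint or “force master” mechanism of their read-write-splitting framework): for a critical read immediately after a write, route the query to the master explicitly.

// Hypothetical illustration of forcing a read-after-write to the master
orderDao.insert(order);   // the write goes to the master

try (MasterRouteHint ignored = MasterRouteHint.forceMaster()) {
    // Read from the master to avoid seeing stale data caused by replication lag
    Order saved = orderDao.findById(order.getId());
}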

16. When using caching, consider consistency with the database and also avoid issues like cache penetration, cache avalanche, and cache breakdown

In simple terms, we use caching to “improve query speed and reduce interface response time”. However, when using caching, it is necessary to pay attention to the “consistency between the cache and the database”. At the same time, it is also necessary to avoid three major issues: cache penetration, cache avalanche, and cache breakdown.

  • Cache avalanche: refers to a large amount of cached data expiring at the same time while query volume is huge, leading to excessive database pressure or even a crash.
  • Cache penetration: refers to the situation where a query is made for data that definitely does not exist. When the cache misses, it needs to query the database. If the data is not found, it will not be written into the cache. This will result in a database query for this non-existent data every time it is requested, putting additional pressure on the database.
  • Cache breakdown: refers to the situation where a hot key expires at a certain point in time, and at the same time, there are a large number of concurrent requests for this key, resulting in a high volume of requests hitting the database.
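
A hedged sketch of one common mitigation for cache penetration (the key format, TTL values, and the stringRedisTemplate/userDao dependencies are assumptions for illustration): when the database also has no record, cache a short-lived placeholder so repeated requests for the non-existent key stop hitting the database.

public String getUserName(long userId) {
    String key = "user:name:" + userId;
    String cached = stringRedisTemplate.opsForValue().get(key);
    if (cached != null) {
        // "" is the placeholder meaning "known not to exist"
        return cached.isEmpty() ? null : cached;
    }
    String name = userDao.findNameById(userId);   // may return null
    if (name != null) {
        stringRedisTemplate.opsForValue().set(key, name, 30, TimeUnit.MINUTES);
    } else {
        // Cache the miss briefly to block repeated lookups of a non-existent id
        stringRedisTemplate.opsForValue().set(key, "", 1, TimeUnit.MINUTES);
    }
    return name;
}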