Interview

Containers used in development

ArrayList

  • The underlying data structure is an array
  • Expansion mechanism: by default the capacity grows by half (new capacity = old capacity × 1.5). If growing by half is still not enough, the requested minimum capacity (current size + 1) is used as the new capacity (see the sketch after this list)
  • Among the add, remove, set and get operations: add and remove modify modCount (add increments it even when no expansion happens), while set and get never modify it
  • Expansion copies the backing array, and batch deletion compares the two collections and also copies the array, so adds and removes are relatively expensive, while set and get are very efficient
  • Combining these characteristics, take the most common scrolling list in Android as an example: while the list scrolls, the data for each visible item must be read, so get is by far the most frequent operation. Adds only happen when the list loads more data, and new items are appended at the end so no elements need to be shifted; deletes are even rarer. ArrayList is therefore a good structure for holding the list data
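A minimal sketch of the growth rule described above (modelled on ArrayList's grow logic, with simplified names; not the actual JDK source):

// Grow by half of the old capacity; if that is still not enough,
// fall back to the requested minimum capacity (current size + 1).
static int newCapacity(int oldCapacity, int minCapacity) {
    int grown = oldCapacity + (oldCapacity >> 1); // old + old/2
    return (grown >= minCapacity) ? grown : minCapacity;
}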

LinkedList

  • LinkedList and ArrayList are two different implementations of the List interface. The efficiency of adding and deleting ArrayList is low, but the efficiency of changing and checking is high. The LinkedList is just the opposite. Additions and deletions do not need to move the bottom-level array data, and the bottom-level is implemented by a linked list, and only needs to modify the node pointer of the linked list, so the efficiency is higher.
  • LinkedList is a two-way list. When obtaining a node by subscript, (add select), it will perform a halving according to whether the index is in the first half or the second half to improve query efficiency
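A sketch of that halving lookup, modelled on LinkedList's internal node(int) method (Node is assumed to be a doubly linked node with next/prev references, and first, last and size are the list's own fields):

// Walk from the head if the index is in the first half, from the tail otherwise,
// so at most size/2 nodes are visited.
Node<E> node(int index) {
    if (index < (size >> 1)) {
        Node<E> x = first;
        for (int i = 0; i < index; i++) x = x.next;
        return x;
    } else {
        Node<E> x = last;
        for (int i = size - 1; i > index; i--) x = x.prev;
        return x;
    }
}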

HashMap

  • HashMap is an associative array (hash table). It is not thread-safe, allows a null key and null values, and its iteration order is not guaranteed. Before JDK 1.8 it was an array of linked lists; since 1.8, when a bucket's linked list reaches length 8 (and the table is large enough), it is converted into a red-black tree to improve query and insertion efficiency.

  • It is thread-unsafe. A thread-safe wrapper can be obtained through the static method Collections.synchronizedMap: Map map = Collections.synchronizedMap(new HashMap());

  • When the number of entries in the HashMap reaches the threshold (capacity × load factor), a resize is triggered. The table length is always a power of 2 both before and after resizing, so the bucket index for a key's hash can be computed with a bit operation instead of a modulo, which is faster. Because resizing doubles the capacity, each node on an original bucket's linked list ends up either at its original index (the low position) or at the original index plus the old capacity (the high position): high index = low index + old table capacity.

  • The table length of a HashMap (default 16) is much smaller than the range of hash values. When the hash is mapped to a bucket index, the AND with (length - 1) discards the high bits of the hash, so only the low bits of hashCode() take part in the calculation. Different hash values therefore have a much higher chance of landing on the same index; this is a hash collision, i.e. the collision rate increases.

  • The perturbation function exists to reduce such collisions. It XORs the high 16 bits of the hash into the low 16 bits, so that when the index AND is applied, the high bits also influence the result, which lowers the probability of hash collisions.

  • The AND operation replaces the modulo operation: hash & (table.length - 1) is used instead of hash % table.length (see the sketch below).
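A sketch of the two steps just described, in JDK 8 style (table is assumed to be the bucket array):

// 1) Perturbation: XOR the high 16 bits of hashCode() into the low 16 bits.
static int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

// 2) Index: the bitwise AND replaces the modulo, which only works because
//    table.length is always a power of two.
int index = hash(key) & (table.length - 1);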

The difference between HashMap and HashTable

  • In contrast, HashTable is thread-safe and does not allow the key and value to be null.
  • The default capacity of HashTable is 11.
  • HashTable uses the key's hashCode (key.hashCode()) directly as the hash value, unlike HashMap, which perturbs the key's hashCode with its static final int hash(Object key) function before using it.
  • HashTable computes the bucket index directly with the modulo operator %. (Its default capacity is not a power of 2, so the modulo cannot be replaced with a bit operation.)
  • When expanding, the new capacity is 2 × the old capacity + 1: int newCapacity = (oldCapacity << 1) + 1;
  • Hashtable is a subclass of Dictionary and also implements the Map interface. HashMap is an implementation class of the Map interface;

LinkedHashMap

  • LinkedHashMap is an associative array (hash table). It is not thread-safe and allows a null key and null values.
  • It extends HashMap and implements the Map<K,V> interface. Internally it also maintains a doubly linked list: every insertion, access, or modification adds a node to this list or adjusts the order of its nodes, and this list determines the iteration order.
  • By default, iteration follows insertion order, which is the biggest difference from HashMap. An accessOrder flag can also be passed to the constructor: when accessOrder is false (the default), iteration follows insertion order; when true, iteration follows access order (most recently accessed entries come last).
Map<String, String> map = new LinkedHashMap<>();
map.put("1", "a");
map.put("2", "b");
map.put("3", "c");
map.put("4", "d");
Iterator<Map.Entry<String, String>> iterator = map.entrySet().iterator();
while (iterator.hasNext()) {
    System.out.println(iterator.next());
}

System.out.println("The following is the case of accessOrder=true:");
map = new LinkedHashMap<String, String>(10, 0.75f, true);
map.put("1", "a");
map.put("2", "b");
map.put("3", "c");
map.put("4", "d");
map.get("2");        // 2 moved to the end of the internal linked list
map.get("4");        // 4 moved to the end
map.put("3", "e");   // 3 moved to the end
map.put(null, null); // insert a new node with null key and null value
map.put("5", null);  // insert a new node 5
iterator = map.entrySet().iterator();
while (iterator.hasNext()) {
    System.out.println(iterator.next());
}

result:

1=a
2=b
3=c
4=d
The following is the case of accessOrder=true: (the most recently accessed data is moved to the tail)
1=a
2=b
4=d
3=e
null=null
5=null
  • Compared with HashMap there is one small optimization: containsValue() is overridden to traverse the internal linked list directly and compare values. containsKey() is not overridden because HashMap can already locate an entry by key in O(1) time via the hash table, whereas walking the linked list would take O(n).

ArrayMap (container in Android)

  • ArrayMap implements the Map<K, V> interface, so it is also an associative array / hash table that stores data as key→value pairs. It is likewise not thread-safe, and allows null keys and null values.

  • Internally it is based on two arrays: an int[] array stores the hashCode of each entry, and an Object[] array stores the key/value pairs interleaved, so its capacity is twice that of the hash array.

  • It is not suitable for storing large amounts of data. With large data sets its performance degrades by at least 50% and it is slower than a traditional HashMap, because it keeps the hashes sorted in ascending order and relies on binary search.

  • To look up a value, ArrayMap computes the hash of the given key, binary-searches the hash array for that hash to obtain an index, and then uses the index to read the key-value pair from the second array. If the key stored there does not equal the query key, a collision has occurred; ArrayMap then probes outward from that index in both directions, comparing keys one by one until a matching entry is found.

SparseArray

  • SparseArray is a data structure used to replace HashMap on the Android platform. More specifically, it is used to replace a HashMap whose key is of type int and value is of type Object.

  • It is also thread-unsafe, allowing value to be null.

  • In principle,

    • Its internal implementation is also based on two arrays. An int[] array mKeys stores the key of each entry; since the key itself is an int, the key can be thought of as being its own hash value. An Object[] array mValues of the same capacity stores the values.
  • It is also not suitable for large-capacity data storage. When storing large amounts of data, its performance will degrade by at least 50%.

  • It is less time-efficient than a traditional HashMap for lookups, because it keeps the keys sorted in ascending order and locates a key's index in the array with binary search

  • Applicable scene:

    • The amount of data is not large (within a thousand)
    • Space is more important than time
    • Need to use Map, and the key is int type.
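A minimal usage sketch of android.util.SparseArray (int keys, no boxing; lookups use binary search internally):

SparseArray<String> titles = new SparseArray<>();
titles.put(100, "Home");
titles.put(200, "Settings");

String title = titles.get(200);           // "Settings"
String fallback = titles.get(300, "N/A"); // default value when the key is absent

for (int i = 0; i < titles.size(); i++) {
    int key = titles.keyAt(i);        // keys come back in ascending order
    String value = titles.valueAt(i);
}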

Java container class

Java container classes fall into two categories: Collection and Map

Collection

Collection is the interface abstracted from List and Set; it declares the basic operations of these collections and is the top-level interface of the collection hierarchy.

  • List

The List interface represents a sequence (array, queue, linked list, stack, etc.) in which elements may be repeated. Commonly used implementation classes are ArrayList, LinkedList, and Vector.

  • Set

The Set interface represents a collection in which duplicate elements are not allowed (guaranteed by the hashCode and equals methods). The commonly used implementations are HashSet and TreeSet: HashSet is backed by a HashMap, while TreeSet is backed by a TreeMap. TreeSet also implements the SortedSet interface, so it is an ordered collection.

  • The difference between List and Set

The Set interface stores unordered, non-repeating data, while the List interface stores ordered, repeatable data. In a Set, retrieval is relatively slow but insertion and deletion are cheap and do not shift other elements. In a List, looking up elements is fast while insertion and deletion are slower. A List behaves like a dynamically growing array: its length increases automatically with the number of stored elements.

Map

Map is a mapping interface whose elements are key-value pairs. The abstract class AbstractMap provides a skeletal implementation of most of the Map interface; implementation classes such as TreeMap, HashMap, and WeakHashMap all extend AbstractMap.

Iterator

Iterator is an iterator used to traverse a Collection; it cannot be used to traverse a Map directly. All Collection implementation classes provide an iterator() method that returns an Iterator for traversing the collection, and ListIterator is a specialised iterator for List. Enumeration, introduced in JDK 1.0, serves the same purpose as Iterator but offers less functionality and is only used by Hashtable, Vector, and Stack.

Arrays and Collections

Arrays and Collections are utility classes for operating on arrays and collections. For example, Arrays.copyOf() is called extensively inside ArrayList and Vector, and Collections provides many static methods that return synchronized (thread-safe) wrappers of the collection classes. Of course, when a thread-safe collection is needed, the corresponding classes under the java.util.concurrent package should be preferred.

Which are thread-safe containers in Java?

Synchronous container class: using synchronized:

  • 1.Vector
  • 2.HashTable

Concurrent container:

  • 3.ConcurrentHashMap: Segmentation
  • 4. CopyOnWriteArrayList: copy on write
  • 5. CopyOnWriteArraySet: copy on write

Queue:

  • 6. ConcurrentLinkedQueue: It is an unbounded thread-safe queue based on linked nodes that is implemented in a non-blocking way, with very good performance.
  • 7. ArrayBlockingQueue: array-based bounded blocking queue
  • 8. LinkedBlockingQueue: Blocking queue based on a linked list (optionally bounded; unbounded by default, with capacity Integer.MAX_VALUE).
  • 9.PriorityBlockingQueue: Support priority unbounded blocking queue, that is, the elements in the blocking queue can be automatically sorted. By default, the elements are sorted in natural ascending order
  • 10.DelayQueue: An unbounded blocking queue for delaying access to elements.
  • 11.SynchronousQueue: a blocking queue that stores no elements. Each put operation must wait for a corresponding take before another element can be added; its effective capacity is 0.

Deque: (The Deque interface defines a two-way queue. The two-way queue allows entry and exit operations at the head and tail of the queue.)

  • 12. ArrayDeque: Array-based double-ended non-blocking queue (note that ArrayDeque itself is not thread-safe; LinkedBlockingDeque below is the concurrent choice).
  • 13.LinkedBlockingDeque: Two-way blocking queue based on linked list.

Sorted container:

  • 14. ConcurrentSkipListMap: is a thread-safe version of TreeMap
  • 15. ConcurrentSkipListSet: is a thread-safe version of TreeSet

Common Java questions

Question 1: The difference between wait and sleep in multithreading

  • 1. sleep() is a method of the Thread class, while wait() is defined in the Object class.
  • 2. sleep() can be called anywhere, while wait() can only be called inside a synchronized method or synchronized block.
  • 3. Thread.sleep() only gives up the CPU and does not release any locks it holds; Object.wait() gives up the CPU and also releases the monitor of the synchronized resource it occupies (see the sketch below)
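A small illustration of point 3 (the timings are arbitrary): wait() releases the monitor so another thread can enter the synchronized block, while sleep() keeps whatever locks the thread holds.

final Object lock = new Object();

new Thread(() -> {
    synchronized (lock) {
        try {
            lock.wait(1000);   // releases "lock" while waiting
            Thread.sleep(500); // still holds "lock" while sleeping
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}).start();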

Question 2: The difference between == and equals and hashCode in java

  • 1. == compares values for primitive types; for reference types it compares the memory addresses stored in the references. Objects live on the heap and references on the stack, so == compares the reference values on the stack; if it returns true, the two variables point to the same object.
  • 2. equals is a method of the Object class. The default Object.equals compares memory addresses, i.e. whether the two references point to the same object. If a class overrides equals, the result depends on that implementation; typically an override compares the contents of the objects.
  • 3. hashCode() computes a hash code for the object instance, which is used as the key when the object is stored in a hash-based structure.
  • 4. The relationship between equals and hashCode:
  • hashCode() is a native method whose implementation depends on the JVM. If equals() says two objects are equal, their hashCode() values must be equal; if their hashCode() values differ, equals() must return false; if their hashCode() values are equal, equals() may return either true or false.
  • Therefore, whenever equals(Object obj) is overridden, hashCode() must also be overridden, so that any two objects considered equal by equals(Object obj) return the same hashCode() value.
  • 5. The relationship between equals and ==:
  • Integer b1 = 127; is compiled to Integer b1 = Integer.valueOf(127); For values between -128 and 127, cached Integer instances are reused, so comparing two such Integers with == effectively compares the underlying int value. Outside the -128~127 range, valueOf creates new objects, so == compares object references and returns false even when the numeric values are equal (see the example below).
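A quick illustration of the Integer cache behaviour:

Integer a = 127, b = 127;
System.out.println(a == b);       // true  - both references come from the Integer cache
System.out.println(a.equals(b));  // true

Integer c = 128, d = 128;
System.out.println(c == d);       // false - two distinct objects outside the cache range
System.out.println(c.equals(d));  // true  - equals compares the int value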

Question 3: The difference between int and integer

Integer is the wrapper class for int, and int is a primitive Java type. An Integer must be instantiated (or autoboxed) before use: creating an Integer produces an object and the variable holds a reference to it, whereas an int variable stores the value directly. The default value of an Integer field is null, while the default value of an int field is 0.

Question 4: Talk about the understanding of java polymorphism

  • The same message can produce different behavior depending on the object receiving it. At runtime, the actual type of the referenced object is determined, and the method of that actual type is the one invoked.
  • Purpose: to reduce coupling between types. The necessary conditions for polymorphism are inheritance, method overriding (the called method must exist in the parent class), and a parent-class reference pointing to a child-class object.

Question 5: The difference between String, StringBuffer, StringBuilder

  • They are all string classes. The String class stores its characters in a final character array, so a String object is immutable: every String operation creates a new String object, which is inefficient and wastes memory, but it makes String thread-safe.
  • StringBuilder and StringBuffer also store characters in a character array, but both are mutable: appending to them does not create new objects.
  • The difference between them: StringBuffer's methods are synchronized, so it is thread-safe; StringBuilder is not thread-safe (a short comparison follows).
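A small comparison, concatenating in a loop (String allocates a new object on every "+", StringBuilder appends into one mutable buffer; use StringBuffer instead if several threads share the builder):

String s = "";
for (int i = 0; i < 3; i++) {
    s += i;                    // each iteration creates a new String object
}

StringBuilder sb = new StringBuilder();
for (int i = 0; i < 3; i++) {
    sb.append(i);              // reuses the same internal char buffer
}
String result = sb.toString(); // "012"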

Question 6: The difference between Serializable and Parcelable

Common Android questions

ThreadLocal

  • 1: Gives each thread its own independent copy of a value (the stored resource), so threads do not interfere with each other's copies
  • 2: If two threads execute the same code that references a ThreadLocal variable, neither can access the value stored by the other

Principle: when a value is stored, the current thread's ThreadLocalMap is obtained first; the key of the entry is the ThreadLocal instance itself and the value is the stored value. Each thread owns its own ThreadLocalMap, so it never conflicts with other threads' resources (see the sketch below)
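A minimal sketch of per-thread storage (SimpleDateFormat is used here only because it is a well-known non-thread-safe class; java.text.SimpleDateFormat and java.util.Date are assumed to be imported):

// Each thread gets its own SimpleDateFormat instance; none is ever shared.
private static final ThreadLocal<SimpleDateFormat> FORMAT =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

static String format(Date date) {
    // get() returns the value stored in the *current* thread's ThreadLocalMap
    return FORMAT.get().format(date);
}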

Examples of applications in actual development

  • The main thread and the child thread get data through spliteHelper

Handler message mechanism

Messages produced on a child thread that need to update the UI are delivered to the UI thread, so that work done on a child thread can still result in UI updates. This realises asynchronous message handling and avoids the thread-safety problems of touching the UI from a non-UI thread

Related concepts:

  • Message: The data unit of communication between threads
  • MessageQueue: the queue that stores messages (a first-in, first-out data structure)
  • Handler:
    • 1: The communication medium between the main thread and the child thread (add the message to the message queue)
    • 2: The main processor of the thread message (processing the message dispatched by the looper Looper)
  • Looper: the communication medium between the message queue and the processor
    • 1: Message acquisition: cyclically remove messages from the message queue
    • 2: Message distribution: send the retrieved message to the corresponding processor
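A minimal sketch tying these pieces together: a worker thread sends a Message through a Handler that is bound to the main thread's Looper, and handleMessage runs on the main thread.

Handler uiHandler = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        // runs on the main thread; safe to touch the UI here
    }
};

new Thread(() -> {
    Message msg = Message.obtain();
    msg.what = 1;
    uiHandler.sendMessage(msg); // enqueued into the main thread's MessageQueue, dispatched by its Looper
}).start();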

Source code analysis:

Thread Pool

A thread pool keeps a number of thread objects in a container in advance. When a task needs to run, instead of creating a new Thread, a thread is taken directly from the pool. This saves the cost of spawning child threads and improves execution efficiency.

Role

  • 1: Reuse threads
  • 2: Manage threads (uniform allocation, tuning and monitoring; control the maximum number of concurrent threads in the pool)

advantage

  • 1: Reduce the performance overhead caused by the creation/destruction of threads
  • 2: Improve thread response speed/execution efficiency
  • 3: Improve the management of threads

Core parameters:

  • corePoolSize: number of core threads
  • maximumPoolSize: the maximum number of threads that the thread pool can hold
  • keepAliveTime: non-core thread idle timeout time
  • TimeUnit unit: Specify the time unit of the keepAliveTime parameter
  • workQueue: task queue
  • threadFactory: thread factory, used to create new threads for the pool (see the construction sketch below)
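A sketch of wiring these parameters into a ThreadPoolExecutor directly (classes from java.util.concurrent; the concrete numbers and the queue are arbitrary example values):

ThreadPoolExecutor executor = new ThreadPoolExecutor(
        2,                                 // corePoolSize
        4,                                 // maximumPoolSize
        30L, TimeUnit.SECONDS,             // keepAliveTime + its TimeUnit for idle non-core threads
        new LinkedBlockingQueue<>(100),    // workQueue
        Executors.defaultThreadFactory(),  // threadFactory
        new ThreadPoolExecutor.AbortPolicy()); // handler applied when the pool rejects a task

executor.execute(() -> System.out.println("Perform the task"));
executor.shutdown();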

Common 4 types of functional thread pools:

  • Fixed-length thread pool (FixedThreadPool)

It is a thread pool with a fixed number of threads. When threads are idle, they will not be retracted unless the thread pool is closed. When all threads are active, the new task will be in a waiting state until a thread is free

// 1. Create a fixed-size thread pool with 3 threads
ExecutorService fixedThreadPool = Executors.newFixedThreadPool(3);
// 2. Create a Runnable task to be executed
Runnable task = new Runnable() {
    public void run() {
        System.out.println("Perform the task");
    }
};
// 3. Submit the task to the thread pool: execute()
fixedThreadPool.execute(task);
// 4. Shut down the thread pool
fixedThreadPool.shutdown();
  • Timed thread pool (ScheduledThreadPool)

The number of core threads is fixed, while the number of non-core threads is unlimited, and non-core threads are reclaimed immediately when they become idle. ScheduledThreadPool is mainly used for scheduled tasks and repeating tasks with a fixed period

// 1. Create a scheduled thread pool with 5 core threads
ScheduledExecutorService scheduledThreadPool = Executors.newScheduledThreadPool(5);
// 2. Create a Runnable task to be executed
Runnable task = new Runnable() {
    public void run() {
        System.out.println("Perform the task");
    }
};
// 3. Submit the task to the thread pool
scheduledThreadPool.schedule(task, 1, TimeUnit.SECONDS);                        // run once after a 1s delay
scheduledThreadPool.scheduleAtFixedRate(task, 10, 1000, TimeUnit.MILLISECONDS); // after a 10ms delay, run every 1000ms
// 4. Shut down the thread pool
scheduledThreadPool.shutdown();
  • Cached thread pool (CachedThreadPool)

It is a thread pool with an unfixed number of threads: it has only non-core threads, and the maximum thread count is Integer.MAX_VALUE, which in practice means the number of threads can grow arbitrarily. When all threads in the pool are busy, the pool creates new threads for new tasks; otherwise idle threads are reused. Idle threads have a 60-second timeout and are reclaimed once it expires. Unlike FixedThreadPool, CachedThreadPool's task queue is effectively an empty collection: it uses a SynchronousQueue, which cannot hold elements, so every submitted task is handed to a thread immediately. SynchronousQueue is a very special queue that in many cases can be understood as a queue that cannot store elements (it is rarely used directly). Given these characteristics, CachedThreadPool is well suited to executing a large number of short-lived tasks. When the whole pool is idle, all of its threads time out and are stopped, so an idle CachedThreadPool consumes almost no system resources.

// 1. Create a cached thread pool
ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
// 2. Create a Runnable task to be executed
Runnable task = new Runnable() {
    public void run() {
        System.out.println("Perform the task");
    }
};
// 3. Submit the task to the thread pool: execute()
cachedThreadPool.execute(task);
// 4. Shut down the thread pool
cachedThreadPool.shutdown();
// If a second task is submitted after the first has finished, the thread that ran
// the first task is reused instead of creating a new thread every time.
  • Single threaded thread pool (SingleThreadExecutor)

This type of thread pool has only one core thread, which guarantees that all tasks are executed sequentially on the same thread. The point of SingleThreadExecutor is to funnel all external tasks onto a single thread, so there is no need to handle thread synchronization between those tasks

// 1. Create a single-threaded thread pool
ExecutorService singleThreadExecutor = Executors.newSingleThreadExecutor();
// 2. Create a Runnable task to be executed
Runnable task = new Runnable() {
    public void run() {
        System.out.println("Perform the task");
    }
};
// 3. Submit the task to the thread pool: execute()
singleThreadExecutor.execute(task);
// 4. Shut down the thread pool
singleThreadExecutor.shutdown();

Thread reuse in thread pool:

The key is the addWorker() method in the source: it is where new worker threads are created and also the entry point for thread reuse. Execution eventually reaches runWorker(), and a worker can obtain tasks in two ways:

  • firstTask: the first Runnable specified when the Worker is created; the worker thread runs it and then sets it to null to indicate that the task has been executed.
  • getTask(): an endless loop in which the worker thread keeps trying to take a Runnable from the task queue (workQueue), returning either when a task is obtained or when the wait times out.

This mirrors the enqueue operation seen earlier: there is an entry side and an exit side. A worker does not only execute the firstTask given at creation time; it also keeps pulling tasks from the task queue through getTask(), blocking with or without a time limit while waiting, which is what keeps the thread alive for reuse.

What kinds of work queues are there in thread pools?

1. ArrayBlockingQueue

It is a bounded blocking queue based on an array structure. This queue sorts the elements according to the FIFO (first in first out) principle.

2. LinkedBlockingQueue

A blocking queue based on a linked list structure, this queue sorts the elements according to FIFO (first in first out), and the throughput is usually higher than that of ArrayBlockingQueue. The static factory methods Executors.newFixedThreadPool() and Executors.newSingleThreadExecutor use this queue.

3. SynchronousQueue

A blocking queue that does not store elements. Each insert operation must wait until another thread calls the remove operation, otherwise the insert operation has been blocked, and the throughput is usually higher than LinkedBlockingQueue. The static factory method Executors.newCachedThreadPool uses this queue.

4. PriorityBlockingQueue

An unbounded blocking queue with priority support.

How to understand unbounded queues and bounded queues?

  1. Bounded queue

1. While poolSize < corePoolSize, each submitted Runnable is handed to a new Thread and executed immediately.
2. Once the number of submitted tasks exceeds corePoolSize, new Runnables are placed into the blocking queue.
3. When the bounded queue is full and poolSize < maximumPoolSize, a new thread is created as an emergency measure and the Runnable is executed immediately.
4. If step 3 cannot handle the task either (the pool has reached maximumPoolSize), the reject operation is performed.

  2. Unbounded queue

Compared with a bounded queue, tasks submitted to an unbounded queue never fail to enqueue unless system resources are exhausted. When a new task arrives and the number of threads is below corePoolSize, a new thread is created to run it; once corePoolSize is reached, the thread count stops growing. If more tasks arrive while no thread is free, they simply wait in the queue. When tasks are produced much faster than they are processed, an unbounded queue keeps growing until system memory runs out. With a bounded queue, once the buffer queue is full and the thread count has reached maximumPoolSize, the rejection policy is applied to any further tasks (see the sketch below).
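A sketch of the bounded-queue behaviour described above, with deliberately tiny numbers (1 core thread, 2 max, queue capacity 1; classes from java.util.concurrent):

Runnable task = () -> {
    try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
};

ThreadPoolExecutor pool = new ThreadPoolExecutor(
        1, 2, 0L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(1)); // bounded queue, capacity 1

pool.execute(task); // 1st task: runs on the core thread
pool.execute(task); // 2nd task: waits in the queue
pool.execute(task); // 3rd task: queue is full -> a non-core thread is created
pool.execute(task); // 4th task: pool and queue both full -> rejected (default AbortPolicy throws)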

Multithreaded development in Android

  • AsyncTask

It mainly uses an internal thread pool to execute time-consuming tasks and then a Handler to deliver results to the UI thread. The framework limits the number of worker threads (128 in older implementations); exceeding that limit causes task execution to fail with an error

  • HandlerThread

HandlerThread extends Thread; it is a Thread on which a Handler can be used. Its implementation is simple: in its run() method it creates a message queue with Looper.prepare() and starts the message loop with Looper.loop(), so in practice a Handler can be created that is bound to the HandlerThread's Looper

  • IntentService

Override the onHandleIntent() method to handle time-consuming tasks

Event distribution mechanism

  • For dispatchTouchEvent and onTouchEvent, returning true means the event is consumed and delivery ends; returning false passes the event back to the parent View's onTouchEvent.
  • If a ViewGroup wants to dispatch an event to its own onTouchEvent, its interceptor onInterceptTouchEvent must return true to intercept the event.
  • ViewGroup's onInterceptTouchEvent does not intercept by default, i.e. return super.onInterceptTouchEvent() is equivalent to return false.
  • View has no interceptor. So that a View can dispatch events to its own onTouchEvent, the default (super) implementation of View's dispatchTouchEvent dispatches the event to onTouchEvent (a small interception sketch follows this list).
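A hedged sketch of a ViewGroup claiming events for itself (the class name InterceptingLayout and the shouldHandleMyself() condition are made up for illustration):

public class InterceptingLayout extends FrameLayout {

    public InterceptingLayout(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    public boolean onInterceptTouchEvent(MotionEvent ev) {
        // The default super implementation returns false (no interception).
        // Returning true here stops delivery to children and routes the
        // following events to this ViewGroup's own onTouchEvent.
        return shouldHandleMyself(ev);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        return true; // consume the event stream once we have intercepted it
    }

    private boolean shouldHandleMyself(MotionEvent ev) {
        return ev.getAction() == MotionEvent.ACTION_MOVE; // placeholder condition
    }
}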

Some thoughts on event distribution

If both an onTouch and an onClick listener are set on a button, which one runs first? onTouch runs before onClick, and onTouch fires at least twice: once for ACTION_DOWN and once for ACTION_UP (possibly with several ACTION_MOVE events in between if the finger moves). So the event is delivered to onTouch first and only then to onClick.

button.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View v) {
        Log.d("TAG", "onClick execute");
    }
});

button.setOnTouchListener(new OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        Log.d("TAG", "onTouch execute, action " + event.getAction());
        return false;
    }
});

What happens if we try to change the return value in the onTouch method to true?

The onClick method is no longer executed

the reason:

This is the dispatchTouchEvent source code of view:

public boolean dispatchTouchEvent(MotionEvent event) {
    if (mOnTouchListener != null && (mViewFlags & ENABLED_MASK) == ENABLED &&
            mOnTouchListener.onTouch(this, event)) {
        return true;
    }
    return onTouchEvent(event);
}

dispatchTouchEvent returns true (the event is consumed) only when all three of its conditions are met. The three conditions are:

  • The first condition is mOnTouchListener != null: as soon as we register a touch listener on the control, mOnTouchListener is assigned.
  • The second condition, (mViewFlags & ENABLED_MASK) == ENABLED, checks whether the clicked control is enabled. Buttons are enabled by default, so this condition is normally true.
  • The third condition is the key one: mOnTouchListener.onTouch(this, event) is the callback into the onTouch method registered for the control. In other words, if we return true from onTouch, all three conditions hold and the whole method returns true directly; if we return false from onTouch, onTouchEvent(event) is executed next.
public boolean onTouchEvent ( MotionEvent event ) { final int viewFlags = mViewFlags; if ((viewFlags & ENABLED_MASK) == DISABLED) { //A disabled view that is clickable still consumes the touch //events, it just doesn't respond to them. return (((viewFlags & CLICKABLE) == CLICKABLE || (viewFlags & LONG_CLICKABLE) == LONG_CLICKABLE)); } if (mTouchDelegate != null ) { if (mTouchDelegate.onTouchEvent(event)) { return true ; } } if (((viewFlags & CLICKABLE) == CLICKABLE || (viewFlags & LONG_CLICKABLE) == LONG_CLICKABLE)) { switch (event.getAction()) { case MotionEvent.ACTION_UP: boolean prepressed = (mPrivateFlags & PREPRESSED) != 0 ; if ((mPrivateFlags & PRESSED) != 0 || prepressed) { //take focus if we don't have it already and we should in //touch mode. boolean focusTaken = false ; if (isFocusable() && isFocusableInTouchMode() && !isFocused()) { focusTaken = requestFocus(); } if (!mHasPerformedLongPress) { //This is a tap, so remove the longpress check removeLongPressCallback(); //Only perform take click actions if we were in the pressed state if (!focusTaken) { //Use a Runnable and post this rather than calling //performClick directly. This lets other visual state //of the view update before click actions start. if (mPerformClick == null ) { mPerformClick = new PerformClick (); } if (!post(mPerformClick)) { performClick(); } } } if (mUnsetPressedState == null ) { mUnsetPressedState = new UnsetPressedState(); } if (prepressed) { mPrivateFlags |= PRESSED; refreshDrawableState(); postDelayed(mUnsetPressedState, ViewConfiguration.getPressedStateDuration()); } else if (!post(mUnsetPressedState)) { //If the post failed, unpress right now mUnsetPressedState.run(); } removeTapCallback(); } break ; case MotionEvent.ACTION_DOWN: if (mPendingCheckForTap == null ) { mPendingCheckForTap = new CheckForTap(); } mPrivateFlags |= PREPRESSED; mHasPerformedLongPress = false ; postDelayed(mPendingCheckForTap, ViewConfiguration.getTapTimeout()); break ; case MotionEvent.ACTION_CANCEL: mPrivateFlags &= ~PRESSED; refreshDrawableState(); removeTapCallback(); break ; case MotionEvent.ACTION_MOVE: final int x = (int) event.getX(); final int y = (int) event.getY(); //Be lenient about moving outside of buttons int slop = mTouchSlop; if ((x < 0 -slop) || (x >= getWidth() + slop) || (y < 0 -slop) || (y >= getHeight() + slop)) { //Outside button removeTapCallback(); if ((mPrivateFlags & PRESSED) != 0 ) { //Remove any future long press/tap checks removeLongPressCallback(); //Need to switch from pressed to not pressed mPrivateFlags &= ~PRESSED; refreshDrawableState(); } } break ; } return true ; } return false ; } Copy code

Here we know that the onClick call must be in the onTouchEvent(event) method!

public boolean performClick() {
    sendAccessibilityEvent(AccessibilityEvent.TYPE_VIEW_CLICKED);
    if (mOnClickListener != null) {
        playSoundEffect(SoundEffectConstants.CLICK);
        mOnClickListener.onClick(this);
        return true;
    }
    return false;
}

View drawing process

The process is mainly divided into the measure, layout, and draw passes

measure

When measuring a View you will inevitably meet MeasureSpec, which can be translated as "measurement specification" or "measurement requirement". "A MeasureSpec encapsulates the layout requirements passed from a parent container to a child container", and "passed" is the important and precise word: a child's MeasureSpec is computed by a simple calculation from the parent View's MeasureSpec and the child View's own LayoutParams, and the resulting measurement requirement for the child is its MeasureSpec.

A MeasureSpec is a combination of a size and a mode packed into a single 32-bit int: the top 2 bits hold the measurement mode and the low 30 bits hold the measurement size.

Classification of measurement modes

  • EXACTLY

Exact mode: the width or height of the control is a specific value, e.g. android:layout_width="100dp", or match_parent (take up the size of the parent view)

  • AT_MOST

Maximum mode: when the width or height attribute is android:layout_width="wrap_content", the size of the control is determined by its content, as long as it does not exceed the size of the parent control

  • UNSPECIFIED

The parent container places no restriction on the child container; the child can be as large as it wants

Measurement process:

The parent View's measure pass measures its child Views first, and only measures itself once the children's measurement results are available.

From the parent View's MeasureSpec and padding, together with the child View's margins and the space already used (widthUsed), the child View's MeasureSpec can be calculated.

protected void measureChildWithMargins ( View child, int parentWidthMeasureSpec, int widthUsed, int parentHeightMeasureSpec, int heightUsed ) { //LayoutParams of the child View, your layout_width and layout_height in xml, //the value of layout_xxx will be encapsulated in this LayoutParams at the end. final MarginLayoutParams lp = (MarginLayoutParams) child.getLayoutParams(); //According to the measurement specifications of the parent View and the Padding of the parent View, //and the Margin of the child View and the used space size (widthUsed), the MeasureSpec of the child view can be calculated. The specific calculation process sees the getChildMeasureSpec method. final int childWidthMeasureSpec = getChildMeasureSpec(parentWidthMeasureSpec, mPaddingLeft + mPaddingRight + lp.leftMargin + lp.rightMargin + widthUsed, lp.width); final int childHeightMeasureSpec = getChildMeasureSpec(parentHeightMeasureSpec, mPaddingTop + mPaddingBottom + lp.topMargin + lp.bottomMargin + heightUsed, lp.height); //Calculate the MeasureSpec of the child View through the calculation of the MeasureSpec of the parent View and the LayoutParams of the child View, and then pass the parent container to the child container //and let the child View use this MeasureSpec (a measurement requirement, such as how big it cannot exceed) to measure By myself, if the child View is a ViewGroup, it will recursively measure down. child.measure(childWidthMeasureSpec, childHeightMeasureSpec); } //The spec parameter represents the MeasureSpec of the parent View //the padding parameter Padding of the parent View + the Margin of the child View, the size of the parent View minus these margins can be accurately calculated //The size of the MeasureSpec of the child View //The childDimension parameter represents the child The value of the LayoutParams property inside the View (lp.width or lp.height) //can be wrap_content, match_parent, an exact finger (an exact size), public static int getChildMeasureSpec ( int spec, int padding, int childDimension ) { int specMode = MeasureSpec.getMode(spec); //Get the mode of the parent View int specSize = MeasureSpec.getSize(spec); //Get the size of the parent View //The size of the parent View-own Padding + Margin of the child View, the value obtained is the size of the child View. int size = Math .max( 0 , specSize-padding); int resultSize = 0 ; //Initialize the value, and finally generate the MeasureSpec of the child View through these two values int resultMode = 0 ; //Initialize the value, and finally generate the MeasureSpec of the child View through these two values switch (specMode) { //Parent has imposed an exact size on us //1, the parent View is EXACTLY! case MeasureSpec.EXACTLY: //1.1. The width or height of the child View is an exact value (an exactly size) if (childDimension >= 0 ) { resultSize = childDimension; //size is the exact value resultMode = MeasureSpec.EXACTLY; //mode is EXACTLY. } //1.2. The width or height of the child View is MATCH_PARENT/FILL_PARENT else if (childDimension == LayoutParams.MATCH_PARENT) { //Child wants to be our size. So be it. resultSize = size; //size is the size of the parent view resultMode = MeasureSpec.EXACTLY; //mode is EXACTLY. } //1.3. The width or height of the child View is WRAP_CONTENT else if (childDimension == LayoutParams.WRAP_CONTENT) { //Child wants to determine its own size. It can't be //bigger than us. resultSize = size; //size is the size of the parent view resultMode = MeasureSpec.AT_MOST; //mode is AT_MOST. 
} break ; //Parent has imposed a maximum size on us //2, the parent View is AT_MOST! case MeasureSpec.AT_MOST: //2.1. The width or height of the child View is an exact value (an exactly size) if (childDimension >= 0 ) { //Child wants a specific size... so be it resultSize = childDimension; //size is the exact value resultMode = MeasureSpec.EXACTLY; //mode is EXACTLY. } //2.2. The width or height of the child View is MATCH_PARENT/FILL_PARENT else if (childDimension == LayoutParams.MATCH_PARENT) { //Child wants to be our size, but our size is not fixed. //Constrain child to not be bigger than us. resultSize = size; //size is the size of the parent view resultMode = MeasureSpec.AT_MOST; //mode is AT_MOST } //2.3. The width or height of the child View is WRAP_CONTENT else if (childDimension == LayoutParams.WRAP_CONTENT) { //Child wants to determine its own size. It can't be //bigger than us. resultSize = size; //size is the size of the parent view resultMode = MeasureSpec.AT_MOST; //mode is AT_MOST } break ; //Parent asked to see how big we want to be //3, the parent View is UNSPECIFIED! case MeasureSpec.UNSPECIFIED: //3.1. The width or height of the child View is an exact value (an exactly size) if (childDimension >= 0 ) { //Child wants a specific size... let him have it resultSize = childDimension; //size is the exact value resultMode = MeasureSpec.EXACTLY; //mode is EXACTLY } //3.2. The width or height of the child View is MATCH_PARENT/FILL_PARENT else if (childDimension == LayoutParams.MATCH_PARENT) { //Child wants to be our size... find out how big it should //be resultSize = 0 ; //size is 0! , Its value is undetermined resultMode = MeasureSpec.UNSPECIFIED; //mode is UNSPECIFIED } //3.3. The width or height of the child View is WRAP_CONTENT else if (childDimension == LayoutParams.WRAP_CONTENT) { //Child wants to determine its own size.... find out how //big it should be resultSize = 0 ; //size is 0!, its value is undefined resultMode = MeasureSpec.UNSPECIFIED; //mode is UNSPECIFIED } break ; } //Construct a MeasureSpec object based on the mode and size obtained by the above logical conditions. return MeasureSpec.makeMeasureSpec(resultSize, resultMode); } Copy code

The measurement process of the subview

The measurement of a View happens mainly in the onMeasure() method; to customize how a View measures itself, override onMeasure()

protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    setMeasuredDimension(
            getDefaultSize(getSuggestedMinimumWidth(), widthMeasureSpec),
            getDefaultSize(getSuggestedMinimumHeight(), heightMeasureSpec));
}
// Returns the value of the android:minWidth attribute or the width of the View's background drawable
protected int getSuggestedMinimumWidth() {
    return (mBackground == null) ? mMinWidth : max(mMinWidth, mBackground.getMinimumWidth());
}

// The "size" parameter is usually the suggested minimum size (android:minWidth/minHeight or the background size)
public static int getDefaultSize(int size, int measureSpec) {
    int result = size;
    int specMode = MeasureSpec.getMode(measureSpec);
    int specSize = MeasureSpec.getSize(measureSpec);

    switch (specMode) {
        case MeasureSpec.UNSPECIFIED:
            // The parent imposes no constraint on this View; use the default value
            result = size;
            break;
        case MeasureSpec.AT_MOST:
        case MeasureSpec.EXACTLY:
            result = specSize;
            break;
    }
    return result;
}

Measure process of ViewGroup

// FrameLayout's measurement
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    ....
    int maxHeight = 0;
    int maxWidth = 0;
    int childState = 0;

    for (int i = 0; i < count; i++) {
        final View child = getChildAt(i);
        if (mMeasureAllChildren || child.getVisibility() != GONE) {
            // Traverse the child Views: every child that is not GONE takes part in measurement.
            // measureChildWithMargins was explained above: the parent combines its own MeasureSpec
            // with the child's LayoutParams to compute the child's MeasureSpec and passes it down.
            // A leaf View has no children, so it simply measures itself with the MeasureSpec it receives.
            measureChildWithMargins(child, widthMeasureSpec, 0, heightMeasureSpec, 0);
            final LayoutParams lp = (LayoutParams) child.getLayoutParams();
            maxWidth = Math.max(maxWidth,
                    child.getMeasuredWidth() + lp.leftMargin + lp.rightMargin);
            maxHeight = Math.max(maxHeight,
                    child.getMeasuredHeight() + lp.topMargin + lp.bottomMargin);
            ....
        }
    }
    .....
    // After all children are measured, the parent sets its own width and height via setMeasuredDimension.
    // For FrameLayout this is roughly the size of the largest child; for LinearLayout it may be the sum of
    // the heights. In general, a parent View waits until all of its children are measured before measuring itself.
    setMeasuredDimension(resolveSizeAndState(maxWidth, widthMeasureSpec, childState),
            resolveSizeAndState(maxHeight, heightMeasureSpec,
                    childState << MEASURED_HEIGHT_STATE_SHIFT));
    ....
}

Briefly describe custom FlowLayout

Building a FlowLayout first raises the following questions:

  • (1) When to wrap to a new line; (2) how to determine the FlowLayout's width; (3) how to determine the FlowLayout's height

step:

  • Override onMeasure() - calculate the width and height occupied by the FlowLayout
    • First, on entry, read the mode and the size suggested by the system from the MeasureSpec
    • Then, calculate the space the FlowLayout occupies:
      • Traverse the children, accumulating each child's width (including its margins); once the accumulated width would exceed the available width, wrap to a new line
    • Finally, report the result to the system via setMeasuredDimension()
  • Override onLayout() - lay out all child controls
    • Based on whether a line wrap occurs, compute the left and top coordinates of each control on the current line and lay it out (a compressed sketch follows)
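A compressed sketch of the onMeasure logic outlined above, assuming it sits inside a class extending ViewGroup (margins and padding are omitted for brevity; a real FlowLayout would add them in):

@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    int specWidth = MeasureSpec.getSize(widthMeasureSpec);
    int lineWidth = 0, lineHeight = 0; // current line
    int usedWidth = 0, usedHeight = 0; // whole FlowLayout

    for (int i = 0; i < getChildCount(); i++) {
        View child = getChildAt(i);
        measureChild(child, widthMeasureSpec, heightMeasureSpec);
        int cw = child.getMeasuredWidth();
        int ch = child.getMeasuredHeight();

        if (lineWidth + cw > specWidth) {      // does not fit: wrap to the next line
            usedWidth = Math.max(usedWidth, lineWidth);
            usedHeight += lineHeight;
            lineWidth = cw;
            lineHeight = ch;
        } else {                               // fits: stay on the current line
            lineWidth += cw;
            lineHeight = Math.max(lineHeight, ch);
        }
    }
    usedWidth = Math.max(usedWidth, lineWidth); // account for the last line
    usedHeight += lineHeight;

    setMeasuredDimension(
            MeasureSpec.getMode(widthMeasureSpec) == MeasureSpec.EXACTLY
                    ? specWidth : usedWidth,
            MeasureSpec.getMode(heightMeasureSpec) == MeasureSpec.EXACTLY
                    ? MeasureSpec.getSize(heightMeasureSpec) : usedHeight);
}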

Detailed analysis of the realization and principle of java dynamic proxy

Static proxy

  • The proxy class and the proxied class implement the same interface
  • The proxy class holds a reference to the proxied object
  • Calling a method on the proxy actually executes the corresponding method of the proxied object (see the sketch below)
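A minimal static-proxy sketch matching the three points above (the Person/Student/giveMoney names mirror the dynamic-proxy example further down; the extra printing is just a stand-in for the added behaviour):

interface Person {
    void giveMoney();
}

class Student implements Person {
    @Override
    public void giveMoney() {
        System.out.println("Student pays the fee");
    }
}

class StudentProxy implements Person {   // implements the same interface as the real class
    private final Person target;         // holds a reference to the proxied object

    StudentProxy(Person target) {
        this.target = target;
    }

    @Override
    public void giveMoney() {
        System.out.println("before");    // extra work around the real call
        target.giveMoney();              // delegates to the proxied object
        System.out.println("after");
    }
}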

Dynamic proxy

A proxy whose class is created while the program is running is called a dynamic proxy. In the static proxy example above, the proxy class (studentProxy) is defined by hand and compiled before the program runs.

Compared with a static proxy, the advantage of a dynamic proxy is that functionality can be added uniformly for all methods of the proxied class without modifying every method of each proxy class. A simple dynamic proxy looks like this:

// Create an InvocationHandler associated with the proxied object
InvocationHandler stuHandler = new MyInvocationHandler<Person>(stu);
// Create the proxy object stuProxy; every method call on the proxy is routed to the handler's invoke method
Person stuProxy = (Person) Proxy.newProxyInstance(
        Person.class.getClassLoader(), new Class<?>[]{Person.class}, stuHandler);

Create the StuInvocationHandler class, which implements the InvocationHandler interface. This class holds an instance (target) of the proxied object. InvocationHandler declares an invoke method, and every method call on the proxy object is routed to invoke.

The corresponding method of the proxied object target is then executed inside the invoke method.

public class StuInvocationHandler<T> implements InvocationHandler {

    // The proxied object held by the InvocationHandler
    T target;

    public StuInvocationHandler(T target) {
        this.target = target;
    }

    /**
     * proxy:  the dynamic proxy object
     * method: the method being executed
     * args:   the actual arguments passed when the target method is called
     */
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("agent executes the " + method.getName() + " method");
        // Insert a monitoring hook around the call to measure how long the method takes
        MonitorUtil.start();
        Object result = method.invoke(target, args);
        MonitorUtil.finish(method.getName());
        return result;
    }
}

Running it:

public class ProxyTest {
    public static void main(String[] args) {
        // Create the instance that will be proxied
        Person zhangsan = new Student(" ");

        // Create an InvocationHandler associated with the proxied object
        InvocationHandler stuHandler = new StuInvocationHandler<Person>(zhangsan);

        // Create the proxy object stuProxy to proxy zhangsan; every method call on
        // the proxy is routed to the handler's invoke method
        Person stuProxy = (Person) Proxy.newProxyInstance(
                Person.class.getClassLoader(), new Class<?>[]{Person.class}, stuHandler);

        // Call the fee-paying method through the proxy
        stuProxy.giveMoney();
    }
}

Principle analysis

  • Above, we used the newProxyInstance method of the Proxy class to create the dynamic proxy object; it simply encapsulates the steps of creating the dynamic proxy class:
public static Object newProxyInstance(ClassLoader loader, Class<?>[] interfaces, InvocationHandler h) throws IllegalArgumentException { Objects.requireNonNull(h); final Class<?>[] intfs = interfaces.clone(); final SecurityManager sm = System.getSecurityManager(); if (sm != null ) { checkProxyAccess(Reflection.getCallerClass(), loader, intfs); } /* * Look up or generate the designated proxy class. */ Class<?> cl = getProxyClass0(loader, intfs); /* * Invoke its constructor with the designated invocation handler. */ try { if (sm != null ) { checkNewProxyPermission(Reflection.getCallerClass(), cl); } final Constructor<?> cons = cl.getConstructor( constructorParams ); final InvocationHandler ih = h; if (!Modifier.isPublic(cl.getModifiers())) { AccessController.doPrivileged( new PrivilegedAction<Void>() { public Void run () { cons.setAccessible( true ); return null ; } }); } return cons.newInstance( new Object []{h}); } catch (IllegalAccessException|InstantiationException e) { throw new InternalError (e.toString(), e); } catch (InvocationTargetException e) { Throwable t = e.getCause(); if (t instanceof RuntimeException) { throw (RuntimeException) t; } else { throw new InternalError (t.toString(), t); } } catch (NoSuchMethodException e) { throw new InternalError (e.toString(), e); } } Copy code

The line that deserves the most attention is Class<?> cl = getProxyClass0(loader, intfs); this is where the proxy class is obtained, and the generated class is cached in the Java virtual machine.

import java.lang.reflect.InvocationHandler; import java.lang.reflect.Method; import java.lang.reflect.Proxy; import java.lang.reflect.UndeclaredThrowableException; import proxy.Person; public final class $ Proxy0 extends Proxy implements Person { private static Method m1; private static Method m2; private static Method m3; private static Method m0; /** *Note that this is the construction method of generating the proxy class. The method parameter is of the InvocationHandler type. Seeing this, is it a bit clear? *Why the invocation method of the proxy object is to execute the invoke method in the InvocationHandler, and the InvocationHandler holds another *An instance of the object being proxied, I can't help but wonder if it is...? That's right, that's what you think. * *super(paramInvocationHandler) is the construction method of calling the parent class Proxy. *The parent class holds: protected InvocationHandler h; *Proxy construction method: * protected Proxy(InvocationHandler h) { * Objects.requireNonNull(h); * this.h = h; *} * */ public $Proxy0(InvocationHandler paramInvocationHandler) throws { super (paramInvocationHandler); } //This static block was originally at the end, I brought it to the front to facilitate the description of static { try { //Look at what's in the static block here. Did you find the giveMoney method? Please remember that the name m3 obtained by giveMoney through reflection, don't care about the others m1 = Class.forName( "java.lang.Object" ).getMethod( "equals" , new Class[] {Class.forName( "java.lang. Object" ) }); m2 = Class.forName( "java.lang.Object" ).getMethod( "toString" , new Class[ 0 ]); m3 = Class.forName( "proxy.Person" ).getMethod( "giveMoney" , new Class[ 0 ]); m0 = Class.forName( "java.lang.Object" ).getMethod( "hashCode" , new Class[ 0 ]); return ; } catch (NoSuchMethodException localNoSuchMethodException) { throw new NoSuchMethodError(localNoSuchMethodException.getMessage()); } catch (ClassNotFoundException localClassNotFoundException) { throw new NoClassDefFoundError(localClassNotFoundException.getMessage()); } } /** * *The giveMoney method of the proxy object is called here, and the invoke method in the InvocationHandler is directly called, and m3 is passed in. *this.h.invoke(this, m3, null); This is simple and clear. *Come, think about it again, the proxy object holds an InvocationHandler object, and the InvocationHandler object holds an object to be proxied. * Then contact the invoke method in the InformationHandler. Yes, it is like that. */ public final void giveMoney() throws { try { this .h.invoke( this , m3, null ); return ; } catch ( Error |RuntimeException localError) { throw localError; } catch (Throwable localThrowable) { throw new UndeclaredThrowableException(localThrowable); } } //Note that in order to save space, the content of toString, hashCode, and equals methods are omitted here. The principle is the same as the giveMoney method. } Copy code

InvocationHandler can be viewed as a mediator: it holds the proxied object and calls that object's corresponding method inside invoke. Through this aggregation it keeps a reference to the proxied object, so every external call to invoke is ultimately turned into a call on the proxied object.

okhttp source code analysis

Dispatcher:

  • Synchronous

When the Dispatcher executes a synchronous Call, it simply adds it to the runningSyncCalls queue; the Call is not actually executed there but is run by the caller

/** Used by {@code Call#execute} to signal it is in-flight. */
synchronized void executed(RealCall call) {
    runningSyncCalls.add(call);
}
  • Asynchronous

The Call is added to a queue: if the number of calls currently running has reached maxRequests (64), or the number of running calls to the same host has reached maxRequestsPerHost (5), the call joins readyAsyncCalls and waits; otherwise it joins runningAsyncCalls and is executed

synchronized void enqueue(AsyncCall call) {
    if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
        runningAsyncCalls.add(call);
        executorService().execute(call);
    } else {
        readyAsyncCalls.add(call);
    }
}

How does an asynchronous call move from the ready queue to the running queue? finished() is called at the end of every call:

private <T> void finished(Deque<T> calls, T call, boolean promoteCalls) {
    int runningCallsCount;
    Runnable idleCallback;
    synchronized (this) {
        if (!calls.remove(call)) throw new AssertionError("Call wasn't in-flight!");
        // After each removal, promoteCalls() is executed to rotate the queues.
        if (promoteCalls) promoteCalls();
        runningCallsCount = runningCallsCount();
        idleCallback = this.idleCallback;
    }
    // When no calls are running, execute the idle callback
    if (runningCallsCount == 0 && idleCallback != null) {
        idleCallback.run();
    }
}

private void promoteCalls() {
    // If the number of running calls has reached maxRequests (64), do nothing
    if (runningAsyncCalls.size() >= maxRequests) return; // Already running max capacity.
    if (readyAsyncCalls.isEmpty()) return;               // No ready calls to promote.

    for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
        AsyncCall call = i.next();
        if (runningCallsForHost(call) < maxRequestsPerHost) {
            i.remove();
            runningAsyncCalls.add(call);
            executorService().execute(call);
        }
        if (runningAsyncCalls.size() >= maxRequests) return; // Reached max capacity.
    }
}

From this we can see that finished() first removes the call via calls.remove(call) and then executes promoteCalls(). In promoteCalls(): if the number of running calls has already reached maxRequests, nothing is done; otherwise readyAsyncCalls is traversed, each call is removed from it, added to runningAsyncCalls, and submitted to the executor, stopping as soon as runningAsyncCalls reaches maxRequests. So promoteCalls() is responsible for promoting ready calls to running calls. The actual request is executed in RealCall: synchronously in RealCall's execute(), asynchronously in AsyncCall's execute(); both call RealCall's getResponseWithInterceptorChain() to run the interceptor chain. A sketch of the two call styles follows.
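For orientation, a minimal sketch of the two call styles the Dispatcher manages, assuming a standard OkHttp 3.x setup; the URL is a placeholder.

```java
import java.io.IOException;
import okhttp3.Call;
import okhttp3.Callback;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class CallDemo {
    public static void main(String[] args) throws IOException {
        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder().url("https://example.com").build();

        // Synchronous: the Dispatcher only records the call in runningSyncCalls; it runs on this thread.
        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.code());
        }

        // Asynchronous: the Dispatcher enqueues the AsyncCall and runs it on its executor.
        client.newCall(request).enqueue(new Callback() {
            @Override public void onFailure(Call call, IOException e) { e.printStackTrace(); }
            @Override public void onResponse(Call call, Response response) throws IOException {
                System.out.println(response.code());
                response.close();
            }
        });
    }
}
```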

okhttp caching mechanism

CacheControl corresponds to the Cache-Control header in HTTP.

```java
public final class CacheControl {
    private final boolean noCache;
    private final boolean noStore;
    private final int maxAgeSeconds;
    private final int sMaxAgeSeconds;
    private final boolean isPrivate;
    private final boolean isPublic;
    private final boolean mustRevalidate;
    private final int maxStaleSeconds;
    private final int minFreshSeconds;
    private final boolean onlyIfCached;
    private final boolean noTransform;

    /**
     * Cache control request directives that require network validation of responses. Note that such
     * requests may be assisted by the cache via conditional GET requests.
     */
    public static final CacheControl FORCE_NETWORK = new Builder().noCache().build();

    /**
     * Cache control request directives that uses the cache only, even if the cached response is
     * stale. If the response isn't available in the cache or requires server validation, the call
     * will fail with a {@code 504 Unsatisfiable Request}.
     */
    public static final CacheControl FORCE_CACHE = new Builder()
            .onlyIfCached()
            .maxStale(Integer.MAX_VALUE, TimeUnit.SECONDS)
            .build();
}
```
  • noCache()

Corresponds to "no-cache". If it appears in a response header, it does not mean the response may not be cached; it means the client must revalidate with the server (an extra conditional GET) before using the cached copy. If it appears in a request header, the cached response will not be used and the request goes directly to the network.

  • noStore()

Corresponds to "no-store", if it appears in the response header, it indicates that the response cannot be cached

  • maxAge(int maxAge,TimeUnit timeUnit)

Corresponds to "max-age" and sets the maximum age (freshness lifetime) of a cached response. As long as the cached response has not exceeded this age, it can be served without making a network request.

  • maxStale(int maxStale,TimeUnit timeUnit)

Corresponds to "max-stale", the maximum staleness the client is willing to accept for a cached response. If this directive is not specified, a stale cached response will not be used.

  • minFresh(int minFresh,TimeUnit timeUnit)

Corresponds to "min-fresh": the cached response must remain fresh for at least minFresh more seconds. If the response will expire within minFresh seconds, the cached response cannot be used and a network request is needed.

  • onlyIfCached()

Corresponds to "only-if-cached", used in a request header to indicate that the request only accepts a cached response. If there is no usable response in the cache, a response with status code 504 is returned. A sketch of building these directives follows.
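A minimal sketch of attaching these directives to a request; the 60-second max-age and the URL are arbitrary examples.

```java
import java.util.concurrent.TimeUnit;
import okhttp3.CacheControl;
import okhttp3.Request;

public class CacheControlDemo {
    public static void main(String[] args) {
        // Accept a cached response that is at most 60 seconds old.
        CacheControl cacheControl = new CacheControl.Builder()
                .maxAge(60, TimeUnit.SECONDS)
                .build();

        Request request = new Request.Builder()
                .url("https://example.com")
                .cacheControl(cacheControl)
                .build();

        // The predefined constants cover the two extremes described above.
        Request fromCacheOnly = request.newBuilder().cacheControl(CacheControl.FORCE_CACHE).build();
        Request fromNetworkOnly = request.newBuilder().cacheControl(CacheControl.FORCE_NETWORK).build();
    }
}
```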

Detailed CacheStrategy class

Principles of Strategy

Different strategies are produced according to whether the resulting networkRequest and cacheResponse are null, as follows:

| networkRequest | cacheResponse | Result |
| --- | --- | --- |
| null | null | The request was only-if-cached but the cache is missing or expired; a 504 error is returned |
| null | non-null | No network request is made; the cached response is returned directly |
| non-null | null | The cache is missing or expired; the network is accessed directly |
| non-null | non-null | The header contains an ETag/Last-Modified tag; a conditional request is sent to the network |

The construction of the CacheStrategy class

```java
public Factory(long nowMillis, Request request, Response cacheResponse) {
    this.nowMillis = nowMillis;
    this.request = request;
    this.cacheResponse = cacheResponse;

    if (cacheResponse != null) {
        this.sentRequestMillis = cacheResponse.sentRequestAtMillis();
        this.receivedResponseMillis = cacheResponse.receivedResponseAtMillis();
        Headers headers = cacheResponse.headers();
        // Read the relevant header values from cacheResponse.
        for (int i = 0, size = headers.size(); i < size; i++) {
            String fieldName = headers.name(i);
            String value = headers.value(i);
            if ("Date".equalsIgnoreCase(fieldName)) {
                servedDate = HttpDate.parse(value);
                servedDateString = value;
            } else if ("Expires".equalsIgnoreCase(fieldName)) {
                expires = HttpDate.parse(value);
            } else if ("Last-Modified".equalsIgnoreCase(fieldName)) {
                lastModified = HttpDate.parse(value);
                lastModifiedString = value;
            } else if ("ETag".equalsIgnoreCase(fieldName)) {
                etag = value;
            } else if ("Age".equalsIgnoreCase(fieldName)) {
                ageSeconds = HttpHeaders.parseSeconds(value, -1);
            }
        }
    }
}

/**
 * Returns a strategy to satisfy {@code request} using a cached response {@code response}.
 */
public CacheStrategy get() {
    // Compute the candidate strategy.
    CacheStrategy candidate = getCandidate();
    // A network request is required, but the request is only-if-cached:
    if (candidate.networkRequest != null && request.cacheControl().onlyIfCached()) {
        // We're forbidden from using the network and the cache is insufficient.
        return new CacheStrategy(null, null);
    }
    return candidate;
}

/** Returns a strategy to use assuming the request can use the network. */
private CacheStrategy getCandidate() {
    // No cached response: return a strategy without a cached response.
    if (cacheResponse == null) {
        return new CacheStrategy(request, null);
    }

    // Drop the cached response if it's missing a required handshake (HTTPS).
    if (request.isHttps() && cacheResponse.handshake() == null) {
        return new CacheStrategy(request, null);
    }

    // If this response shouldn't have been stored, it should never be used
    // as a response source. This check should be redundant as long as the
    // persistence store is well-behaved and the rules are constant.
    if (!isCacheable(cacheResponse, request)) {
        return new CacheStrategy(request, null);
    }

    // Get the CacheControl of the request.
    CacheControl requestCaching = request.cacheControl();
    // If the request says no-cache, or carries conditional headers, do not use the cache.
    if (requestCaching.noCache() || hasConditions(request)) {
        return new CacheStrategy(request, null);
    }

    // Age of the cached response.
    long ageMillis = cacheResponseAge();
    // Freshness lifetime of the cached response.
    long freshMillis = computeFreshnessLifetime();

    // If the request specifies max-age, take the shorter of the two.
    if (requestCaching.maxAgeSeconds() != -1) {
        freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
    }

    long minFreshMillis = 0;
    // If the request specifies min-fresh, honour it.
    if (requestCaching.minFreshSeconds() != -1) {
        minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
    }

    long maxStaleMillis = 0;
    // CacheControl of the cached response.
    CacheControl responseCaching = cacheResponse.cacheControl();
    // If the response does not require revalidation and the request accepts staleness.
    if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
        maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
    }

    // The cached response can be used when:
    // age + min-fresh < freshness lifetime + accepted staleness
    if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
        Response.Builder builder = cacheResponse.newBuilder();
        if (ageMillis + minFreshMillis >= freshMillis) {
            builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
        }
        long oneDayMillis = 24 * 60 * 60 * 1000L;
        if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
            builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
        }
        // Use the cached response, no network request.
        return new CacheStrategy(null, builder.build());
    }

    // Otherwise build a conditional request. If the condition is satisfied,
    // the response body will not be transmitted.
    String conditionName;
    String conditionValue;
    if (etag != null) {
        conditionName = "If-None-Match";
        conditionValue = etag;
    } else if (lastModified != null) {
        conditionName = "If-Modified-Since";
        conditionValue = lastModifiedString;
    } else if (servedDate != null) {
        conditionName = "If-Modified-Since";
        conditionValue = servedDateString;
    } else {
        return new CacheStrategy(request, null); // No condition! Make a regular request.
    }

    Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
    Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);

    Request conditionalRequest = request.newBuilder()
            .headers(conditionalRequestHeaders.build())
            .build();
    // Return the conditional-request strategy, keeping the cached response.
    return new CacheStrategy(conditionalRequest, cacheResponse);
}
```

From the analysis above we can see that the cache strategy implemented by OkHttp is essentially a large set of if/else judgments, which follow the rules laid down in the RFC standard. With that covered, we can move on to the CacheInterceptor class.

CacheInterceptor class

```java
// CacheInterceptor.java
@Override
public Response intercept(Chain chain) throws IOException {
    // If a cache is configured, look up a candidate response; it may be null.
    Response cacheCandidate = cache != null ? cache.get(chain.request()) : null;

    long now = System.currentTimeMillis();

    // Compute the cache strategy.
    CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
    // The request in the strategy.
    Request networkRequest = strategy.networkRequest;
    // The cached response in the strategy.
    Response cacheResponse = strategy.cacheResponse;

    // Track cache statistics.
    if (cache != null) {
        cache.trackResponse(strategy);
    }

    // A cache candidate exists but the strategy decided not to use it: close it.
    if (cacheCandidate != null && cacheResponse == null) {
        closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
    }

    // If we're forbidden from using the network and the cache is insufficient, fail.
    if (networkRequest == null && cacheResponse == null) {
        return new Response.Builder()
                .request(chain.request())
                .protocol(Protocol.HTTP_1_1)
                .code(504)
                .message("Unsatisfiable Request (only-if-cached)")
                .body(Util.EMPTY_RESPONSE)
                .sentRequestAtMillis(-1L)
                .receivedResponseAtMillis(System.currentTimeMillis())
                .build();
    }

    // The cache is valid and the network is not needed: we're done.
    if (networkRequest == null) {
        return cacheResponse.newBuilder()
                .cacheResponse(stripBody(cacheResponse))
                .build();
    }

    // The cache is not sufficient: execute the next interceptor.
    Response networkResponse = null;
    try {
        networkResponse = chain.proceed(networkRequest);
    } finally {
        // If we're crashing on I/O or otherwise, don't leak the cache body.
        if (networkResponse == null && cacheCandidate != null) {
            closeQuietly(cacheCandidate.body());
        }
    }

    // If we have a cached response too, then we're doing a conditional get.
    if (cacheResponse != null) {
        if (networkResponse.code() == HTTP_NOT_MODIFIED) {
            Response response = cacheResponse.newBuilder()
                    .headers(combine(cacheResponse.headers(), networkResponse.headers()))
                    .sentRequestAtMillis(networkResponse.sentRequestAtMillis())
                    .receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
                    .cacheResponse(stripBody(cacheResponse))
                    .networkResponse(stripBody(networkResponse))
                    .build();
            networkResponse.body().close();

            // Update the cache after combining headers but before stripping the
            // Content-Encoding header (as performed by initContentStream()).
            cache.trackConditionalCacheHit();
            cache.update(cacheResponse, response);
            return response;
        } else {
            closeQuietly(cacheResponse.body());
        }
    }

    // Use the network response.
    Response response = networkResponse.newBuilder()
            .cacheResponse(stripBody(cacheResponse))
            .networkResponse(stripBody(networkResponse))
            .build();

    if (cache != null) {
        // Write the response to the local cache.
        if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
            // Offer this request to the cache.
            CacheRequest cacheRequest = cache.put(response);
            return cacheWritingResponse(cacheRequest, response);
        }

        if (HttpMethod.invalidatesCache(networkRequest.method())) {
            try {
                cache.remove(networkRequest);
            } catch (IOException ignored) {
                // The cache cannot be written.
            }
        }
    }

    return response;
}
```

The process above in brief:

  1. If a cache is configured, a candidate response is fetched from it once; it may be null
  2. The cache strategy is computed
  3. Cache statistics are tracked (trackResponse)
  4. If the network is forbidden (per the cache strategy) and the cache is unusable, a 504 response is returned directly
  5. If the cache is valid and the network is not needed, the cached response is returned
  6. Otherwise the cache is insufficient and the next interceptor is executed
  7. If a local cached response exists, the result of the conditional request decides which response to use
  8. Otherwise the network response is used
  9. The response is written back to the local cache
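For completeness, a minimal sketch of enabling the Cache that this interceptor reads from and writes to; the directory name and the 10 MB size are arbitrary choices.

```java
import java.io.File;
import okhttp3.Cache;
import okhttp3.OkHttpClient;

public class CacheSetupDemo {
    public static OkHttpClient buildClient(File cacheDir) {
        // 10 MB disk cache; CacheInterceptor consults it before and after hitting the network.
        Cache cache = new Cache(new File(cacheDir, "http_cache"), 10L * 1024 * 1024);
        return new OkHttpClient.Builder()
                .cache(cache)
                .build();
    }
}
```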

ARouter source code analysis

  • When the @Route annotation is used, the corresponding annotation processor generates the routing table classes at compile time
  • When ARouter.init(Context) is called, the fully qualified names of the generated classes are gathered into the routerMap, the classes are instantiated via reflection, and their loadInto methods add each group to the collections below
  • The Warehouse class mainly contains two important maps: a group table (key = group name, value = routing group metadata) and a route table (key = path, value = RouteMeta with the corresponding class information)
```java
// Warehouse, simplified
class Warehouse {
    // Group index: group name -> class that can load that group.
    static Map<String, Class<? extends IRouteGroup>> groupsIndex = new HashMap<>();
    // Routes inside the loaded groups: path -> RouteMeta.
    static Map<String, RouteMeta> routes = new HashMap<>();
}
```
  • When navigation() is called, the route table for the corresponding group is filled in if needed, the RouteMeta is looked up by path, and an Intent is finally built to jump to the target Activity
```java
// The navigation logic in _ARouter, condensed
final class _ARouter {
    protected Object _navigation(final Context context, final Postcard postcard,
                                 final int requestCode, final NavigationCallback callback) {
        switch (postcard.getType()) {
            case ACTIVITY:
                // Build the intent.
                final Intent intent = new Intent(currentContext, postcard.getDestination());
                intent.putExtras(postcard.getExtras());

                // Set flags.
                int flags = postcard.getFlags();
                if (-1 != flags) {
                    intent.setFlags(flags);
                } else if (!(currentContext instanceof Activity)) {
                    // Non-activity context needs one more flag.
                    intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
                }

                // Navigation in main looper.
                new Handler(Looper.getMainLooper()).post(new Runnable() {
                    @Override
                    public void run() {
                        if (requestCode > 0) { // Need start for result
                            ActivityCompat.startActivityForResult((Activity) currentContext, intent,
                                    requestCode, postcard.getOptionsBundle());
                        } else {
                            ActivityCompat.startActivity(currentContext, intent, postcard.getOptionsBundle());
                        }

                        if ((0 != postcard.getEnterAnim() || 0 != postcard.getExitAnim())
                                && currentContext instanceof Activity) { // Old version.
                            ((Activity) currentContext).overridePendingTransition(
                                    postcard.getEnterAnim(), postcard.getExitAnim());
                        }

                        if (null != callback) { // Navigation over.
                            callback.onArrival(postcard);
                        }
                    }
                });
                break;
            case PROVIDER:
                return postcard.getProvider();
            case BOARDCAST:
            case CONTENT_PROVIDER:
            case FRAGMENT:
                Class fragmentMeta = postcard.getDestination();
                try {
                    Object instance = fragmentMeta.getConstructor().newInstance();
                    if (instance instanceof Fragment) {
                        ((Fragment) instance).setArguments(postcard.getExtras());
                    } else if (instance instanceof android.support.v4.app.Fragment) {
                        ((android.support.v4.app.Fragment) instance).setArguments(postcard.getExtras());
                    }
                    return instance;
                } catch (Exception ex) {
                    logger.error(Consts.TAG, "Fetch fragment instance error, "
                            + TextUtils.formatStackTrace(ex.getStackTrace()));
                }
            case METHOD:
            case SERVICE:
            default:
                return null;
        }
        return null;
    }
}
```

Using a route interceptor: if jumping to a page requires the user to be logged in, an interceptor can handle this uniformly and continue to the target page after login completes, somewhat like AOP. A minimal usage sketch follows.
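A minimal sketch of the pieces described above, assuming the standard ARouter annotations and interfaces; the paths, the priority value, and the isLoggedIn() check are illustrative, and each class would normally live in its own file.

```java
import android.content.Context;
import com.alibaba.android.arouter.facade.Postcard;
import com.alibaba.android.arouter.facade.annotation.Interceptor;
import com.alibaba.android.arouter.facade.annotation.Route;
import com.alibaba.android.arouter.facade.callback.InterceptorCallback;
import com.alibaba.android.arouter.facade.template.IInterceptor;
import com.alibaba.android.arouter.launcher.ARouter;

// Declaring a route; the annotation processor generates the group/route table entries at compile time.
@Route(path = "/order/detail")
public class OrderDetailActivity extends android.app.Activity { }

// Navigating: looks up the RouteMeta by path in Warehouse.routes and starts the Activity.
class Jump {
    static void openOrderDetail() {
        ARouter.getInstance()
                .build("/order/detail")
                .withString("orderId", "42")   // extras go into the Postcard
                .navigation();
    }
}

// A login interceptor: navigations pass through it before the jump happens.
@Interceptor(priority = 8, name = "login")
class LoginInterceptor implements IInterceptor {
    @Override
    public void process(Postcard postcard, InterceptorCallback callback) {
        if (isLoggedIn() || !"/order/detail".equals(postcard.getPath())) {
            callback.onContinue(postcard);                                    // let the navigation proceed
        } else {
            callback.onInterrupt(new RuntimeException("login required"));    // block; app can redirect to login
        }
    }

    @Override
    public void init(Context context) { }

    private boolean isLoggedIn() { return false; } // placeholder check
}
```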

EventBus source code analysis

Mainly analyze three parts:

1. Registration: EventBus.getDefault().register(obj)

```java
public void register(Object subscriber) {
    // Get the Class object of the registering object; if registration happens in MainActivity,
    // the subscriber is the MainActivity instance.
    Class<?> subscriberClass = subscriber.getClass();
    // Step 1
    List<SubscriberMethod> subscriberMethods = subscriberMethodFinder.findSubscriberMethods(subscriberClass);
    // Step 2
    synchronized (this) {
        for (SubscriberMethod subscriberMethod : subscriberMethods) {
            subscribe(subscriber, subscriberMethod);
        }
    }
}
```

Step 1 is to get all the methods annotated by @Subscribe in the currently registered object

  • SubscriberMethod encapsulates the following information: the Method itself, the ThreadMode thread model, the eventType class object, the priority for receiving the event, and sticky, which indicates whether it is a sticky event
  • The method list is first looked up in a cache; if present it is returned directly, otherwise different lookup paths are taken depending on the boolean value of ignoreGeneratedIndex
  • If the resulting method set is still empty, an exception is thrown to remind the user that neither the registered class nor its superclasses contain public methods annotated with @Subscribe (as an aside, this exception is often seen in release builds when EventBus is not excluded from obfuscation, because the method names are obfuscated and EventBus cannot find them). Finally the method set is stored in the cache and returned

Step 2 deals with the following issues:

  • Has this method already been registered for this event type (and should identical method signatures be taken into account)?
  • When one registered object contains several methods subscribed to the same event, how are these methods stored? A minimal subscriber sketch follows this list.
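A minimal sketch of the registration side described above; MessageEvent and the lifecycle hook names are illustrative.

```java
import org.greenrobot.eventbus.EventBus;
import org.greenrobot.eventbus.Subscribe;
import org.greenrobot.eventbus.ThreadMode;

// The event is just a plain class.
class MessageEvent {
    final String text;
    MessageEvent(String text) { this.text = text; }
}

class ChatScreen {
    // findSubscriberMethods() discovers this public @Subscribe method via reflection (or the generated index).
    @Subscribe(threadMode = ThreadMode.MAIN)
    public void onMessageEvent(MessageEvent event) {
        System.out.println("received: " + event.text);
    }

    void onStart() {
        EventBus.getDefault().register(this);   // Step 1 + Step 2 run here
    }

    void onStop() {
        EventBus.getDefault().unregister(this); // avoid leaks and stale subscriptions
    }
}
```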

2. Posting: EventBus.getDefault().post(xxx)

```java
public void post(Object event) {
    PostingThreadState postingState = currentPostingThreadState.get();
    List<Object> eventQueue = postingState.eventQueue;
    // Add the event to the queue.
    eventQueue.add(event);

    if (!postingState.isPosting) {
        postingState.isMainThread = Looper.getMainLooper() == Looper.myLooper();
        postingState.isPosting = true;
        if (postingState.canceled) {
            throw new EventBusException("Internal error. Abort state was not reset");
        }
        try {
            // Process the queue until it is empty.
            while (!eventQueue.isEmpty()) {
                postSingleEvent(eventQueue.remove(0), postingState);
            }
        } finally {
            postingState.isPosting = false;
            postingState.isMainThread = false;
        }
    }
}
```
  • The current event is added to the eventQueue, and a while loop processes each posted event in turn
  • Get event type
  • Get subscribers and subscription methods based on event type
  • The subscription is processed according to whether the posting thread is the main thread and the thread model of the subscription method (a simplified dispatch sketch follows this list)
    • The subscription method is the main thread
      • The sender is the main thread, directly reflecting the execution of the subscription method
      • The sender is a child thread, and the subscription method is executed by reflection through the handler message mechanism
    • The subscription method is a child thread
      • The sender is the main thread, and the reflection subscription method is executed through the thread pool
      • The sender is a child thread, directly reflecting the execution of the subscription method
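A simplified, self-contained sketch of the dispatch decision in the list above. ThreadMode, Invoker, and the executors here are stand-ins for EventBus internals (the real dispatch lives in postToSubscription() and uses a Handler for the main thread), so treat this as an illustration of the rules, not the actual source.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal stand-ins for EventBus internals, for illustration only.
enum ThreadMode { POSTING, MAIN, BACKGROUND, ASYNC }

interface Invoker { void invoke(Object event); } // stands in for the reflective method.invoke(subscriber, event)

class DispatchSketch {
    private final ExecutorService pool = Executors.newCachedThreadPool();
    // Pretend "main thread" executor; on Android this would be a Handler on the main Looper.
    private final ExecutorService mainPoster = Executors.newSingleThreadExecutor();

    void dispatch(ThreadMode mode, Invoker subscriberMethod, Object event, boolean postedOnMainThread) {
        switch (mode) {
            case POSTING:
                subscriberMethod.invoke(event);                                // run on the posting thread
                break;
            case MAIN:
                if (postedOnMainThread) subscriberMethod.invoke(event);        // already on main: call directly
                else mainPoster.execute(() -> subscriberMethod.invoke(event)); // hop to "main"
                break;
            case BACKGROUND:
                if (postedOnMainThread) pool.execute(() -> subscriberMethod.invoke(event)); // hop off main
                else subscriberMethod.invoke(event);                           // already off main: call directly
                break;
            case ASYNC:
                pool.execute(() -> subscriberMethod.invoke(event));            // always a separate thread
                break;
        }
    }
}
```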

Glide execution process and three-level cache

glide execution flow

  • Glide.with(this), get a requestManager object
    • Passing in an Application parameter: the lifecycle of the Application object is the lifecycle of the whole app, so Glide needs no special handling; loading is automatically synchronized with the application lifecycle, and if the application is closed the load terminates with it.
    • Passing in a non-Application parameter: whether Glide.with() receives an Activity, a FragmentActivity, a v4 Fragment, or an app-package Fragment, the flow is the same: a hidden Fragment is added to the current Activity. Why a hidden Fragment? Because Glide needs to know the lifecycle of the load. The hidden Fragment's lifecycle is synchronized with the Activity's, so when the Activity is destroyed the Fragment observes it and Glide can stop the image load. A minimal usage sketch follows this list.
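A minimal usage sketch for the flow above; the URL and the ImageView setup are placeholders.

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.ImageView;
import com.bumptech.glide.Glide;

public class CoverActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        ImageView cover = new ImageView(this);
        setContentView(cover);

        // with(this) binds the request to this Activity's lifecycle via a hidden Fragment;
        // load() builds the request, into() kicks off the memory/disk/network lookup.
        Glide.with(this)
                .load("https://example.com/cover.jpg")
                .into(cover);
    }
}
```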

Caching strategy

Memory cache-->Disk cache-->Network load

Process

The cache key is the unique identifier used for both the memory cache and the disk cache

```java
// Create the EngineKey object.
EngineKey key = keyFactory.buildKey(model, signature, width, height,
        transformations, resourceClass, transcodeClass, options);
```

Here the buildKey method of keyFactory creates the EngineKey object, passing in the data given to load() (a String, URL, etc.), the width and height of the image, the signature, and the option parameters.

EngineKey overrides equals and hashCode so that two EngineKey objects are considered the same only when every parameter passed into them is identical.

Memory cache

Cache principle: weak reference mechanism and LruCache algorithm

  • Weak reference mechanism: When the JVM performs garbage collection, regardless of whether the current memory is sufficient, it will reclaim the objects associated with the weak reference
  • LruCache algorithm: LinkedHashMap is used internally to store external cache objects in a strongly referenced manner. When the cache is full, LruCache will remove the cache objects used earlier, and then add new cache objects

Cache implementation: images that are currently in use are cached with the weak-reference mechanism, and images that are no longer in use are cached with LruCache. A minimal LruCache sketch follows.
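A minimal sketch of an LruCache sized by bitmap bytes, the idea the second half of this cache builds on. It uses android.util.LruCache; the 1/8-of-heap budget is a common rule of thumb, not Glide's actual default.

```java
import android.graphics.Bitmap;
import android.util.LruCache;

class BitmapMemoryCache {
    private final LruCache<String, Bitmap> cache;

    BitmapMemoryCache() {
        int maxKb = (int) (Runtime.getRuntime().maxMemory() / 1024 / 8); // 1/8 of the heap, in KB
        cache = new LruCache<String, Bitmap>(maxKb) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                return value.getByteCount() / 1024; // measure entries in KB, not by count
            }
        };
    }

    void put(String key, Bitmap bitmap) { cache.put(key, bitmap); }

    Bitmap get(String key) { return cache.get(key); } // moves the entry to the most-recently-used end
}
```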

Why use weak references www.it610.com/article/128...

  • This protects resources that are currently in use from being evicted by the LruCache algorithm
  • Holding in-use resources with weak references does not prevent the system from reclaiming resources that are no longer referenced

Disk cache

RxJava2 source code analysis

Our aims:

  1. Know how the source (Observable) sends the data.
  2. Know how the end point (Observer) receives the data.
  3. Know when the source is associated with the destination
  4. Know how thread scheduling is implemented
  5. Know how the operator is implemented

summary:

  • In the subscribeActual() method, the source and destination are associated
  • source.subscribe(parent); When this code is executed, the ObservableEmitter is used to send data to the Observer from the ObservableOnSubscribe. That is, the data is pushed from the source to the end.
  • In CreateEmitter, only when the relationship between Observable and Observer is not dispose, will the Observer's onXXXX() method be called back
  • Observer's onComplete() and onError() can each be delivered at most once, because CreateEmitter automatically calls dispose() after delivering either of them; together with the previous point, this explains the next behaviour
  • If onError() is emitted first, a later onComplete() is simply not delivered; emitting onError() after onComplete(), however, will crash
  • One more thing to note: onSubscribe() is called back on the thread where subscribe() is executed and is not affected by thread scheduling. A minimal create/subscribe sketch follows this list.
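A minimal sketch of the create/subscribe wiring summarized above; the emitted values are placeholders.

```java
import io.reactivex.Observable;
import io.reactivex.ObservableEmitter;
import io.reactivex.ObservableOnSubscribe;
import io.reactivex.Observer;
import io.reactivex.disposables.Disposable;

public class CreateDemo {
    public static void main(String[] args) {
        // The source: data is pushed through the emitter inside subscribe(emitter).
        Observable<String> source = Observable.create(new ObservableOnSubscribe<String>() {
            @Override
            public void subscribe(ObservableEmitter<String> emitter) {
                emitter.onNext("hello");
                emitter.onComplete();
                emitter.onNext("ignored"); // after onComplete the emitter is disposed; nothing is delivered
            }
        });

        // The end point: onSubscribe runs on the thread that calls subscribe().
        source.subscribe(new Observer<String>() {
            @Override public void onSubscribe(Disposable d) { System.out.println("onSubscribe"); }
            @Override public void onNext(String s) { System.out.println("onNext: " + s); }
            @Override public void onError(Throwable e) { System.out.println("onError: " + e); }
            @Override public void onComplete() { System.out.println("onComplete"); }
        });
    }
}
```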

Thread scheduling subscribeOn

  • Return an ObservableSubscribeOn wrapper object
  • When the object returned in the previous step is subscribed, the subscribeActual() method in this class is called back, and the thread is immediately switched to the corresponding Schedulers.xxx() thread.
  • In the switched thread, source.subscribe(parent) is executed to subscribe to the upstream (source) Observable
  • The upstream (source) Observable then starts sending data; sending data is simply calling the corresponding onXXX() methods of the downstream observer, so the emission runs on the switched thread. A minimal subscribeOn sketch follows this list.
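A minimal sketch showing the effect of subscribeOn described above. Schedulers.io() handles the emission and observeOn(Schedulers.single()) the delivery; on Android the delivery scheduler would typically be AndroidSchedulers.mainThread() from RxAndroid.

```java
import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;

public class SubscribeOnDemo {
    public static void main(String[] args) throws InterruptedException {
        Observable.<Integer>create(emitter -> {
                    // Runs on an io() thread: subscribeActual() switched threads before calling source.subscribe(parent).
                    System.out.println("emitting on " + Thread.currentThread().getName());
                    emitter.onNext(1);
                    emitter.onComplete();
                })
                .subscribeOn(Schedulers.io())    // where the source emits
                .observeOn(Schedulers.single())  // where the observer's onXXX() methods run
                .subscribe(v -> System.out.println("received " + v + " on " + Thread.currentThread().getName()));

        Thread.sleep(500); // keep the JVM alive long enough to see the async output
    }
}
```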