How the @KafkaListener annotation works: a source-level analysis
Preface
This article mainly examines the implementation mechanism behind the @KafkaListener annotation.
KafkaListener
org/springframework/kafka/annotation/KafkaListener.java
@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.ANNOTATION_TYPE })
@Retention(RetentionPolicy.RUNTIME)
@MessageMapping
@Documented
@Repeatable(KafkaListeners.class)
public @interface KafkaListener {

    String id() default "";

    String containerFactory() default "";

    String[] topics() default {};

    String topicPattern() default "";

    TopicPartition[] topicPartitions() default {};

    String containerGroup() default "";

    String errorHandler() default "";

    String groupId() default "";

    boolean idIsGroup() default true;

    String clientIdPrefix() default "";

    String beanRef() default "__listener";

    String concurrency() default "";

    String autoStartup() default "";

    String[] properties() default {};

}
The @KafkaListener annotation defines attributes such as id, topics, and groupId.
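For reference, a minimal usage sketch (the topic, group id, and class below are made up for illustration) showing how a few of these attributes are typically filled in:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    // When groupId is empty and idIsGroup() is true (the default), the id would
    // also be used as the consumer group; here groupId is set explicitly.
    @KafkaListener(id = "orderListener", topics = "orders", groupId = "order-service",
            clientIdPrefix = "order", concurrency = "3")
    public void onOrder(ConsumerRecord<String, String> record) {
        System.out.println("received " + record.topic() + "-" + record.partition()
                + "@" + record.offset() + ": " + record.value());
    }
}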
KafkaListenerAnnotationBeanPostProcessor
org/springframework/kafka/annotation/KafkaListenerAnnotationBeanPostProcessor.java
public class KafkaListenerAnnotationBeanPostProcessor<K, V>
        implements BeanPostProcessor, Ordered, BeanFactoryAware, SmartInitializingSingleton {

    private final KafkaListenerEndpointRegistrar registrar = new KafkaListenerEndpointRegistrar();

    @Override
    public int getOrder() {
        return LOWEST_PRECEDENCE;
    }

    @Override
    public void setBeanFactory(BeanFactory beanFactory) {
        this.beanFactory = beanFactory;
        if (beanFactory instanceof ConfigurableListableBeanFactory) {
            this.resolver = ((ConfigurableListableBeanFactory) beanFactory).getBeanExpressionResolver();
            this.expressionContext = new BeanExpressionContext((ConfigurableListableBeanFactory) beanFactory,
                    this.listenerScope);
        }
    }

    @Override
    public void afterSingletonsInstantiated() {
        this.registrar.setBeanFactory(this.beanFactory);

        if (this.beanFactory instanceof ListableBeanFactory) {
            Map<String, KafkaListenerConfigurer> instances =
                    ((ListableBeanFactory) this.beanFactory).getBeansOfType(KafkaListenerConfigurer.class);
            for (KafkaListenerConfigurer configurer : instances.values()) {
                configurer.configureKafkaListeners(this.registrar);
            }
        }

        if (this.registrar.getEndpointRegistry() == null) {
            if (this.endpointRegistry == null) {
                Assert.state(this.beanFactory != null,
                        "BeanFactory must be set to find endpoint registry by bean name");
                this.endpointRegistry = this.beanFactory.getBean(
                        KafkaListenerConfigUtils.KAFKA_LISTENER_ENDPOINT_REGISTRY_BEAN_NAME,
                        KafkaListenerEndpointRegistry.class);
            }
            this.registrar.setEndpointRegistry(this.endpointRegistry);
        }

        if (this.defaultContainerFactoryBeanName != null) {
            this.registrar.setContainerFactoryBeanName(this.defaultContainerFactoryBeanName);
        }

        // Set the custom handler method factory once resolved by the configurer
        MessageHandlerMethodFactory handlerMethodFactory = this.registrar.getMessageHandlerMethodFactory();
        if (handlerMethodFactory != null) {
            this.messageHandlerMethodFactory.setMessageHandlerMethodFactory(handlerMethodFactory);
        }
        else {
            addFormatters(this.messageHandlerMethodFactory.defaultFormattingConversionService);
        }

        // Actually register all listeners
        this.registrar.afterPropertiesSet();
    }

    @Override
    public Object postProcessAfterInitialization(final Object bean, final String beanName) throws BeansException {
        if (!this.nonAnnotatedClasses.contains(bean.getClass())) {
            Class<?> targetClass = AopUtils.getTargetClass(bean);
            Collection<KafkaListener> classLevelListeners = findListenerAnnotations(targetClass);
            final boolean hasClassLevelListeners = classLevelListeners.size() > 0;
            final List<Method> multiMethods = new ArrayList<>();
            Map<Method, Set<KafkaListener>> annotatedMethods = MethodIntrospector.selectMethods(targetClass,
                    (MethodIntrospector.MetadataLookup<Set<KafkaListener>>) method -> {
                        Set<KafkaListener> listenerMethods = findListenerAnnotations(method);
                        return (!listenerMethods.isEmpty() ? listenerMethods : null);
                    });
            if (hasClassLevelListeners) {
                Set<Method> methodsWithHandler = MethodIntrospector.selectMethods(targetClass,
                        (ReflectionUtils.MethodFilter) method ->
                                AnnotationUtils.findAnnotation(method, KafkaHandler.class) != null);
                multiMethods.addAll(methodsWithHandler);
            }
            if (annotatedMethods.isEmpty()) {
                this.nonAnnotatedClasses.add(bean.getClass());
                if (this.logger.isTraceEnabled()) {
                    this.logger.trace("No @KafkaListener annotations found on bean type: " + bean.getClass());
                }
            }
            else {
                // Non-empty set of methods
                for (Map.Entry<Method, Set<KafkaListener>> entry : annotatedMethods.entrySet()) {
                    Method method = entry.getKey();
                    for (KafkaListener listener : entry.getValue()) {
                        processKafkaListener(listener, method, bean, beanName);
                    }
                }
                if (this.logger.isDebugEnabled()) {
                    this.logger.debug(annotatedMethods.size() + " @KafkaListener methods processed on bean '"
                            + beanName + "': " + annotatedMethods);
                }
            }
            if (hasClassLevelListeners) {
                processMultiMethodListeners(classLevelListeners, multiMethods, bean, beanName);
            }
        }
        return bean;
    }

}
KafkaListenerAnnotationBeanPostProcessor implements the BeanPostProcessor, Ordered, BeanFactoryAware, and SmartInitializingSingleton interfaces; its getOrder method returns LOWEST_PRECEDENCE.
Its afterSingletonsInstantiated method (from the SmartInitializingSingleton interface) first looks up all KafkaListenerConfigurer beans and invokes configureKafkaListeners(registrar) on each of them, then resolves the endpoint registry and the default container factory for the registrar, and finally calls registrar.afterPropertiesSet().
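For illustration, a minimal sketch of the KafkaListenerConfigurer hook mentioned above (the class and bean names are made up); whatever is set on the registrar here is picked up before registrar.afterPropertiesSet() registers the endpoints:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListenerConfigurer;
import org.springframework.kafka.config.KafkaListenerEndpointRegistrar;
import org.springframework.messaging.handler.annotation.support.DefaultMessageHandlerMethodFactory;

@Configuration
public class ListenerConfig implements KafkaListenerConfigurer {

    @Override
    public void configureKafkaListeners(KafkaListenerEndpointRegistrar registrar) {
        // Called from afterSingletonsInstantiated() above.
        registrar.setMessageHandlerMethodFactory(kafkaHandlerMethodFactory());
    }

    @Bean
    public DefaultMessageHandlerMethodFactory kafkaHandlerMethodFactory() {
        // A custom validator or extra argument resolvers could be configured here.
        return new DefaultMessageHandlerMethodFactory();
    }
}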
Its postProcessAfterInitialization method (from the BeanPostProcessor interface) collects the @KafkaListener-annotated methods of each bean and then invokes processKafkaListener for every such method; class-level @KafkaListener annotations combined with @KafkaHandler methods are handled by processMultiMethodListeners.
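The scan above recognizes both the method-level and the class-level shape; a rough sketch of the class-level form, where every @KafkaHandler method ends up in multiMethods and is processed by processMultiMethodListeners (topic and payload types are illustrative):

import org.springframework.kafka.annotation.KafkaHandler;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@KafkaListener(id = "multiHandler", topics = "events")
public class EventHandlers {

    @KafkaHandler
    public void onString(String event) {
        // handles String payloads
    }

    @KafkaHandler(isDefault = true)
    public void onOther(Object event) {
        // fallback for any other payload type
    }
}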
processKafkaListener
protected void processKafkaListener(KafkaListener kafkaListener, Method method, Object bean, String beanName) {
    Method methodToUse = checkProxy(method, bean);
    MethodKafkaListenerEndpoint<K, V> endpoint = new MethodKafkaListenerEndpoint<>();
    endpoint.setMethod(methodToUse);
    processListener(endpoint, kafkaListener, bean, methodToUse, beanName);
}

protected void processListener(MethodKafkaListenerEndpoint<?, ?> endpoint, KafkaListener kafkaListener,
        Object bean, Object adminTarget, String beanName) {

    String beanRef = kafkaListener.beanRef();
    if (StringUtils.hasText(beanRef)) {
        this.listenerScope.addListener(beanRef, bean);
    }
    endpoint.setBean(bean);
    endpoint.setMessageHandlerMethodFactory(this.messageHandlerMethodFactory);
    endpoint.setId(getEndpointId(kafkaListener));
    endpoint.setGroupId(getEndpointGroupId(kafkaListener, endpoint.getId()));
    endpoint.setTopicPartitions(resolveTopicPartitions(kafkaListener));
    endpoint.setTopics(resolveTopics(kafkaListener));
    endpoint.setTopicPattern(resolvePattern(kafkaListener));
    endpoint.setClientIdPrefix(resolveExpressionAsString(kafkaListener.clientIdPrefix(), "clientIdPrefix"));
    String group = kafkaListener.containerGroup();
    if (StringUtils.hasText(group)) {
        Object resolvedGroup = resolveExpression(group);
        if (resolvedGroup instanceof String) {
            endpoint.setGroup((String) resolvedGroup);
        }
    }
    String concurrency = kafkaListener.concurrency();
    if (StringUtils.hasText(concurrency)) {
        endpoint.setConcurrency(resolveExpressionAsInteger(concurrency, "concurrency"));
    }
    String autoStartup = kafkaListener.autoStartup();
    if (StringUtils.hasText(autoStartup)) {
        endpoint.setAutoStartup(resolveExpressionAsBoolean(autoStartup, "autoStartup"));
    }
    resolveKafkaProperties(endpoint, kafkaListener.properties());

    KafkaListenerContainerFactory<?> factory = null;
    String containerFactoryBeanName = resolve(kafkaListener.containerFactory());
    if (StringUtils.hasText(containerFactoryBeanName)) {
        Assert.state(this.beanFactory != null, "BeanFactory must be set to obtain container factory by bean name");
        try {
            factory = this.beanFactory.getBean(containerFactoryBeanName, KafkaListenerContainerFactory.class);
        }
        catch (NoSuchBeanDefinitionException ex) {
            throw new BeanInitializationException("Could not register Kafka listener endpoint on [" + adminTarget
                    + "] for bean " + beanName + ", no " + KafkaListenerContainerFactory.class.getSimpleName()
                    + " with id '" + containerFactoryBeanName + "' was found in the application context", ex);
        }
    }

    endpoint.setBeanFactory(this.beanFactory);
    String errorHandlerBeanName = resolveExpressionAsString(kafkaListener.errorHandler(), "errorHandler");
    if (StringUtils.hasText(errorHandlerBeanName)) {
        endpoint.setErrorHandler(this.beanFactory.getBean(errorHandlerBeanName, KafkaListenerErrorHandler.class));
    }
    this.registrar.registerEndpoint(endpoint, factory);
    if (StringUtils.hasText(beanRef)) {
        this.listenerScope.removeListener(beanRef);
    }
}
The processKafkaListener method wraps the annotated method in a MethodKafkaListenerEndpoint and then calls processListener, which mainly copies the information from the @KafkaListener annotation onto the MethodKafkaListenerEndpoint, determines the KafkaListenerContainerFactory to use, and finally calls registrar.registerEndpoint(endpoint, factory).
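Since processListener runs most attributes through the bean expression resolver (resolveExpressionAsString/Integer/Boolean above), property placeholders and SpEL are accepted in the annotation; a sketch with made-up property names:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class PaymentListener {

    @KafkaListener(id = "paymentListener",
            topics = "${app.payment-topic}",               // resolved at registration time
            concurrency = "${app.payment-concurrency:2}",  // resolveExpressionAsInteger
            autoStartup = "${app.payment-autostart:true}", // resolveExpressionAsBoolean
            properties = "max.poll.records=100")           // merged into the consumer config
    public void onPayment(String payload) {
        // ...
    }
}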
registrar.registerEndpoint
org/springframework/kafka/config/KafkaListenerEndpointRegistrar.java
/**
 * Register a new {@link KafkaListenerEndpoint} alongside the
 * {@link KafkaListenerContainerFactory} to use to create the underlying container.
 * <p>The {@code factory} may be {@code null} if the default factory has to be
 * used for that endpoint.
 * @param endpoint the {@link KafkaListenerEndpoint} instance to register.
 * @param factory the {@link KafkaListenerContainerFactory} to use.
 */
public void registerEndpoint(KafkaListenerEndpoint endpoint, KafkaListenerContainerFactory<?> factory) {
    Assert.notNull(endpoint, "Endpoint must be set");
    Assert.hasText(endpoint.getId(), "Endpoint id must be set");
    // Factory may be null, we defer the resolution right before actually creating the container
    KafkaListenerEndpointDescriptor descriptor = new KafkaListenerEndpointDescriptor(endpoint, factory);
    synchronized (this.endpointDescriptors) {
        if (this.startImmediately) { // Register and start immediately
            this.endpointRegistry.registerListenerContainer(descriptor.endpoint,
                    resolveContainerFactory(descriptor), true);
        }
        else {
            this.endpointDescriptors.add(descriptor);
        }
    }
}
KafkaListenerEndpointRegistrar's registerEndpoint wraps the endpoint and factory in a KafkaListenerEndpointDescriptor; if startImmediately is already set it registers the container right away via endpointRegistry.registerListenerContainer, otherwise the descriptor is queued in endpointDescriptors and registered when afterPropertiesSet runs.
endpointRegistry.registerListenerContainer
org/springframework/kafka/config/KafkaListenerEndpointRegistry.java
public void registerListenerContainer(KafkaListenerEndpoint endpoint, KafkaListenerContainerFactory<?> factory,
        boolean startImmediately) {

    Assert.notNull(endpoint, "Endpoint must not be null");
    Assert.notNull(factory, "Factory must not be null");
    String id = endpoint.getId();
    Assert.hasText(id, "Endpoint id must not be empty");
    synchronized (this.listenerContainers) {
        Assert.state(!this.listenerContainers.containsKey(id),
                "Another endpoint is already registered with id '" + id + "'");
        MessageListenerContainer container = createListenerContainer(endpoint, factory);
        this.listenerContainers.put(id, container);
        if (StringUtils.hasText(endpoint.getGroup()) && this.applicationContext != null) {
            List<MessageListenerContainer> containerGroup;
            if (this.applicationContext.containsBean(endpoint.getGroup())) {
                containerGroup = this.applicationContext.getBean(endpoint.getGroup(), List.class);
            }
            else {
                containerGroup = new ArrayList<MessageListenerContainer>();
                this.applicationContext.getBeanFactory().registerSingleton(endpoint.getGroup(), containerGroup);
            }
            containerGroup.add(container);
        }
        if (startImmediately) {
            startIfNecessary(container);
        }
    }
}

/**
 * Start the specified {@link MessageListenerContainer} if it should be started
 * on startup.
 * @param listenerContainer the listener container to start.
 * @see MessageListenerContainer#isAutoStartup()
 */
private void startIfNecessary(MessageListenerContainer listenerContainer) {
    if (this.contextRefreshed || listenerContainer.isAutoStartup()) {
        listenerContainer.start();
    }
}
KafkaListenerEndpointRegistry's registerListenerContainer method creates a MessageListenerContainer from the endpoint and factory and stores it in listenerContainers keyed by the endpoint id (also adding it to a container-group bean when containerGroup is set); when startImmediately is true it calls startIfNecessary, which essentially invokes listenerContainer.start().
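Because every container is stored in listenerContainers under its endpoint id, it can later be looked up and started or stopped through the KafkaListenerEndpointRegistry bean; a small sketch (the listener id "orderListener" is the illustrative one used earlier):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class ListenerLifecycleService {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    public void stopOrderListener() {
        MessageListenerContainer container = registry.getListenerContainer("orderListener");
        if (container != null && container.isRunning()) {
            container.stop();
        }
    }

    public void startOrderListener() {
        MessageListenerContainer container = registry.getListenerContainer("orderListener");
        if (container != null && !container.isRunning()) {
            container.start();
        }
    }
}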
MessageListenerContainer
org/springframework/kafka/listener/MessageListenerContainer.java
public interface MessageListenerContainer extends SmartLifecycle {

    void setupMessageListener(Object messageListener);

    Map<String, Map<MetricName, ? extends Metric>> metrics();

    default ContainerProperties getContainerProperties() {
        throw new UnsupportedOperationException("This container doesn't support retrieving its properties");
    }

    default Collection<TopicPartition> getAssignedPartitions() {
        throw new UnsupportedOperationException("This container doesn't support retrieving its assigned partitions");
    }

    default void pause() {
        throw new UnsupportedOperationException("This container doesn't support pause");
    }

    default void resume() {
        throw new UnsupportedOperationException("This container doesn't support resume");
    }

    default boolean isPauseRequested() {
        throw new UnsupportedOperationException("This container doesn't support pause/resume");
    }

    default boolean isContainerPaused() {
        throw new UnsupportedOperationException("This container doesn't support pause/resume");
    }

    default void setAutoStartup(boolean autoStartup) {
        // empty
    }

    default String getGroupId() {
        throw new UnsupportedOperationException("This container does not support retrieving the group id");
    }

    @Nullable
    default String getListenerId() {
        throw new UnsupportedOperationException("This container does not support retrieving the listener id");
    }

}
MessageListenerContainer extends the SmartLifecycle interface. Its generic sub-interface is GenericMessageListenerContainer, which is implemented by the abstract class AbstractMessageListenerContainer; that class in turn has two subclasses, KafkaMessageListenerContainer and ConcurrentMessageListenerContainer.
AbstractMessageListenerContainer
public abstract class AbstractMessageListenerContainer<K, V>
        implements GenericMessageListenerContainer<K, V>, BeanNameAware, ApplicationEventPublisherAware {

    @Override
    public final void start() {
        checkGroupId();
        synchronized (this.lifecycleMonitor) {
            if (!isRunning()) {
                Assert.isTrue(this.containerProperties.getMessageListener() instanceof GenericMessageListener,
                        () -> "A " + GenericMessageListener.class.getName() + " implementation must be provided");
                doStart();
            }
        }
    }

    @Override
    public final void stop() {
        synchronized (this.lifecycleMonitor) {
            if (isRunning()) {
                final CountDownLatch latch = new CountDownLatch(1);
                doStop(latch::countDown);
                try {
                    latch.await(this.containerProperties.getShutdownTimeout(), TimeUnit.MILLISECONDS); // NOSONAR
                    publishContainerStoppedEvent();
                }
                catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

    //......
}
AbstractMessageListenerContainer's start method verifies that a GenericMessageListener is configured and then calls the subclass's doStart method; its stop method calls the subclass's doStop and waits up to shutdownTimeout for it to complete.
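The wait in stop() is bounded by the container properties' shutdownTimeout; if listeners need more time to finish in-flight work, that value can be raised on the container factory, roughly like this (the class and the 30-second value are illustrative):

import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;

public class ShutdownTuning {

    // A sketch: stop() waits up to this long for doStop() to signal completion.
    public static void tune(ConcurrentKafkaListenerContainerFactory<String, String> factory) {
        factory.getContainerProperties().setShutdownTimeout(30_000L); // milliseconds
    }
}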
KafkaMessageListenerContainer
org/springframework/kafka/listener/KafkaMessageListenerContainer.java
public class KafkaMessageListenerContainer<K, V> // NOSONAR comment density
        extends AbstractMessageListenerContainer<K, V> {

    @Override
    protected void doStart() {
        if (isRunning()) {
            return;
        }
        if (this.clientIdSuffix == null) { // stand-alone container
            checkTopics();
        }
        ContainerProperties containerProperties = getContainerProperties();
        checkAckMode(containerProperties);
        Object messageListener = containerProperties.getMessageListener();
        Assert.state(messageListener != null, "A MessageListener is required");
        if (containerProperties.getConsumerTaskExecutor() == null) {
            SimpleAsyncTaskExecutor consumerExecutor = new SimpleAsyncTaskExecutor(
                    (getBeanName() == null ? "" : getBeanName()) + "-C-");
            containerProperties.setConsumerTaskExecutor(consumerExecutor);
        }
        Assert.state(messageListener instanceof GenericMessageListener, "Listener must be a GenericListener");
        GenericMessageListener<?> listener = (GenericMessageListener<?>) messageListener;
        ListenerType listenerType = deteremineListenerType(listener);
        this.listenerConsumer = new ListenerConsumer(listener, listenerType);
        setRunning(true);
        this.listenerConsumerFuture = containerProperties
                .getConsumerTaskExecutor()
                .submitListenable(this.listenerConsumer);
    }

    //......
}
KafkaMessageListenerContainer's doStart method obtains the messageListener from the container properties, creates a ListenerConsumer for it, and finally submits that ListenerConsumer to the consumer task executor (creating a SimpleAsyncTaskExecutor if none was configured).
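When no consumerTaskExecutor is set, doStart falls back to a SimpleAsyncTaskExecutor named after the bean; a container factory can supply its own executor through the container properties, roughly like this (the class and thread-name prefix are made up):

import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;

public class ContainerExecutorCustomization {

    // A sketch: the ListenerConsumer created in doStart() is submitted to this executor.
    public static void customize(ConcurrentKafkaListenerContainerFactory<String, String> factory) {
        factory.getContainerProperties()
                .setConsumerTaskExecutor(new SimpleAsyncTaskExecutor("kafka-consumer-"));
    }
}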
ConcurrentMessageListenerContainer
org/springframework/kafka/listener/ConcurrentMessageListenerContainer.java
public class ConcurrentMessageListenerContainer<K, V> extends AbstractMessageListenerContainer<K, V> {

    @Override
    protected void doStart() {
        if (!isRunning()) {
            checkTopics();
            ContainerProperties containerProperties = getContainerProperties();
            TopicPartitionInitialOffset[] topicPartitions = containerProperties.getTopicPartitions();
            if (topicPartitions != null && this.concurrency > topicPartitions.length) {
                this.logger.warn("When specific partitions are provided, the concurrency must be less than or "
                        + "equal to the number of partitions; reduced from " + this.concurrency + " to "
                        + topicPartitions.length);
                this.concurrency = topicPartitions.length;
            }
            setRunning(true);

            for (int i = 0; i < this.concurrency; i++) {
                KafkaMessageListenerContainer<K, V> container;
                if (topicPartitions == null) {
                    container = new KafkaMessageListenerContainer<>(this, this.consumerFactory, containerProperties);
                }
                else {
                    container = new KafkaMessageListenerContainer<>(this, this.consumerFactory,
                            containerProperties, partitionSubset(containerProperties, i));
                }
                String beanName = getBeanName();
                container.setBeanName((beanName != null ? beanName : "consumer") + "-" + i);
                if (getApplicationEventPublisher() != null) {
                    container.setApplicationEventPublisher(getApplicationEventPublisher());
                }
                container.setClientIdSuffix("-" + i);
                container.setGenericErrorHandler(getGenericErrorHandler());
                container.setAfterRollbackProcessor(getAfterRollbackProcessor());
                container.setRecordInterceptor(getRecordInterceptor());
                container.setEmergencyStop(() -> {
                    stop(() -> {
                        // NOSONAR
                    });
                    publishContainerStoppedEvent();
                });
                if (isPaused()) {
                    container.pause();
                }
                container.start();
                this.containers.add(container);
            }
        }
    }

    //......
}
ConcurrentMessageListenerContainer's doStart creates one KafkaMessageListenerContainer per unit of concurrency (capping the concurrency at the number of explicitly assigned partitions, if any) and calls start() on each child container.
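In practice the concurrency value usually comes from the container factory (the annotation's concurrency attribute, when set, overrides it); a hedged sketch of such a factory bean with made-up bootstrap settings:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
@EnableKafka
public class KafkaConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(props));
        factory.setConcurrency(3); // doStart() above will create three child containers
        return factory;
    }
}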
ListenerConsumer
org/springframework/kafka/listener/KafkaMessageListenerContainer.java
private final class ListenerConsumer implements SchedulingAwareRunnable, ConsumerSeekCallback {

    @Override
    public void run() {
        this.consumerThread = Thread.currentThread();
        if (this.genericListener instanceof ConsumerSeekAware) {
            ((ConsumerSeekAware) this.genericListener).registerSeekCallback(this);
        }
        if (this.transactionManager != null) {
            ProducerFactoryUtils.setConsumerGroupId(this.consumerGroupId);
        }
        this.count = 0;
        this.last = System.currentTimeMillis();
        initAsignedPartitions();
        while (isRunning()) {
            try {
                pollAndInvoke();
            }
            catch (@SuppressWarnings(UNUSED) WakeupException e) {
                // Ignore, we're stopping or applying immediate foreign acks
            }
            catch (NoOffsetForPartitionException nofpe) {
                this.fatalError = true;
                ListenerConsumer.this.logger.error("No offset and no reset policy", nofpe);
                break;
            }
            catch (Exception e) {
                handleConsumerException(e);
            }
            catch (Error e) { // NOSONAR - rethrown
                Runnable runnable = KafkaMessageListenerContainer.this.emergencyStop;
                if (runnable != null) {
                    runnable.run();
                }
                this.logger.error("Stopping container due to an Error", e);
                wrapUp();
                throw e;
            }
        }
        wrapUp();
    }

    protected void pollAndInvoke() {
        if (!this.autoCommit && !this.isRecordAck) {
            processCommits();
        }
        processSeeks();
        checkPaused();
        ConsumerRecords<K, V> records = this.consumer.poll(this.pollTimeout);
        this.lastPoll = System.currentTimeMillis();
        checkResumed();
        debugRecords(records);
        if (records != null && records.count() > 0) {
            if (this.containerProperties.getIdleEventInterval() != null) {
                this.lastReceive = System.currentTimeMillis();
            }
            invokeListener(records);
        }
        else {
            checkIdle();
        }
    }

    private void invokeListener(final ConsumerRecords<K, V> records) {
        if (this.isBatchListener) {
            invokeBatchListener(records);
        }
        else {
            invokeRecordListener(records);
        }
    }

    private void doInvokeBatchOnMessage(final ConsumerRecords<K, V> records, List<ConsumerRecord<K, V>> recordList) {
        switch (this.listenerType) {
            case ACKNOWLEDGING_CONSUMER_AWARE:
                this.batchListener.onMessage(recordList,
                        this.isAnyManualAck ? new ConsumerBatchAcknowledgment(records) : null, this.consumer);
                break;
            case ACKNOWLEDGING:
                this.batchListener.onMessage(recordList,
                        this.isAnyManualAck ? new ConsumerBatchAcknowledgment(records) : null);
                break;
            case CONSUMER_AWARE:
                this.batchListener.onMessage(recordList, this.consumer);
                break;
            case SIMPLE:
                this.batchListener.onMessage(recordList);
                break;
        }
    }

    private void doInvokeOnMessage(final ConsumerRecord<K, V> recordArg) {
        ConsumerRecord<K, V> record = recordArg;
        if (this.recordInterceptor != null) {
            record = this.recordInterceptor.intercept(record);
        }
        if (record == null) {
            if (this.logger.isDebugEnabled()) {
                this.logger.debug("RecordInterceptor returned null, skipping: " + recordArg);
            }
        }
        else {
            switch (this.listenerType) {
                case ACKNOWLEDGING_CONSUMER_AWARE:
                    this.listener.onMessage(record,
                            this.isAnyManualAck ? new ConsumerAcknowledgment(record) : null, this.consumer);
                    break;
                case CONSUMER_AWARE:
                    this.listener.onMessage(record, this.consumer);
                    break;
                case ACKNOWLEDGING:
                    this.listener.onMessage(record,
                            this.isAnyManualAck ? new ConsumerAcknowledgment(record) : null);
                    break;
                case SIMPLE:
                    this.listener.onMessage(record);
                    break;
            }
        }
    }

    //......
}
ListenerConsumer implements org.springframework.scheduling.SchedulingAwareRunnable (which extends Runnable) as well as the ConsumerSeekAware.ConsumerSeekCallback interface from org.springframework.kafka.listener.
Its run method first calls initAsignedPartitions and then loops on pollAndInvoke while the container is running: a WakeupException is ignored, a NoOffsetForPartitionException is logged as a fatal error and breaks out of the loop, any other Exception is passed to handleConsumerException, and an Error runs the emergencyStop callback and wrapUp before being rethrown.
pollAndInvoke mainly calls consumer.poll() and, when records come back, dispatches them through invokeListener(records); depending on whether the listener is a batch or a record listener, doInvokeBatchOnMessage or doInvokeOnMessage ultimately calls back into listener.onMessage, passing an Acknowledgment and/or the Consumer according to the listener type.
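The arguments passed in those switch branches surface as optional parameters on the @KafkaListener method; for instance, when a manual AckMode is configured on the container factory, isAnyManualAck is true and the listener can take an Acknowledgment and commit explicitly. A sketch (listener id and topic are made up):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class AuditListener {

    // Assumes the container factory was configured with
    // ContainerProperties.AckMode.MANUAL (or MANUAL_IMMEDIATE).
    @KafkaListener(id = "auditListener", topics = "audit")
    public void onAudit(ConsumerRecord<String, String> record, Acknowledgment ack) {
        // process the record ...
        ack.acknowledge(); // commits the offset according to the configured ack mode
    }
}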
Summary
KafkaListenerAnnotationBeanPostProcessor collects the @KafkaListener-annotated methods of each bean and invokes processKafkaListener for every such method; processKafkaListener turns the method into a MethodKafkaListenerEndpoint and calls registrar.registerEndpoint(endpoint, factory).
KafkaListenerEndpointRegistry's registerListenerContainer method creates a MessageListenerContainer from the endpoint and factory and stores it in listenerContainers; for immediate registration it calls startIfNecessary, which essentially invokes listenerContainer.start().
MessageListenerContainer has two main implementations, KafkaMessageListenerContainer and ConcurrentMessageListenerContainer. The latter's start creates as many KafkaMessageListenerContainer instances as the configured concurrency, so in both cases it is KafkaMessageListenerContainer's start that ends up running; it creates a ListenerConsumer and submits it to the consumer task executor. ListenerConsumer then loops on pollAndInvoke, polling records and calling back into the listener's onMessage method.
The overall chain is KafkaListenerAnnotationBeanPostProcessor --> KafkaListenerEndpointRegistry --> MessageListenerContainer --> GenericMessageListener.onMessage.
That concludes the detailed walkthrough of how @KafkaListener is implemented.