An analysis of put mapping for elasticsearch indices
How a mapping is set
The mapping mechanism makes indexing data in elasticsearch much more flexible, close to schema-free. A mapping can be defined when the index is created, or set later.
Setting it later means either modifying an existing mapping (the properties of existing fields cannot be changed; generally you can only add new fields) or putting a mapping on an index that has none.
A put mapping operation must be performed by the master node, because it modifies the cluster metadata. It is also tied closely to an index and a type: the change only applies to a specific type of a specific index.
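As a concrete illustration, adding a new field to an existing type goes through the put mapping API. A hypothetical request in the REST syntax of that era (the index name `blog`, type name `article`, and field `views` are made-up examples, not from the source):

```json
PUT /blog/_mapping/article
{
  "article": {
    "properties": {
      "views": { "type": "long" }
    }
  }
}
```

Note that a request like this can only add the new `views` field; re-sending it with a different type for an existing field would produce a merge conflict, as the code below shows.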
In the analysis of Action support we looked at several abstract Action types; the put mapping Action is a subclass of TransportMasterNodeOperationAction.
put mapping
It implements the masterOperation method; every subclass of TransportMasterNodeOperationAction implements this method according to its own functionality.
The implementation here looks like this:
protected void masterOperation(final PutMappingRequest request, final ClusterState state, final ActionListener<PutMappingResponse> listener) throws ElasticsearchException {
    final String[] concreteIndices = clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices());
    // build the cluster-state update request
    PutMappingClusterStateUpdateRequest updateRequest = new PutMappingClusterStateUpdateRequest()
            .ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout())
            .indices(concreteIndices).type(request.type())
            .source(request.source()).ignoreConflicts(request.ignoreConflicts());
    // call putMapping, passing in a listener
    metaDataMappingService.putMapping(updateRequest, new ActionListener<ClusterStateUpdateResponse>() {
        @Override
        public void onResponse(ClusterStateUpdateResponse response) {
            listener.onResponse(new PutMappingResponse(response.isAcknowledged()));
        }

        @Override
        public void onFailure(Throwable t) {
            logger.debug("failed to put mappings on indices [{}], type [{}]", t, concreteIndices, request.type());
            listener.onFailure(t);
        }
    });
}
This is TransportPutMappingAction's implementation of masterOperation; there is not much complex logic here. The real work happens in metaDataMappingService.
The update task
Like CreateIndex before it, put mapping submits an update task to the master, and all of the logic lives in the execute method. The task is essentially the same as the CreateIndex one, and it must also complete within the given timeout. Its code is shown below:
public void putMapping(final PutMappingClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {
    // submit a high-priority update task
    clusterService.submitStateUpdateTask("put-mapping [" + request.type() + "]", Priority.HIGH, new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(request, listener) {

        @Override
        protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {
            return new ClusterStateUpdateResponse(acknowledged);
        }

        @Override
        public ClusterState execute(final ClusterState currentState) throws Exception {
            List<String> indicesToClose = Lists.newArrayList();
            try {
                // every index must already exist in the metadata, otherwise throw
                for (String index : request.indices()) {
                    if (!currentState.metaData().hasIndex(index)) {
                        throw new IndexMissingException(new Index(index));
                    }
                }
                // the index must also exist in indicesService, otherwise it cannot
                // be operated on, so pre-create it here if needed
                for (String index : request.indices()) {
                    if (indicesService.hasIndex(index)) {
                        continue;
                    }
                    final IndexMetaData indexMetaData = currentState.metaData().index(index);
                    // it does not exist yet, so create it
                    IndexService indexService = indicesService.createIndex(indexMetaData.index(), indexMetaData.settings(), clusterService.localNode().id());
                    indicesToClose.add(indexMetaData.index());
                    // make sure to add custom default mapping if exists
                    if (indexMetaData.mappings().containsKey(MapperService.DEFAULT_MAPPING)) {
                        indexService.mapperService().merge(MapperService.DEFAULT_MAPPING, indexMetaData.mappings().get(MapperService.DEFAULT_MAPPING).source(), false);
                    }
                    // only add the current relevant mapping (if exists)
                    if (indexMetaData.mappings().containsKey(request.type())) {
                        indexService.mapperService().merge(request.type(), indexMetaData.mappings().get(request.type()).source(), false);
                    }
                }
                // merge and update the mappings
                Map<String, DocumentMapper> newMappers = newHashMap();
                Map<String, DocumentMapper> existingMappers = newHashMap();
                // merge the mapping for each index
                for (String index : request.indices()) {
                    IndexService indexService = indicesService.indexServiceSafe(index);
                    // try and parse it (no need to add it here) so we can bail early in case of parsing exception
                    DocumentMapper newMapper;
                    DocumentMapper existingMapper = indexService.mapperService().documentMapper(request.type());
                    if (MapperService.DEFAULT_MAPPING.equals(request.type())) {
                        // _default_ types do not go through merging, but we do test the new settings. Also don't apply the old default
                        newMapper = indexService.mapperService().parse(request.type(), new CompressedString(request.source()), false);
                    } else {
                        newMapper = indexService.mapperService().parse(request.type(), new CompressedString(request.source()), existingMapper == null);
                        if (existingMapper != null) {
                            // first, simulate
                            DocumentMapper.MergeResult mergeResult = existingMapper.merge(newMapper, mergeFlags().simulate(true));
                            // if we have conflicts, and we are not supposed to ignore them, throw an exception
                            if (!request.ignoreConflicts() && mergeResult.hasConflicts()) {
                                throw new MergeMappingException(mergeResult.conflicts());
                            }
                        }
                    }
                    newMappers.put(index, newMapper);
                    if (existingMapper != null) {
                        existingMappers.put(index, existingMapper);
                    }
                }
                String mappingType = request.type();
                if (mappingType == null) {
                    mappingType = newMappers.values().iterator().next().type();
                } else if (!mappingType.equals(newMappers.values().iterator().next().type())) {
                    throw new InvalidTypeNameException("Type name provided does not match type name within mapping definition");
                }
                if (!MapperService.DEFAULT_MAPPING.equals(mappingType) && !PercolatorService.TYPE_NAME.equals(mappingType) && mappingType.charAt(0) == '_') {
                    throw new InvalidTypeNameException("Document mapping type name can't start with '_'");
                }
                final Map<String, MappingMetaData> mappings = newHashMap();
                for (Map.Entry<String, DocumentMapper> entry : newMappers.entrySet()) {
                    String index = entry.getKey();
                    // do the actual merge here on the master, and update the mapping source
                    DocumentMapper newMapper = entry.getValue();
                    IndexService indexService = indicesService.indexService(index);
                    if (indexService == null) {
                        continue;
                    }
                    CompressedString existingSource = null;
                    if (existingMappers.containsKey(entry.getKey())) {
                        existingSource = existingMappers.get(entry.getKey()).mappingSource();
                    }
                    DocumentMapper mergedMapper = indexService.mapperService().merge(newMapper.type(), newMapper.mappingSource(), false);
                    CompressedString updatedSource = mergedMapper.mappingSource();
                    if (existingSource != null) {
                        if (existingSource.equals(updatedSource)) {
                            // same source, no changes, ignore it
                        } else {
                            // use the merged mapping source
                            mappings.put(index, new MappingMetaData(mergedMapper));
                            if (logger.isDebugEnabled()) {
                                logger.debug("[{}] update_mapping [{}] with source [{}]", index, mergedMapper.type(), updatedSource);
                            } else if (logger.isInfoEnabled()) {
                                logger.info("[{}] update_mapping [{}]", index, mergedMapper.type());
                            }
                        }
                    } else {
                        mappings.put(index, new MappingMetaData(mergedMapper));
                        if (logger.isDebugEnabled()) {
                            logger.debug("[{}] create_mapping [{}] with source [{}]", index, newMapper.type(), updatedSource);
                        } else if (logger.isInfoEnabled()) {
                            logger.info("[{}] create_mapping [{}]", index, newMapper.type());
                        }
                    }
                }
                if (mappings.isEmpty()) {
                    // no changes, return
                    return currentState;
                }
                // rebuild the metadata from the updated mappings
                MetaData.Builder builder = MetaData.builder(currentState.metaData());
                for (String indexName : request.indices()) {
                    IndexMetaData indexMetaData = currentState.metaData().index(indexName);
                    if (indexMetaData == null) {
                        throw new IndexMissingException(new Index(indexName));
                    }
                    MappingMetaData mappingMd = mappings.get(indexName);
                    if (mappingMd != null) {
                        builder.put(IndexMetaData.builder(indexMetaData).putMapping(mappingMd));
                    }
                }
                return ClusterState.builder(currentState).metaData(builder).build();
            } finally {
                for (String index : indicesToClose) {
                    indicesService.removeIndex(index, "created for mapping processing");
                }
            }
        }
    });
}
That is the whole put mapping process. Like create index, it can only be performed by the master node, and it is submitted to the master as a task. In essence it merges the mapping in the request with the index's existing mapping (or its default mapping), and finally produces new metadata that is published to every node in the cluster.
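The heart of the task is the simulate-then-apply merge: conflicts are detected in a dry run first, and only a conflict-free merge (or one where conflicts are explicitly ignored) is applied. A toy model of that pattern, where a "mapper" is reduced to a map from field name to field type (the classes and the conflict rule here are simplified illustrations, not the real DocumentMapper logic):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MergeSimulateSketch {

    // Dry run: collect conflicts without changing anything. Here the only
    // conflict modeled is an existing field changing its type.
    static List<String> simulateMerge(Map<String, String> existing, Map<String, String> incoming) {
        List<String> conflicts = new ArrayList<>();
        for (Map.Entry<String, String> e : incoming.entrySet()) {
            String current = existing.get(e.getKey());
            if (current != null && !current.equals(e.getValue())) {
                conflicts.add("mapper [" + e.getKey() + "] cannot change type from ["
                        + current + "] to [" + e.getValue() + "]");
            }
        }
        return conflicts;
    }

    static void merge(Map<String, String> existing, Map<String, String> incoming, boolean ignoreConflicts) {
        List<String> conflicts = simulateMerge(existing, incoming); // first, simulate
        if (!conflicts.isEmpty() && !ignoreConflicts) {
            throw new IllegalArgumentException("merge conflicts: " + conflicts);
        }
        // then apply for real: new fields are added, existing fields are kept as-is
        for (Map.Entry<String, String> e : incoming.entrySet()) {
            existing.putIfAbsent(e.getKey(), e.getValue());
        }
    }

    public static void main(String[] args) {
        Map<String, String> existing = new HashMap<>();
        existing.put("title", "string");

        Map<String, String> addField = new HashMap<>();
        addField.put("views", "long");
        merge(existing, addField, false);    // adding a brand-new field is fine
        System.out.println(existing.size()); // prints "2"

        Map<String, String> changeType = new HashMap<>();
        changeType.put("title", "long");
        try {
            merge(existing, changeType, false); // changing an existing field's type conflicts
        } catch (IllegalArgumentException e) {
            System.out.println("conflict");     // prints "conflict"
        }
    }
}
```

This mirrors why the article notes that existing field properties cannot be modified: the simulate pass rejects the change before any state is touched, unless ignoreConflicts is set.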
Summary
Master operations in the cluster, whether index-level or cluster-level, ultimately come down to updating the cluster metadata. These operations can only run on the master, and they are all tasks that can time out; put mapping is no exception. The two code fragments above outline the whole mapping setup process, so it will not be repeated here.
One topic not covered here is mapping merging, which is used in many places; the next section analyzes it in detail.