
Building a Video-Editing Project on iOS with AVFoundation

Updated: 2021-12-08 11:05:52   Author: bqiss
This article walks through building a video-editing project with AVFoundation, including the ability to extend or trim the video. If you're interested, read on.

I recently built a small video-editing project. I hit a few snags along the way, but got everything working in the end.

Apple does ship UIVideoEditController for video editing, but it is hard to extend or customize, so instead we'll build custom video processing on top of Apple's AVFoundation framework.

I also couldn't find any systematic material on this topic online, hence this article; I hope it helps other newcomers to video processing (like me).

Project screenshots

The project's features, roughly: undo, split, and delete operations on the video track, plus dragging clip blocks to extend or trim the video.

功能實(shí)現(xiàn)

一、選取視頻并播放

通過(guò) UIImagePickerController 選取視頻并且跳轉(zhuǎn)到自定義的編輯控制器

這一部分沒(méi)什么好說(shuō)的

示例:

    // Select a video
    @objc func selectVideo() {
        if UIImagePickerController.isSourceTypeAvailable(.photoLibrary) {
            // Initialize the picker controller
            let imagePicker = UIImagePickerController()
            // Set the delegate
            imagePicker.delegate = self
            // Use the photo library as the source
            imagePicker.sourceType = .photoLibrary
            // Show only video files
            imagePicker.mediaTypes = [kUTTypeMovie as String]
            // Present the picker
            self.present(imagePicker, animated: true, completion: nil)
        } else {
            print("Could not read the photo library")
        }
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        // Get the video URL (the picked video is automatically copied into the app's tmp directory)
        guard let videoURL = info[UIImagePickerController.InfoKey.mediaURL] as? URL else {
            return
        }
        let pathString = videoURL.relativePath
        print("video path: \(pathString)")

        // Dismiss the picker, then present the editor
        self.dismiss(animated: true, completion: {
            let editorVC = EditorVideoViewController(with: videoURL)
            editorVC.modalPresentationStyle = UIModalPresentationStyle.fullScreen
            self.present(editorVC, animated: true, completion: nil)
        })
    }

2. Fetching thumbnails frame by frame to initialize the video track

CMTime

在講實(shí)現(xiàn)方法之前先介紹一下 CMTime,CMTime 可以用于描述更精確的時(shí)間,比如我們想表達(dá)視頻中的一個(gè)瞬間例如 1:01 大多數(shù)時(shí)候你可以用 NSTimeInterval t = 61.0 這是沒(méi)有什么大問(wèn)題的,但浮點(diǎn)數(shù)有個(gè)比較嚴(yán)重的問(wèn)題就是無(wú)法精確的表達(dá)10的-6次方比如將一百萬(wàn)個(gè)0.0000001相加,運(yùn)算結(jié)果可能會(huì)變成1.0000000000079181,在視頻流傳輸?shù)倪^(guò)程中伴隨著大量的數(shù)據(jù)加減,這樣就會(huì)造成誤差,所以我們需要另一種表達(dá)時(shí)間的方式,那就是 CMTime

CMTime是一種C函數(shù)結(jié)構(gòu)體,有4個(gè)成員。

    typedef struct {
        CMTimeValue value;      // the current value
        CMTimeScale timescale;  // the reference scale for value (e.g. 1000)
        CMTimeFlags flags;
        CMTimeEpoch epoch;
    } CMTime;

For example, if timescale = 1000, then one second is represented by value = 1000 * 1 = 1000.

CMTimeScale timescale: the reference scale for the current value; it says how many parts one second is divided into. Since it controls the precision of the whole CMTime, it is especially important. For example, with a timescale of 1, a CMTime cannot represent times shorter than one second, nor increments within a second. Likewise, with a timescale of 1000, each second is divided into 1000 parts and value counts milliseconds.
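As a minimal illustration (600 is a timescale commonly used for video, but any scale works):

```swift
import CoreMedia

// 2.5 seconds expressed with a timescale of 600: 1500 "ticks" of 1/600 s each
let t = CMTime(value: 1500, timescale: 600)
print(CMTimeGetSeconds(t)) // 2.5

// CMTime arithmetic is exact integer arithmetic, so repeated additions
// do not accumulate floating-point error the way Float64 seconds would
let oneTick = CMTime(value: 1, timescale: 600)
let sum = CMTimeAdd(t, oneTick) // exactly 1501/600 s
```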

實(shí)現(xiàn)方法

調(diào)用方法 generateCGImagesAsynchronously(forTimes requestedTimes: [NSValue], completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler)

 /**
    	@method			generateCGImagesAsynchronouslyForTimes:completionHandler:
    	@abstract		Returns a series of CGImageRefs for an asset at or near the specified times.
    	@param			requestedTimes
    					An NSArray of NSValues, each containing a CMTime, specifying the asset times at which an image is requested.
    	@param			handler
    					A block that will be called when an image request is complete.
    	@discussion		Employs an efficient "batch mode" for getting images in time order.
    					The client will receive exactly one handler callback for each requested time in requestedTimes.
    					Changes to generator properties (snap behavior, maximum size, etc...) will not affect outstanding asynchronous image generation requests.
    					The generated image is not retained.  Clients should retain the image if they wish it to persist after the completion handler returns.
    */
    open func generateCGImagesAsynchronously(forTimes requestedTimes: [NSValue], completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler)

From the official doc comment, the method takes two parameters:

requestedTimes: [NSValue]: an array of requested times (as NSValue), each wrapping a CMTime that specifies an asset time at which an image is requested.

completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler: a block called when an image request completes. Since the method runs asynchronously, you need to hop back to the main thread to update the UI.

Example:

    func splitVideoFileUrlFps(splitFileUrl: URL, fps: Float, splitCompleteClosure: @escaping (Bool, [UIImage]) -> Void) {
        var splitImages = [UIImage]()

        // Initialize the asset
        let optDict = NSDictionary(object: NSNumber(value: false), forKey: AVURLAssetPreferPreciseDurationAndTimingKey as NSCopying)
        let urlAsset = AVURLAsset(url: splitFileUrl, options: optDict as? [String: Any])

        let cmTime = urlAsset.duration
        let durationSeconds: Float64 = CMTimeGetSeconds(cmTime)

        var times = [NSValue]()
        let totalFrames: Float64 = durationSeconds * Float64(fps)
        var timeFrame: CMTime

        // Build the CMTime array, i.e. the times at which thumbnails are requested
        for i in 0...Int(totalFrames) {
            timeFrame = CMTimeMake(value: Int64(i), timescale: Int32(fps))
            let timeValue = NSValue(time: timeFrame)
            times.append(timeValue)
        }

        let imageGenerator = AVAssetImageGenerator(asset: urlAsset)
        imageGenerator.requestedTimeToleranceBefore = CMTime.zero
        imageGenerator.requestedTimeToleranceAfter = CMTime.zero

        let timesCount = times.count

        // Kick off asynchronous thumbnail generation
        imageGenerator.generateCGImagesAsynchronously(forTimes: times) { (requestedTime, image, actualTime, result, error) in
            var isSuccess = false
            switch result {
            case .cancelled:
                print("cancelled------")
            case .failed:
                print("failed++++++")
            case .succeeded:
                let frameImg = UIImage(cgImage: image!)
                splitImages.append(self.flipImage(image: frameImg, orientaion: 1))
                if Int(requestedTime.value) == (timesCount - 1) { // last frame: fire the completion callback
                    isSuccess = true
                    splitCompleteClosure(isSuccess, splitImages)
                    print("completed")
                }
            @unknown default:
                break
            }
        }
    }

				//調(diào)用時(shí)利用回調(diào)更新 UI
self.splitVideoFileUrlFps(splitFileUrl: url, fps: 1) { [weak self](isSuccess, splitImgs) in
            if isSuccess {
                //由于方法是異步的,所以需要回主線程更新 UI
                DispatchQueue.main.async {
                
                    }
                print("圖片總數(shù)目imgcount:\(String(describing: self?.imageArr.count))")
            }
        }

3. Seeking to a specified time

 /**
     @method			seekToTime:toleranceBefore:toleranceAfter:
     @abstract			Moves the playback cursor within a specified time bound.
     @param				time
     @param				toleranceBefore
     @param				toleranceAfter
     @discussion		Use this method to seek to a specified time for the current player item.
    					The time seeked to will be within the range [time-toleranceBefore, time+toleranceAfter] and may differ from the specified time for efficiency.
    					Pass kCMTimeZero for both toleranceBefore and toleranceAfter to request sample accurate seeking which may incur additional decoding delay. 
    					Messaging this method with beforeTolerance:kCMTimePositiveInfinity and afterTolerance:kCMTimePositiveInfinity is the same as messaging seekToTime: directly.
     */
    open func seek(to time: CMTime, toleranceBefore: CMTime, toleranceAfter: CMTime)

三個(gè)傳入的參數(shù) time: CMTime, toleranceBefore: CMTime, tolearnceAfter: CMTime ,time 參數(shù)很好理解,即為想要跳轉(zhuǎn)的時(shí)間。那么后面兩個(gè)參數(shù),按照官方的注釋理解,簡(jiǎn)單來(lái)說(shuō)為“誤差的容忍度”,他將會(huì)在你擬定的這個(gè)區(qū)間內(nèi)跳轉(zhuǎn),即為 [time-toleranceBefore, time+toleranceAfter] ,當(dāng)然如果你傳 kCMTimeZero(在我當(dāng)前的版本這個(gè)參數(shù)被被改為了 CMTime.zero),即為精確搜索,但是這會(huì)導(dǎo)致額外的解碼時(shí)間。

示例:

    guard let duration = self.avPlayer.currentItem?.duration else { return }
    let scale = duration.timescale
    let totalSeconds = CMTimeGetSeconds(duration)

    // width: the track length to jump to; videoWidth: total track length
    let progress = width / videoWidth

    // Sample-accurate seek
    self.avPlayer.seek(to: CMTimeMake(value: Int64(totalSeconds * Double(progress) * Double(scale)), timescale: scale),
                       toleranceBefore: CMTime.zero, toleranceAfter: CMTime.zero)

4. Observing the player

By observing the player we can drive the track view's movement, keeping the player and the video track in sync.

/**
    	@method			addPeriodicTimeObserverForInterval:queue:usingBlock:
    	@abstract		Requests invocation of a block during playback to report changing time.
    	@param			interval
    	  The interval of invocation of the block during normal playback, according to progress of the current time of the player.
    	@param			queue
    	  The serial queue onto which block should be enqueued.  If you pass NULL, the main queue (obtained using dispatch_get_main_queue()) will be used.  Passing a
    	  concurrent queue to this method will result in undefined behavior.
    	@param			block
    	  The block to be invoked periodically.
    	@result
    	  An object conforming to the NSObject protocol.  You must retain this returned value as long as you want the time observer to be invoked by the player.
    	  Pass this object to -removeTimeObserver: to cancel time observation.
    	@discussion		The block is invoked periodically at the interval specified, interpreted according to the timeline of the current item.
    					The block is also invoked whenever time jumps and whenever playback starts or stops.
    					If the interval corresponds to a very short interval in real time, the player may invoke the block less frequently
    					than requested. Even so, the player will invoke the block sufficiently often for the client to update indications
    					of the current time appropriately in its end-user interface.
    					Each call to -addPeriodicTimeObserverForInterval:queue:usingBlock: should be paired with a corresponding call to -removeTimeObserver:.
    					Releasing the observer object without a call to -removeTimeObserver: will result in undefined behavior.
    */
    open func addPeriodicTimeObserver(forInterval interval: CMTime, queue: DispatchQueue?, using block: @escaping (CMTime) -> Void) -> Any

The most important parameter is interval: CMTime, which sets how often the block is called back; if you move the track view's frame inside this callback, it also determines how smoothly the track scrolls.

Example:

//player的監(jiān)聽(tīng)
        self.avPlayer.addPeriodicTimeObserver(forInterval: CMTimeMake(value: 1, timescale: 120), queue: DispatchQueue.main) { [weak self](time) in
                //與軌道的聯(lián)動(dòng)操作
        }

與快進(jìn)方法沖突的問(wèn)題

這個(gè)監(jiān)聽(tīng)方法和第三點(diǎn)中的快進(jìn)方法會(huì)造成一個(gè)問(wèn)題:當(dāng)你拖動(dòng)視頻軌道并且去快進(jìn)的時(shí)候也會(huì)觸發(fā)這個(gè)回調(diào)于是就造成了 拖動(dòng)視頻軌道 frame (改變 frame) -> 快進(jìn)方法 -> 觸發(fā)回調(diào) -> 改變 frame 這一個(gè)死循環(huán)。那么就得添加判斷條件來(lái)不去觸發(fā)這個(gè)回調(diào)。

快進(jìn)方法與播放器聯(lián)動(dòng)帶來(lái)的問(wèn)題

播放視頻是異步的,并且快進(jìn)方法解碼視頻需要時(shí)間,所以就導(dǎo)致了在雙方聯(lián)動(dòng)的過(guò)程中帶來(lái)的時(shí)間差。并且當(dāng)你認(rèn)為視頻已經(jīng)快進(jìn)完成的時(shí)候,想要去改變視頻軌道的位置,由于解碼帶來(lái)的時(shí)間,導(dǎo)致了在回調(diào)的時(shí)候會(huì)傳入幾個(gè)錯(cuò)誤的時(shí)間,使得視頻軌道來(lái)回晃動(dòng)。所以當(dāng)前項(xiàng)目的做法是,回調(diào)時(shí)需要判斷將要改變的 frame 是否合法(是否過(guò)大、過(guò)?。?/p>
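A sketch of such a validity check (all names and thresholds below are illustrative assumptions, not the project's actual code): reject any new frame origin that falls outside the track's legal range, or that jumps implausibly far from the current position:

```swift
func updateTrackOffsetIfValid(_ newOriginX: CGFloat) {
    let maxOriginX: CGFloat = 0            // track fully at the start
    let minOriginX = -trackContentWidth    // track fully scrolled; assumed property
    let jumpThreshold: CGFloat = 100       // max plausible per-callback movement; tune to taste

    guard newOriginX <= maxOriginX, newOriginX >= minOriginX,
          abs(newOriginX - trackView.frame.origin.x) < jumpThreshold else {
        return // stale or out-of-range callback: ignore it
    }
    trackView.frame.origin.x = newOriginX
}
```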

ps:如果關(guān)于這兩個(gè)問(wèn)題有更好的解決辦法,歡迎一起討論!

五、導(dǎo)出視頻

 /**
        @method         insertTimeRange:ofTrack:atTime:error:
        @abstract       Inserts a timeRange of a source track into a track of a composition.
        @param          timeRange
                        Specifies the timeRange of the track to be inserted.
        @param          track
                        Specifies the source track to be inserted. Only AVAssetTracks of AVURLAssets and AVCompositions are supported (AVCompositions starting in MacOS X 10.10 and iOS 8.0).
        @param          startTime
                        Specifies the time at which the inserted track is to be presented by the composition track. You may pass kCMTimeInvalid for startTime to indicate that the timeRange should be appended to the end of the track.
        @param          error
                        Describes failures that may be reported to the user, e.g. the asset that was selected for insertion in the composition is restricted by copy-protection.
        @result         A BOOL value indicating the success of the insertion.
        @discussion
          You provide a reference to an AVAssetTrack and the timeRange within it that you want to insert. You specify the start time in the target composition track at which the timeRange should be inserted.
    
          Note that the inserted track timeRange will be presented at its natural duration and rate. It can be scaled to a different duration (and presented at a different rate) via -scaleTimeRange:toDuration:.
    */
    open func insertTimeRange(_ timeRange: CMTimeRange, of track: AVAssetTrack, at startTime: CMTime) throws

Its three parameters:

timeRange: CMTimeRange: the time range of the source track to insert.

track: AVAssetTrack: the source track to insert. Only AVAssetTracks of AVURLAssets and AVCompositions are supported (AVCompositions starting with OS X 10.10 and iOS 8.0).

startTime: CMTime: the time in the composition track at which the inserted range should be presented. You may pass kCMTimeInvalid to append the range to the end of the track.

Example:

    let composition = AVMutableComposition()

    // Add video and audio tracks to the composition
    let videoTrack = composition.addMutableTrack(
        withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID())
    let audioTrack = composition.addMutableTrack(
        withMediaType: AVMediaType.audio, preferredTrackID: CMPersistentTrackID())

    let asset = AVAsset(url: self.url)

    var insertTime: CMTime = CMTime.zero

    let timeScale = self.avPlayer.currentItem?.duration.timescale

    // Walk through every clip's info
    for clipsInfo in self.clipsInfoArr {

        // Duration of this clip
        let clipsDuration = Double(Float(clipsInfo.width) / self.videoWidth) * self.totalTime

        // Start time of this clip
        let startDuration = -Float(clipsInfo.offset) / self.perSecondLength

        do {
            try videoTrack?.insertTimeRange(
                CMTimeRangeMake(start: CMTimeMake(value: Int64(startDuration * Float(timeScale!)), timescale: timeScale!),
                                duration: CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!)),
                of: asset.tracks(withMediaType: AVMediaType.video)[0], at: insertTime)
        } catch {}

        do {
            try audioTrack?.insertTimeRange(
                CMTimeRangeMake(start: CMTimeMake(value: Int64(startDuration * Float(timeScale!)), timescale: timeScale!),
                                duration: CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!)),
                of: asset.tracks(withMediaType: AVMediaType.audio)[0], at: insertTime)
        } catch {}

        insertTime = CMTimeAdd(insertTime, CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!))
    }

    videoTrack?.preferredTransform = CGAffineTransform(rotationAngle: CGFloat.pi / 2)

    // Build the output path for the merged video
    let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let destinationPath = documentsPath + "/mergeVideo-\(arc4random() % 1000).mov"
    print("merged video path: \(destinationPath)")
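The snippet above builds the composition and the output path but does not yet write the file to disk. A sketch of the missing export step using AVAssetExportSession (the preset choice and error handling here are assumptions, not the project's code):

```swift
guard let exporter = AVAssetExportSession(asset: composition,
                                          presetName: AVAssetExportPresetHighestQuality) else { return }
exporter.outputURL = URL(fileURLWithPath: destinationPath)
exporter.outputFileType = .mov
exporter.exportAsynchronously {
    DispatchQueue.main.async {
        switch exporter.status {
        case .completed:
            print("export finished: \(destinationPath)")
        case .failed, .cancelled:
            print("export failed: \(String(describing: exporter.error))")
        default:
            break
        }
    }
}
```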

That's it: with these few APIs plus the interaction logic, you can build a complete editing feature. If anything in this article falls short, corrections are welcome!

相關(guān)文章

  • iOS中如何使用iconfont圖標(biāo)實(shí)例詳解

    iOS中如何使用iconfont圖標(biāo)實(shí)例詳解

    iconfont大家在開(kāi)發(fā)中應(yīng)該會(huì)經(jīng)常用到,下面這篇文章主要給大家介紹了在iOS中如何使用iconfont圖標(biāo)實(shí)例的相關(guān)資料,文中通過(guò)示例代碼介紹的非常詳細(xì),需要的朋友可以參考借鑒,下面隨著小編來(lái)一起學(xué)習(xí)學(xué)習(xí)吧
    2018-07-07
  • 詳解iOS平臺(tái)調(diào)用后臺(tái)接口的正確姿勢(shì)

    詳解iOS平臺(tái)調(diào)用后臺(tái)接口的正確姿勢(shì)

    這篇文章主要介紹了詳解iOS平臺(tái)調(diào)用后臺(tái)接口的正確姿勢(shì),文中通過(guò)示例代碼介紹的非常詳細(xì),對(duì)大家的學(xué)習(xí)或者工作具有一定的參考學(xué)習(xí)價(jià)值,需要的朋友們下面隨著小編來(lái)一起學(xué)習(xí)學(xué)習(xí)吧
    2019-10-10
  • IOS實(shí)現(xiàn)簡(jiǎn)易版的QQ下拉列表

    IOS實(shí)現(xiàn)簡(jiǎn)易版的QQ下拉列表

    在我們?nèi)粘i_(kāi)發(fā)中tableView是用的非常多的控件, 無(wú)論在新聞應(yīng)用, 視頻, 聊天應(yīng)用中都廣泛使用, 那么今天小編也分享一個(gè)用tableView實(shí)現(xiàn)的類似QQ界面的下拉列表.效果很簡(jiǎn)單,有需要的朋友們可以參考借鑒。
    2016-08-08
  • iOS推送增加右側(cè)顯示圖Service Extension

    iOS推送增加右側(cè)顯示圖Service Extension

    這篇文章主要為大家介紹了iOS推送增加右側(cè)顯示圖Service Extension,有需要的朋友可以借鑒參考下,希望能夠有所幫助,祝大家多多進(jìn)步,早日升職加薪
    2022-10-10
  • 深入解析iOS應(yīng)用開(kāi)發(fā)中對(duì)設(shè)計(jì)模式中的橋接模式的使用

    深入解析iOS應(yīng)用開(kāi)發(fā)中對(duì)設(shè)計(jì)模式中的橋接模式的使用

    這篇文章主要介紹了iOS應(yīng)用開(kāi)發(fā)中對(duì)設(shè)計(jì)模式中的橋接模式的使用,bridge橋接模式中主張把抽象部分與實(shí)現(xiàn)部分分離,需要的朋友可以參考下
    2016-03-03
  • React Native學(xué)習(xí)教程之自定義NavigationBar詳解

    React Native學(xué)習(xí)教程之自定義NavigationBar詳解

    這篇文章主要給大家介紹了關(guān)于React Native學(xué)習(xí)教程之自定義NavigationBar的相關(guān)資料,文中通過(guò)是示例代碼介紹的非常詳細(xì),對(duì)大家的學(xué)習(xí)或者工作具有一定的參考學(xué)習(xí)價(jià)值,需要的朋友們下面隨著小編來(lái)一起學(xué)習(xí)學(xué)習(xí)吧。
    2017-10-10
  • IOS開(kāi)發(fā)Objective-C?Runtime使用示例詳解

    IOS開(kāi)發(fā)Objective-C?Runtime使用示例詳解

    這篇文章主要為大家介紹了IOS開(kāi)發(fā)Objective-C?Runtime使用示例詳解,有需要的朋友可以借鑒參考下,希望能夠有所幫助,祝大家多多進(jìn)步,早日升職加薪
    2023-02-02
  • 詳解iOS開(kāi)發(fā)中UItableview控件的數(shù)據(jù)刷新功能的實(shí)現(xiàn)

    詳解iOS開(kāi)發(fā)中UItableview控件的數(shù)據(jù)刷新功能的實(shí)現(xiàn)

    這篇文章主要介紹了詳解iOS開(kāi)發(fā)中UItableview控件的數(shù)據(jù)刷新功能的實(shí)現(xiàn),代碼基于傳統(tǒng)的Objective-C,需要的朋友可以參考下
    2015-12-12
  • iOS中設(shè)置圓角的幾種方法示例

    iOS中設(shè)置圓角的幾種方法示例

    這篇文章主要介紹了iOS中設(shè)置圓角的三種方法,其中包括使用layer屬性、使用繪圖設(shè)置圓角以及通過(guò)另一張mask圖創(chuàng)建新圖,需要的朋友可以參考借鑒,下面來(lái)一起看看吧。
    2017-03-03
  • iOS開(kāi)發(fā)中如何實(shí)現(xiàn)一個(gè)平滑的顏色過(guò)渡

    iOS開(kāi)發(fā)中如何實(shí)現(xiàn)一個(gè)平滑的顏色過(guò)渡

    這篇文章給大家分享在ios開(kāi)發(fā)中如何從a顏色平滑的過(guò)渡到b顏色。代碼簡(jiǎn)單易懂,需要的朋友參考下吧
    2017-05-05

最新評(píng)論