Implementing the WeChat Moments Video Trimming Feature on iOS

2021-05-05 22:05 · suiling · iOS

This article shows how to implement the video trimming feature of WeChat Moments. WeChat is in extremely wide use and its features are powerful; if you have ever wondered how its video trimming works, the detailed walkthrough below covers it.

Preface

WeChat is now ubiquitous and its features keep getting more powerful. You may have noticed the clip trimming step when posting a video to Moments, or the similar video editing feature in Apple's own camera app (my guess is that WeChat's version imitates Apple's). It is a genuinely handy feature, and since I have been digging into audio and video lately, I went ahead and implemented it.

The feature looks simple, but the implementation had its share of pitfalls. Writing it up serves two purposes: it records the process, and walking through it again should make the code easier to follow.

Result

Let's first look at the result I achieved:

[Demo GIF: trimming a video with the WeChat-style thumbnail slider]

Implementation

Analyzing the Implementation

The whole feature breaks down into three parts:

  • Video playback

For this we simply encapsulate a standalone video player.

  • The slider view at the bottom

This is the more involved part; it consists of four pieces: the gray masks, the left and right drag handles, the two lines above and below between the handles, and the thumbnail container view.

  • Assembling the views and logic in the controller

Encapsulating the Video Player

The player is implemented with the three classes AVPlayer, AVPlayerLayer, and AVPlayerItem. Since all of its events are driven by KVO observation, block callbacks are added so callers can observe them from outside.

#import "FOFMoviePlayer.h"

@interface FOFMoviePlayer()
{
    AVPlayerLooper *_playerLooper;
    AVPlayerItem *_playItem;
    BOOL _loop;
}
@property(nonatomic, strong) NSURL *url;
@property(nonatomic, strong) AVPlayer *player;
@property(nonatomic, strong) AVPlayerLayer *playerLayer;
@property(nonatomic, strong) AVPlayerItem *playItem;
@property(nonatomic, assign) CMTime duration;
@end

@implementation FOFMoviePlayer

- (instancetype)initWithFrame:(CGRect)frame url:(NSURL *)url superLayer:(CALayer *)superLayer{
    self = [super init];
    if (self) {
        [self initPlayers:superLayer];
        _playerLayer.frame = frame;
        self.url = url;
    }
    return self;
}

- (instancetype)initWithFrame:(CGRect)frame url:(NSURL *)url superLayer:(CALayer *)superLayer loop:(BOOL)loop{
    self = [self initWithFrame:frame url:url superLayer:superLayer];
    if (self) {
        _loop = loop;
    }
    return self;
}

- (void)initPlayers:(CALayer *)superLayer{
    self.player = [[AVPlayer alloc] init];
    self.playerLayer = [AVPlayerLayer playerLayerWithPlayer:self.player];
    self.playerLayer.videoGravity = AVLayerVideoGravityResize;
    [superLayer addSublayer:self.playerLayer];
}

- (void)initLoopPlayers:(CALayer *)superLayer{
    self.player = [[AVQueuePlayer alloc] init];
    self.playerLayer = [AVPlayerLayer playerLayerWithPlayer:self.player];
    self.playerLayer.videoGravity = AVLayerVideoGravityResize;
    [superLayer addSublayer:self.playerLayer];
}

- (void)fof_play{
    [self.player play];
}

- (void)fof_pause{
    [self.player pause];
}

#pragma mark - Observe

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context{
    if ([keyPath isEqualToString:@"status"]) {
        AVPlayerItem *item = (AVPlayerItem *)object;
        AVPlayerItemStatus status = [[change objectForKey:@"new"] intValue]; // the new status value
        if (status == AVPlayerItemStatusReadyToPlay) {
            _duration = item.duration; // only readable in this state, not right after the AVPlayerItem is created
            NSLog(@"ready to play");
            if (self.blockStatusReadyPlay) {
                self.blockStatusReadyPlay(item);
            }
        } else if (status == AVPlayerItemStatusFailed) {
            if (self.blockStatusFailed) {
                self.blockStatusFailed();
            }
            NSLog(@"%@", item.error);
            NSLog(@"AVPlayerItemStatusFailed");
        } else {
            if (self.blockStatusUnknown) {
                self.blockStatusUnknown();
            }
            NSLog(@"%@", item.error);
            NSLog(@"AVPlayerItemStatusUnknown");
        }
    } else if ([keyPath isEqualToString:@"tracking"]) {
        NSInteger status = [change[@"new"] integerValue];
        if (self.blockTracking) {
            self.blockTracking(status);
        }
        if (status) { // currently dragging
            [self.player pause];
        } else { // drag ended
        }
    } else if ([keyPath isEqualToString:@"loadedTimeRanges"]) {
        NSArray *array = _playItem.loadedTimeRanges;
        CMTimeRange timeRange = [array.firstObject CMTimeRangeValue]; // the range buffered so far
        CGFloat startSeconds = CMTimeGetSeconds(timeRange.start);
        CGFloat durationSeconds = CMTimeGetSeconds(timeRange.duration);
        NSTimeInterval totalBuffer = startSeconds + durationSeconds; // total buffered length
        double progress = totalBuffer / CMTimeGetSeconds(_duration);
        if (self.blockLoadedTimeRanges) {
            self.blockLoadedTimeRanges(progress);
        }
        NSLog(@"buffered so far: %f", totalBuffer);
    } else if ([keyPath isEqualToString:@"playbackBufferEmpty"]) {
        NSLog(@"buffer empty, cannot play!");
    } else if ([keyPath isEqualToString:@"playbackLikelyToKeepUp"]) {
        if (self.blockPlaybackLikelyToKeepUp) {
            self.blockPlaybackLikelyToKeepUp([change[@"new"] boolValue]);
        }
    }
}

- (void)setUrl:(NSURL *)url{
    _url = url;
    [self.player replaceCurrentItemWithPlayerItem:self.playItem];
}

- (AVPlayerItem *)playItem{
    _playItem = [[AVPlayerItem alloc] initWithURL:_url];
    // Observe the item's status: ready to play, failed, or unknown
    [_playItem addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:nil];
    // Observe the buffered time ranges
    [_playItem addObserver:self forKeyPath:@"loadedTimeRanges" options:NSKeyValueObservingOptionNew context:nil];
    // Observe the case where the buffer runs dry and the video cannot play
    [_playItem addObserver:self forKeyPath:@"playbackBufferEmpty" options:NSKeyValueObservingOptionNew context:nil];
    // Observe whether enough is buffered to keep playing
    [_playItem addObserver:self forKeyPath:@"playbackLikelyToKeepUp" options:NSKeyValueObservingOptionNew context:nil];
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(private_playerMovieFinish) name:AVPlayerItemDidPlayToEndTimeNotification object:nil];
    return _playItem;
}

- (void)private_playerMovieFinish{
    NSLog(@"playback finished");
    if (self.blockPlayToEndTime) {
        self.blockPlayToEndTime();
    }
    if (_loop) { // a built-in loop-playback option
        [self.player pause];
        CMTime time = CMTimeMake(1, 1);
        __weak typeof(self) this = self;
        [self.player seekToTime:time completionHandler:^(BOOL finished) {
            [this.player play];
        }];
    }
}

- (void)dealloc{
    NSLog(@"----- dealloc -----");
}

@end
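
The header file is not shown in the article. As a rough sketch, the interface implied by the implementation above might look like this; the block property names are taken from their call sites in the .m, but the exact signatures in the author's repository may differ:

// FOFMoviePlayer.h: a minimal sketch inferred from the implementation above
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface FOFMoviePlayer : NSObject

// Exposed so the controller can seek directly (redeclared readwrite in the class extension)
@property(nonatomic, strong, readonly) AVPlayer *player;

@property(nonatomic, copy) void (^blockStatusReadyPlay)(AVPlayerItem *item);
@property(nonatomic, copy) void (^blockStatusFailed)(void);
@property(nonatomic, copy) void (^blockStatusUnknown)(void);
@property(nonatomic, copy) void (^blockTracking)(NSInteger status);
@property(nonatomic, copy) void (^blockLoadedTimeRanges)(double progress);
@property(nonatomic, copy) void (^blockPlaybackLikelyToKeepUp)(BOOL likely);
@property(nonatomic, copy) void (^blockPlayToEndTime)(void);

- (instancetype)initWithFrame:(CGRect)frame url:(NSURL *)url superLayer:(CALayer *)superLayer;
- (instancetype)initWithFrame:(CGRect)frame url:(NSURL *)url superLayer:(CALayer *)superLayer loop:(BOOL)loop;
- (void)fof_play;
- (void)fof_pause;

@end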

I won't dwell on the player itself here; I plan to write a separate article about video players.

The Slider View at the Bottom

The Gray Masks

The gray masks are straightforward; plain UIViews are enough:

self.leftMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
self.leftMaskView.backgroundColor = [UIColor grayColor];
self.leftMaskView.alpha = 0.8;
[self addSubview:self.leftMaskView];
self.rightMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
self.rightMaskView.backgroundColor = [UIColor grayColor];
self.rightMaskView.alpha = 0.8;

The Top and Bottom Lines Between the Handles

These two lines are wrapped in a dedicated Line view. At first a plain UIView seemed sufficient, but it turned out the handle movement and the line movement did not stay in sync: the line lagged behind. Redrawing the endpoints fixes that.

@implementation Line

- (void)setBeginPoint:(CGPoint)beginPoint{
    _beginPoint = beginPoint;
    [self setNeedsDisplay];
}

- (void)setEndPoint:(CGPoint)endPoint{
    _endPoint = endPoint;
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 3);
    CGContextSetStrokeColorWithColor(context, [UIColor colorWithWhite:0.9 alpha:1].CGColor);
    CGContextMoveToPoint(context, self.beginPoint.x, self.beginPoint.y);
    CGContextAddLineToPoint(context, self.endPoint.x, self.endPoint.y);
    CGContextStrokePath(context);
}

@end
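
The Line interface is not included in the article; judging from the setters above, it presumably just exposes the two endpoints. A minimal sketch:

// Line.h: a sketch inferred from the implementation; not from the original source
#import <UIKit/UIKit.h>

@interface Line : UIView
@property(nonatomic, assign) CGPoint beginPoint; // left endpoint, in the line's own coordinates
@property(nonatomic, assign) CGPoint endPoint;   // right endpoint
@end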

The Thumbnail Container View

A VideoPieces view encapsulates the logic of the handles, lines, and masks, and displays the thumbnails. Since there are only 10 thumbnails, a simple for loop adds 10 UIImageViews (see the sketch after the code below).

@interface VideoPieces()
{
    CGPoint _beginPoint;
}
@property(nonatomic, strong) Haft *leftHaft;
@property(nonatomic, strong) Haft *rightHaft;
@property(nonatomic, strong) Line *topLine;
@property(nonatomic, strong) Line *bottomLine;
@property(nonatomic, strong) UIView *leftMaskView;
@property(nonatomic, strong) UIView *rightMaskView;
@end

@implementation VideoPieces

- (instancetype)initWithFrame:(CGRect)frame{
    self = [super initWithFrame:frame];
    if (self) {
        [self initSubviews:frame];
    }
    return self;
}

- (void)initSubviews:(CGRect)frame{
    CGFloat height = CGRectGetHeight(frame);
    CGFloat width = CGRectGetWidth(frame);
    CGFloat minGap = 30;
    CGFloat widthHaft = 10;
    CGFloat heightLine = 3;
    _leftHaft = [[Haft alloc] initWithFrame:CGRectMake(0, 0, widthHaft, height)];
    _leftHaft.alpha = 0.8;
    _leftHaft.backgroundColor = [UIColor colorWithWhite:0.9 alpha:1];
    _leftHaft.rightEdgeInset = 20;
    _leftHaft.leftEdgeInset = 5;
    __weak typeof(self) this = self;
    [_leftHaft setBlockMove:^(CGPoint point) {
        // The left handle must keep a minimum gap to the right handle
        CGFloat maxX = this.rightHaft.frame.origin.x - minGap;
        if (point.x <= maxX) {
            this.topLine.beginPoint = CGPointMake(point.x, heightLine/2.0);
            this.bottomLine.beginPoint = CGPointMake(point.x, heightLine/2.0);
            this.leftHaft.frame = CGRectMake(point.x, 0, widthHaft, height);
            this.leftMaskView.frame = CGRectMake(0, 0, point.x, height);
            if (this.blockSeekOffLeft) {
                this.blockSeekOffLeft(point.x);
            }
        }
    }];
    [_leftHaft setBlockMoveEnd:^{
        if (this.blockMoveEnd) {
            this.blockMoveEnd();
        }
    }];
    _rightHaft = [[Haft alloc] initWithFrame:CGRectMake(width - widthHaft, 0, widthHaft, height)];
    _rightHaft.alpha = 0.8;
    _rightHaft.backgroundColor = [UIColor colorWithWhite:0.9 alpha:1];
    _rightHaft.leftEdgeInset = 20;
    _rightHaft.rightEdgeInset = 5;
    [_rightHaft setBlockMove:^(CGPoint point) {
        // The right handle must keep a minimum gap to the left handle
        CGFloat minX = this.leftHaft.frame.origin.x + widthHaft + minGap;
        if (point.x >= minX) {
            this.topLine.endPoint = CGPointMake(point.x - widthHaft, heightLine/2.0);
            this.bottomLine.endPoint = CGPointMake(point.x - widthHaft, heightLine/2.0);
            this.rightHaft.frame = CGRectMake(point.x, 0, widthHaft, height);
            this.rightMaskView.frame = CGRectMake(point.x + widthHaft, 0, width - point.x - widthHaft, height);
            if (this.blockSeekOffRight) {
                this.blockSeekOffRight(point.x);
            }
        }
    }];
    [_rightHaft setBlockMoveEnd:^{
        if (this.blockMoveEnd) {
            this.blockMoveEnd();
        }
    }];
    _topLine = [[Line alloc] init];
    _topLine.alpha = 0.8;
    _topLine.frame = CGRectMake(widthHaft, 0, width - 2*widthHaft, heightLine);
    _topLine.beginPoint = CGPointMake(0, heightLine/2.0);
    _topLine.endPoint = CGPointMake(CGRectGetWidth(_topLine.bounds), heightLine/2.0);
    _topLine.backgroundColor = [UIColor clearColor];
    [self addSubview:_topLine];
    _bottomLine = [[Line alloc] init];
    _bottomLine.alpha = 0.8;
    _bottomLine.frame = CGRectMake(widthHaft, height - heightLine, width - 2*widthHaft, heightLine);
    _bottomLine.beginPoint = CGPointMake(0, heightLine/2.0);
    _bottomLine.endPoint = CGPointMake(CGRectGetWidth(_bottomLine.bounds), heightLine/2.0);
    _bottomLine.backgroundColor = [UIColor clearColor];
    [self addSubview:_bottomLine];
    [self addSubview:_leftHaft];
    [self addSubview:_rightHaft];
    self.leftMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
    self.leftMaskView.backgroundColor = [UIColor grayColor];
    self.leftMaskView.alpha = 0.8;
    [self addSubview:self.leftMaskView];
    self.rightMaskView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 0, height)];
    self.rightMaskView.backgroundColor = [UIColor grayColor];
    self.rightMaskView.alpha = 0.8;
    [self addSubview:self.rightMaskView];
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
    UITouch *touch = touches.anyObject;
    _beginPoint = [touch locationInView:self];
}

@end
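
The loop that actually adds the 10 UIImageViews is not included in the excerpt above. A minimal sketch of how it could look inside VideoPieces, where setImages: is a hypothetical helper rather than a method from the original source:

// Hypothetical helper: lay the thumbnails out edge to edge between the two handles.
- (void)setImages:(NSArray<UIImage *> *)images{
    CGFloat widthHaft = 10;
    CGFloat w = (CGRectGetWidth(self.bounds) - 2*widthHaft) / images.count;
    for (NSInteger i = 0; i < images.count; i++) {
        UIImageView *iv = [[UIImageView alloc] initWithFrame:CGRectMake(widthHaft + i*w, 0, w, CGRectGetHeight(self.bounds))];
        iv.image = images[i];
        iv.contentMode = UIViewContentModeScaleAspectFill;
        iv.clipsToBounds = YES;
        [self insertSubview:iv atIndex:0]; // keep thumbnails below the handles, lines and masks
    }
}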

Implementing the Handles

The handles got one optimization for drag sensitivity. Initially dragging was not very responsive: the finger moved, but the handle often stayed put.

The fix is to enlarge the touch-receiving area by overriding - (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event:

@implementation Haft

- (instancetype)initWithFrame:(CGRect)frame{
    self = [super initWithFrame:frame];
    if (self) {
        self.userInteractionEnabled = YES;
    }
    return self;
}

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event{
    // Expand the hit-test rect by the edge insets so the handle is easier to grab
    CGRect rect = CGRectMake(self.bounds.origin.x - self.leftEdgeInset,
                             self.bounds.origin.y - self.topEdgeInset,
                             CGRectGetWidth(self.bounds) + self.leftEdgeInset + self.rightEdgeInset,
                             CGRectGetHeight(self.bounds) + self.bottomEdgeInset + self.topEdgeInset);
    if (CGRectContainsPoint(rect, point)) {
        return YES;
    }
    return NO;
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
    NSLog(@"touches began");
}

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
    NSLog(@"move");
    UITouch *touch = touches.anyObject;
    CGPoint point = [touch locationInView:self.superview];
    CGFloat maxX = CGRectGetWidth(self.superview.bounds) - CGRectGetWidth(self.bounds);
    if (point.x > maxX) {
        point.x = maxX;
    }
    if (point.x >= 0 && point.x <= maxX && self.blockMove) {
        self.blockMove(point);
    }
}

- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event{
    if (self.blockMoveEnd) {
        self.blockMoveEnd();
    }
}

- (void)drawRect:(CGRect)rect {
    // Draw the two short vertical grip lines on the handle
    CGFloat width = CGRectGetWidth(self.bounds);
    CGFloat height = CGRectGetHeight(self.bounds);
    CGFloat lineWidth = 1.5;
    CGFloat lineHeight = 12;
    CGFloat gap = (width - lineWidth*2)/3.0;
    CGFloat lineY = (height - lineHeight)/2.0;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, lineWidth);
    CGContextSetStrokeColorWithColor(context, [[UIColor grayColor] colorWithAlphaComponent:0.8].CGColor);
    CGContextMoveToPoint(context, gap + lineWidth/2, lineY);
    CGContextAddLineToPoint(context, gap + lineWidth/2, lineY + lineHeight);
    CGContextStrokePath(context);
    CGContextSetLineWidth(context, lineWidth);
    CGContextSetStrokeColorWithColor(context, [[UIColor grayColor] colorWithAlphaComponent:0.8].CGColor);
    CGContextMoveToPoint(context, gap*2 + lineWidth + lineWidth/2, lineY);
    CGContextAddLineToPoint(context, gap*2 + lineWidth + lineWidth/2, lineY + lineHeight);
    CGContextStrokePath(context);
}

@end
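
The Haft header is likewise not shown. A sketch of the interface inferred from usage (the edge insets enlarge the hit area; the blocks report drag progress and drag end):

// Haft.h: a sketch inferred from the call sites; not from the original source
#import <UIKit/UIKit.h>

@interface Haft : UIView
@property(nonatomic, assign) CGFloat leftEdgeInset;
@property(nonatomic, assign) CGFloat rightEdgeInset;
@property(nonatomic, assign) CGFloat topEdgeInset;
@property(nonatomic, assign) CGFloat bottomEdgeInset;
@property(nonatomic, copy) void (^blockMove)(CGPoint point);
@property(nonatomic, copy) void (^blockMoveEnd)(void);
@end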

Assembling the Views and Logic in the Controller

This part of the logic is the most important and also the most complex.

Getting the 10 Thumbnails

- (NSArray *)getVideoThumbnail:(NSString *)path count:(NSInteger)count splitCompleteBlock:(void(^)(BOOL success, NSMutableArray *splitImgs))splitCompleteBlock {
    AVAsset *asset = [AVAsset assetWithURL:[NSURL fileURLWithPath:path]];
    NSMutableArray *arrayImages = [NSMutableArray array];
    [asset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
        AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
//        generator.maximumSize = CGSizeMake(480, 136); // with CGSizeMake(480, 136) the generated image comes back as {240, 136}: scaled in proportion to the actual size
        generator.appliesPreferredTrackTransform = YES; // ensures the image orientation is correct, e.g. for videos recorded with the device rotated
        // Without the two tolerances below the requested times are noticeably off; with them the error is tiny.
        generator.requestedTimeToleranceAfter = kCMTimeZero;
        generator.requestedTimeToleranceBefore = kCMTimeZero;
        Float64 seconds = CMTimeGetSeconds(asset.duration);
        NSMutableArray *array = [NSMutableArray array];
        for (int i = 0; i < count; i++) {
            CMTime time = CMTimeMakeWithSeconds(i * (seconds / 10.0), 1); // the time position to capture (the article always passes count == 10)
            [array addObject:[NSValue valueWithCMTime:time]];
        }
        __block int i = 0;
        [generator generateCGImagesAsynchronouslyForTimes:array completionHandler:^(CMTime requestedTime, CGImageRef _Nullable imageRef, CMTime actualTime, AVAssetImageGeneratorResult result, NSError * _Nullable error) {
            i++;
            if (result == AVAssetImageGeneratorSucceeded) {
                UIImage *image = [UIImage imageWithCGImage:imageRef];
                [arrayImages addObject:image];
            } else {
                NSLog(@"failed to generate thumbnail!");
            }
            if (i == count) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    splitCompleteBlock(YES, arrayImages);
                });
            }
        }];
    }];
    // NOTE: generation is asynchronous, so this array is still empty here; the real result arrives via splitCompleteBlock
    return arrayImages;
}

The 10 images are easy to obtain, but note one thing: dispatch the completion callback back onto the main queue asynchronously, otherwise the thumbnails show up with a noticeable delay.
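
Wiring it up might look like the following, where videoPath and the setImages: helper from the earlier sketch are assumptions rather than code from the original:

// Hypothetical call site: populate the trim strip once the thumbnails arrive.
[self getVideoThumbnail:videoPath count:10 splitCompleteBlock:^(BOOL success, NSMutableArray *splitImgs) {
    if (success) {
        [self.videoPieces setImages:splitImgs];
    }
}];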

Observing the Handle Drag Events

[_videoPieces setBlockSeekOffLeft:^(CGFloat offX) {
    this.seeking = YES;
    [this.moviePlayer fof_pause];
    this.lastStartSeconds = this.totalSeconds * offX / CGRectGetWidth(this.videoPieces.bounds);
    [this.moviePlayer.player seekToTime:CMTimeMakeWithSeconds(this.lastStartSeconds, 1) toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
}];
[_videoPieces setBlockSeekOffRight:^(CGFloat offX) {
    this.seeking = YES;
    [this.moviePlayer fof_pause];
    this.lastEndSeconds = this.totalSeconds * offX / CGRectGetWidth(this.videoPieces.bounds);
    [this.moviePlayer.player seekToTime:CMTimeMakeWithSeconds(this.lastEndSeconds, 1) toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
}];

Here the drag events of the two handles are converted from a horizontal offset into a time, which sets the playback start and end points. For example, with a 30 s video and a 300 pt strip, dragging the left handle to x = 100 pt seeks to 10 s.

Loop Playback

self.timeObserverToken = [self.moviePlayer.player addPeriodicTimeObserverForInterval:CMTimeMakeWithSeconds(0.5, NSEC_PER_SEC) queue:dispatch_get_main_queue() usingBlock:^(CMTime time) {
    if (!this.seeking) {
        if (fabs(CMTimeGetSeconds(time) - this.lastEndSeconds) <= 0.02) {
            [this.moviePlayer fof_pause];
            [this private_replayAtBeginTime:this.lastStartSeconds];
        }
    }
}];
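
The private_replayAtBeginTime: helper is not shown in the article; presumably it just seeks back to the left handle's time and resumes playback. A minimal sketch under that assumption:

// Assumption: seek precisely back to the selection start, then resume playing.
- (void)private_replayAtBeginTime:(CGFloat)beginSeconds{
    __weak typeof(self) this = self;
    [self.moviePlayer.player seekToTime:CMTimeMakeWithSeconds(beginSeconds, 1)
                        toleranceBefore:kCMTimeZero
                         toleranceAfter:kCMTimeZero
                      completionHandler:^(BOOL finished) {
        [this.moviePlayer fof_play];
    }];
}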

There are two points to note here:

1. The periodic time observer added with addPeriodicTimeObserverForInterval must be removed, otherwise it leaks:

- (void)dealloc{
    [self.moviePlayer.player removeTimeObserver:self.timeObserverToken];
}

2. The playback time is observed to check whether it has reached the right handle's time; if so, playback restarts from the left handle's time. I puzzled over this for a long time: how do you trim while playing? I almost went down the wrong path of actually cutting the video on every drag. In fact no cutting is needed during preview: controlling the start and end times is enough, and the actual cut happens only once, at the very end.
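
The article does not include that final export step. One common way to do the single cut is AVAssetExportSession with the selected time range; here is a sketch under that assumption, where videoURL and the output path are placeholders and lastStartSeconds/lastEndSeconds follow the names used above:

AVAsset *asset = [AVAsset assetWithURL:videoURL];
AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:asset
                                                                 presetName:AVAssetExportPresetHighestQuality];
CMTime start = CMTimeMakeWithSeconds(self.lastStartSeconds, 600);
CMTime duration = CMTimeMakeWithSeconds(self.lastEndSeconds - self.lastStartSeconds, 600);
session.timeRange = CMTimeRangeMake(start, duration); // export only the selected segment
session.outputFileType = AVFileTypeMPEG4;
session.outputURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"trimmed.mp4"]];
[session exportAsynchronouslyWithCompletionHandler:^{
    if (session.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"trimmed clip written to %@", session.outputURL);
    }
}];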

Summary

Implementing this WeChat-style short-video editor turned up quite a few small problems, but careful digging resolved them all, and getting it working felt like a weight off my shoulders.

Source Code

Source code on GitHub


Original article: http://www.cocoachina.com/ios/20180718/24212.html
