author     Alecks Gates <agates@mail.agates.io>  2023-05-22 09:00:05 -0500
committer  GitHub <noreply@github.com>  2023-05-22 16:00:05 +0200
commit     cb0eda5602a21d1626a7face32de6153ed07b5f9 (patch)
tree       d6a7a4e31c7267c130871ac8e3beb42994271c20 /server/tests/feeds
parent     3f0ceab06e5320f62f593c49daa30d963dbc36f9 (diff)
Add Podcast RSS feeds (#5487)
* Initial test implementation of Podcast RSS
This is a pretty simple implementation to add support for The Podcast Namespace in RSS -- instead of affecting the existing RSS implementation, this adds a new UI option.
I attempted to retain compatibility with the rest of the RSS feed implementation as much as possible and have created a temporary fork of the "pfeed" library to support this effort.
* Update to pfeed-podcast 1.2.2
* Add correct feed image to RSS channel
* Prefer HLS videos for podcast RSS
Remove video/stream titles, add optional height attribute to podcast RSS
* Prefix podcast RSS images with root server URL
* Add optional video query support to include captions
* Add transcripts & person images to podcast RSS feed
* Prefer webseed/webtorrent files over HLS fragmented mp4s
* Experimentally adding podcast fields to basic config page
* Add validation for new basic config fields
* Don't include "content" in podcast feed, use full description for "description"
* Add medium/socialInteract to podcast RSS feeds. Use HTML for description
* Change base production image to bullseye, install prosody in image
* Add liveItem and trackers to Podcast RSS feeds
Remove height from alternateEnclosure, replaced with title.
* Clear Podcast RSS feed cache when live streams start/end
* Upgrade to Node 16
* Refactor clearCacheRoute to use ApiCache
* Remove unnecessary type hint
* Update dockerfile to node 16, install python-is-python2
* Use new file paths for captions/playlists
* Fix legacy videos in RSS after migration to object storage
* Improve method of identifying non-fragmented mp4s in podcast RSS feeds
* Don't include fragmented MP4s in podcast RSS feeds
* Add experimental support for podcast:categories on the podcast RSS item
* Fix undefined category when no videos exist
Allows for empty feeds to exist (important for feeds that might only go live)
* Add support for podcast:locked -- user has to opt in to show their email
* Use comma for podcast:categories delimiter
* Make cache clearing async
* Fix merge, temporarily test with pfeed-podcast
* Syntax changes
* Add EXT_MIMETYPE constants for captions
* Update & fix tests, fix enclosure mimetypes, remove admin email
* Add test for podcast:socialInteract
* Add filters hooks for podcast customTags
* Remove showdown, updated to pfeed-podcast 6.1.2
* Add 'action:api.live-video.state.updated' hook
* Avoid assigning undefined category to podcast feeds
* Remove nvmrc
* Remove comment
* Remove unused podcast config
* Remove more unused podcast config
* Fix MChannelAccountDefault type hint missed in merge
* Remove extra line
* Re-add newline in config
* Fix lint errors for isEmailPublic
* Fix thumbnails in podcast feeds
* Requested changes based on review
* Provide podcast rss 2.0 only on video channels
* Misc cleanup for a less messy PR
* Lint fixes
* Remove pfeed-podcast
* Add peertube version to new hooks
* Don't use query include, remove TODO
* Remove film medium hack
* Clear podcast rss cache before video/channel update hooks
* Clear podcast rss cache before video uploaded/deleted hooks
* Refactor podcast feed cache clearing
* Set correct person name from video channel
* Styling
* Fix tests
---------
Co-authored-by: Chocobozzz <me@florianbigard.com>
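
The alternateEnclosure assertions added by this commit encode one invariant worth calling out: the default `<podcast:alternateEnclosure>` must carry a first `podcast:source` whose URI matches the plain RSS `<enclosure>` URL. Here is a minimal TypeScript sketch of that check, using the `@_` attribute-key convention that fast-xml-parser produces in the tests below; the interfaces, the `matchesEnclosure` helper, and the sample URLs are illustrative assumptions, not PeerTube code:

```typescript
// Shape of a parsed <podcast:alternateEnclosure>, using the "@_" attribute
// prefix produced by fast-xml-parser when ignoreAttributes is false.
interface PodcastSource { '@_uri': string; '@_contentType'?: string }

interface AlternateEnclosure {
  '@_type': string
  '@_length'?: number
  '@_lang': string
  '@_title': string
  '@_default': boolean
  'podcast:source': PodcastSource[]
}

// Invariant asserted by the tests: the default alternateEnclosure's first
// source must point at the same URL as the plain RSS <enclosure>.
function matchesEnclosure (enclosureUrl: string, alt: AlternateEnclosure): boolean {
  if (!alt['@_default']) return false
  const sources = alt['podcast:source']
  return sources.length > 0 && sources[0]['@_uri'] === enclosureUrl
}

// Sample values mirror the web-video expectations in the tests; the URLs are
// made up for illustration.
const sample: AlternateEnclosure = {
  '@_type': 'video/webm',
  '@_length': 218910,
  '@_lang': 'zh',
  '@_title': '720p',
  '@_default': true,
  'podcast:source': [
    { '@_uri': 'https://example.org/static/videos/abc-720.webm' },
    { '@_uri': 'https://example.org/static/torrents/abc-720.torrent', '@_contentType': 'application/x-bittorrent' }
  ]
}

console.log(matchesEnclosure('https://example.org/static/videos/abc-720.webm', sample)) // → true
```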
Diffstat (limited to 'server/tests/feeds')
-rw-r--r--  server/tests/feeds/feeds.ts | 361
1 file changed, 249 insertions(+), 112 deletions(-)
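
The "filters hooks for podcast customTags" entry in the commit message is exercised by a test plugin that injects tags such as `<fooTag bar="baz">42</fooTag>` into the channel and `<fizzTag bar="baz">21</fizzTag>` into items. As a rough, dependency-free sketch of how such a tag could be serialized, the `CustomTag` shape and `renderCustomTag` helper below are assumptions for illustration, not PeerTube's actual hook API:

```typescript
// Hypothetical description of a custom feed tag, mirroring what the test
// plugin emits (e.g. <fooTag bar="baz">42</fooTag>). Not a PeerTube type.
interface CustomTag {
  name: string
  attributes?: Record<string, string>
  value: string | number
}

// Serialize one custom tag to its XML form.
function renderCustomTag (tag: CustomTag): string {
  const attrs = Object.entries(tag.attributes ?? {})
    .map(([ key, value ]) => ` ${key}="${value}"`)
    .join('')

  return `<${tag.name}${attrs}>${tag.value}</${tag.name}>`
}

console.log(renderCustomTag({ name: 'fooTag', attributes: { bar: 'baz' }, value: 42 }))
// → <fooTag bar="baz">42</fooTag>
```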
diff --git a/server/tests/feeds/feeds.ts b/server/tests/feeds/feeds.ts
index ecd1badc1..57eefff6d 100644
--- a/server/tests/feeds/feeds.ts
+++ b/server/tests/feeds/feeds.ts
@@ -11,6 +11,7 @@ import {
   makeGetRequest,
   makeRawRequest,
   PeerTubeServer,
+  PluginsCommand,
   setAccessTokensToServers,
   setDefaultChannelAvatar,
   stopFfmpeg,
@@ -26,12 +27,15 @@ const expect = chai.expect
 describe('Test syndication feeds', () => {
   let servers: PeerTubeServer[] = []
   let serverHLSOnly: PeerTubeServer
+
   let userAccessToken: string
   let rootAccountId: number
   let rootChannelId: number
+
   let userAccountId: number
   let userChannelId: number
   let userFeedToken: string
+
   let liveId: string

   before(async function () {
@@ -93,7 +97,11 @@ describe('Test syndication feeds', () => {
       await servers[0].comments.createThread({ videoId: id, text: 'comment on unlisted video' })
     }

-    await waitJobs(servers)
+    await serverHLSOnly.videos.upload({ attributes: { name: 'hls only video' } })
+
+    await waitJobs([ ...servers, serverHLSOnly ])
+
+    await servers[0].plugins.install({ path: PluginsCommand.getPluginTestPath('-podcast-custom-tags') })
   })

   describe('All feed', function () {
@@ -108,6 +116,11 @@ describe('Test syndication feeds', () => {
       }
     })

+    it('Should be well formed XML (covers Podcast endpoint)', async function () {
+      const podcast = await servers[0].feed.getPodcastXML({ ignoreCache: true, channelId: rootChannelId })
+      expect(podcast).xml.to.be.valid()
+    })
+
     it('Should be well formed JSON (covers JSON feed 1.0 endpoint)', async function () {
       for (const feed of [ 'video-comments' as 'video-comments', 'videos' as 'videos' ]) {
         const jsonText = await servers[0].feed.getJSON({ feed, ignoreCache: true })
@@ -153,168 +166,290 @@ describe('Test syndication feeds', () => {

   describe('Videos feed', function () {

-    it('Should contain a valid enclosure (covers RSS 2.0 endpoint)', async function () {
-      for (const server of servers) {
-        const rss = await server.feed.getXML({ feed: 'videos', ignoreCache: true })
+    describe('Podcast feed', function () {
+
+      it('Should contain a valid podcast:alternateEnclosure', async function () {
+        // Since podcast feeds should only work on the server they originate on,
+        // only test the first server where the videos reside
+        const rss = await servers[0].feed.getPodcastXML({ ignoreCache: false, channelId: rootChannelId })
         expect(XMLValidator.validate(rss)).to.be.true

         const parser = new XMLParser({ parseAttributeValue: true, ignoreAttributes: false })
         const xmlDoc = parser.parse(rss)

-        const enclosure = xmlDoc.rss.channel.item[0].enclosure
+        const enclosure = xmlDoc.rss.channel.item.enclosure
         expect(enclosure).to.exist
+        const alternateEnclosure = xmlDoc.rss.channel.item['podcast:alternateEnclosure']
+        expect(alternateEnclosure).to.exist
+
+        expect(alternateEnclosure['@_type']).to.equal('video/webm')
+        expect(alternateEnclosure['@_length']).to.equal(218910)
+        expect(alternateEnclosure['@_lang']).to.equal('zh')
+        expect(alternateEnclosure['@_title']).to.equal('720p')
+        expect(alternateEnclosure['@_default']).to.equal(true)
+
+        expect(alternateEnclosure['podcast:source'][0]['@_uri']).to.contain('-720.webm')
+        expect(alternateEnclosure['podcast:source'][0]['@_uri']).to.equal(enclosure['@_url'])
+        expect(alternateEnclosure['podcast:source'][1]['@_uri']).to.contain('-720.torrent')
+        expect(alternateEnclosure['podcast:source'][1]['@_contentType']).to.equal('application/x-bittorrent')
+        expect(alternateEnclosure['podcast:source'][2]['@_uri']).to.contain('magnet:?')
+      })

-        expect(enclosure['@_type']).to.equal('video/webm')
-        expect(enclosure['@_length']).to.equal(218910)
-        expect(enclosure['@_url']).to.contain('-720.webm')
-      }
-    })
+      it('Should contain a valid podcast:alternateEnclosure with HLS only', async function () {
+        const rss = await serverHLSOnly.feed.getPodcastXML({ ignoreCache: false, channelId: rootChannelId })
+        expect(XMLValidator.validate(rss)).to.be.true

-    it('Should contain a valid \'attachments\' object (covers JSON feed 1.0 endpoint)', async function () {
-      for (const server of servers) {
-        const json = await server.feed.getJSON({ feed: 'videos', ignoreCache: true })
-        const jsonObj = JSON.parse(json)
-        expect(jsonObj.items.length).to.be.equal(2)
-        expect(jsonObj.items[0].attachments).to.exist
-        expect(jsonObj.items[0].attachments.length).to.be.eq(1)
-        expect(jsonObj.items[0].attachments[0].mime_type).to.be.eq('application/x-bittorrent')
-        expect(jsonObj.items[0].attachments[0].size_in_bytes).to.be.eq(218910)
-        expect(jsonObj.items[0].attachments[0].url).to.contain('720.torrent')
-      }
+        const parser = new XMLParser({ parseAttributeValue: true, ignoreAttributes: false })
+        const xmlDoc = parser.parse(rss)
+
+        const enclosure = xmlDoc.rss.channel.item.enclosure
+        const alternateEnclosure = xmlDoc.rss.channel.item['podcast:alternateEnclosure']
+        expect(alternateEnclosure).to.exist
+
+        expect(alternateEnclosure['@_type']).to.equal('application/x-mpegURL')
+        expect(alternateEnclosure['@_lang']).to.equal('zh')
+        expect(alternateEnclosure['@_title']).to.equal('HLS')
+        expect(alternateEnclosure['@_default']).to.equal(true)
+
+        expect(alternateEnclosure['podcast:source']['@_uri']).to.contain('-master.m3u8')
+        expect(alternateEnclosure['podcast:source']['@_uri']).to.equal(enclosure['@_url'])
+      })
+
+      it('Should contain a valid podcast:socialInteract', async function () {
+        const rss = await servers[0].feed.getPodcastXML({ ignoreCache: false, channelId: rootChannelId })
+        expect(XMLValidator.validate(rss)).to.be.true
+
+        const parser = new XMLParser({ parseAttributeValue: true, ignoreAttributes: false })
+        const xmlDoc = parser.parse(rss)
+
+        const item = xmlDoc.rss.channel.item
+        const socialInteract = item['podcast:socialInteract']
+        expect(socialInteract).to.exist
+        expect(socialInteract['@_protocol']).to.equal('activitypub')
+        expect(socialInteract['@_uri']).to.exist
+        expect(socialInteract['@_accountUrl']).to.exist
+      })
+
+      it('Should contain a valid support custom tags for plugins', async function () {
+        const rss = await servers[0].feed.getPodcastXML({ ignoreCache: false, channelId: userChannelId })
+        expect(XMLValidator.validate(rss)).to.be.true
+
+        const parser = new XMLParser({ parseAttributeValue: true, ignoreAttributes: false })
+        const xmlDoc = parser.parse(rss)
+
+        const fooTag = xmlDoc.rss.channel.fooTag
+        expect(fooTag).to.exist
+        expect(fooTag['@_bar']).to.equal('baz')
+        expect(fooTag['#text']).to.equal(42)
+
+        const bizzBuzzItem = xmlDoc.rss.channel['biz:buzzItem']
+        expect(bizzBuzzItem).to.exist
+
+        let nestedTag = bizzBuzzItem.nestedTag
+        expect(nestedTag).to.exist
+        expect(nestedTag).to.equal('example nested tag')
+
+        const item = xmlDoc.rss.channel.item
+        const fizzTag = item.fizzTag
+        expect(fizzTag).to.exist
+        expect(fizzTag['@_bar']).to.equal('baz')
+        expect(fizzTag['#text']).to.equal(21)
+
+        const bizzBuzz = item['biz:buzz']
+        expect(bizzBuzz).to.exist
+
+        nestedTag = bizzBuzz.nestedTag
+        expect(nestedTag).to.exist
+        expect(nestedTag).to.equal('example nested tag')
+      })
+
+      it('Should contain a valid podcast:liveItem for live streams', async function () {
+        this.timeout(120000)
+
+        const { uuid } = await servers[0].live.create({
+          fields: {
+            name: 'live-0',
+            privacy: VideoPrivacy.PUBLIC,
+            channelId: rootChannelId,
+            permanentLive: false
+          }
+        })
+        liveId = uuid
+
+        const ffmpeg = await servers[0].live.sendRTMPStreamInVideo({ videoId: liveId, copyCodecs: true, fixtureName: 'video_short.mp4' })
+        await servers[0].live.waitUntilPublished({ videoId: liveId })
+
+        const rss = await servers[0].feed.getPodcastXML({ ignoreCache: false, channelId: rootChannelId })
+        expect(XMLValidator.validate(rss)).to.be.true
+
+        const parser = new XMLParser({ parseAttributeValue: true, ignoreAttributes: false })
+        const xmlDoc = parser.parse(rss)
+        const liveItem = xmlDoc.rss.channel['podcast:liveItem']
+        expect(liveItem.title).to.equal('live-0')
+        expect(liveItem['@_status']).to.equal('live')
+
+        const enclosure = liveItem.enclosure
+        const alternateEnclosure = liveItem['podcast:alternateEnclosure']
+        expect(alternateEnclosure).to.exist
+        expect(alternateEnclosure['@_type']).to.equal('application/x-mpegURL')
+        expect(alternateEnclosure['@_title']).to.equal('HLS live stream')
+        expect(alternateEnclosure['@_default']).to.equal(true)
+
+        expect(alternateEnclosure['podcast:source']['@_uri']).to.contain('/master.m3u8')
+        expect(alternateEnclosure['podcast:source']['@_uri']).to.equal(enclosure['@_url'])
+
+        await stopFfmpeg(ffmpeg)
+
+        await servers[0].live.waitUntilEnded({ videoId: liveId })
+
+        await waitJobs(servers)
+      })
     })

-    it('Should filter by account', async function () {
-      {
-        const json = await servers[0].feed.getJSON({ feed: 'videos', query: { accountId: rootAccountId }, ignoreCache: true })
-        const jsonObj = JSON.parse(json)
-        expect(jsonObj.items.length).to.be.equal(1)
-        expect(jsonObj.items[0].title).to.equal('my super name for server 1')
-        expect(jsonObj.items[0].author.name).to.equal('Main root channel')
-      }
+    describe('JSON feed', function () {

-      {
-        const json = await servers[0].feed.getJSON({ feed: 'videos', query: { accountId: userAccountId }, ignoreCache: true })
-        const jsonObj = JSON.parse(json)
-        expect(jsonObj.items.length).to.be.equal(1)
-        expect(jsonObj.items[0].title).to.equal('user video')
-        expect(jsonObj.items[0].author.name).to.equal('Main john channel')
-      }
+      it('Should contain a valid \'attachments\' object', async function () {
+        for (const server of servers) {
+          const json = await server.feed.getJSON({ feed: 'videos', ignoreCache: true })
+          const jsonObj = JSON.parse(json)
+          expect(jsonObj.items.length).to.be.equal(2)
+          expect(jsonObj.items[0].attachments).to.exist
+          expect(jsonObj.items[0].attachments.length).to.be.eq(1)
+          expect(jsonObj.items[0].attachments[0].mime_type).to.be.eq('application/x-bittorrent')
+          expect(jsonObj.items[0].attachments[0].size_in_bytes).to.be.eq(218910)
+          expect(jsonObj.items[0].attachments[0].url).to.contain('720.torrent')
+        }
+      })

-      for (const server of servers) {
+      it('Should filter by account', async function () {
         {
-          const json = await server.feed.getJSON({ feed: 'videos', query: { accountName: 'root@' + servers[0].host }, ignoreCache: true })
+          const json = await servers[0].feed.getJSON({ feed: 'videos', query: { accountId: rootAccountId }, ignoreCache: true })
           const jsonObj = JSON.parse(json)
           expect(jsonObj.items.length).to.be.equal(1)
           expect(jsonObj.items[0].title).to.equal('my super name for server 1')
+          expect(jsonObj.items[0].author.name).to.equal('Main root channel')
         }

         {
-          const json = await server.feed.getJSON({ feed: 'videos', query: { accountName: 'john@' + servers[0].host }, ignoreCache: true })
+          const json = await servers[0].feed.getJSON({ feed: 'videos', query: { accountId: userAccountId }, ignoreCache: true })
           const jsonObj = JSON.parse(json)
           expect(jsonObj.items.length).to.be.equal(1)
           expect(jsonObj.items[0].title).to.equal('user video')
+          expect(jsonObj.items[0].author.name).to.equal('Main john channel')
         }
-      }
-    })

-    it('Should filter by video channel', async function () {
+        for (const server of servers) {
         {
-          const json = await servers[0].feed.getJSON({ feed: 'videos', query: { videoChannelId: rootChannelId }, ignoreCache: true })
+          const json = await server.feed.getJSON({ feed: 'videos', query: { accountName: 'root@' + servers[0].host }, ignoreCache: true })
           const jsonObj = JSON.parse(json)
           expect(jsonObj.items.length).to.be.equal(1)
           expect(jsonObj.items[0].title).to.equal('my super name for server 1')
-          expect(jsonObj.items[0].author.name).to.equal('Main root channel')
-        }
-
-        {
-          const json = await servers[0].feed.getJSON({ feed: 'videos', query: { videoChannelId: userChannelId }, ignoreCache: true })
-          const jsonObj = JSON.parse(json)
-          expect(jsonObj.items.length).to.be.equal(1)
-          expect(jsonObj.items[0].title).to.equal('user video')
-          expect(jsonObj.items[0].author.name).to.equal('Main john channel')
-        }
+          }
+
+          {
+            const json = await server.feed.getJSON({ feed: 'videos', query: { accountName: 'john@' + servers[0].host }, ignoreCache: true })
+            const jsonObj = JSON.parse(json)
+            expect(jsonObj.items.length).to.be.equal(1)
+            expect(jsonObj.items[0].title).to.equal('user video')
+          }
+        }
+      })

-      for (const server of servers) {
+      it('Should filter by video channel', async function () {
         {
-          const query = { videoChannelName: 'root_channel@' + servers[0].host }
-          const json = await server.feed.getJSON({ feed: 'videos', query, ignoreCache: true })
+          const json = await servers[0].feed.getJSON({ feed: 'videos', query: { videoChannelId: rootChannelId }, ignoreCache: true })
           const jsonObj = JSON.parse(json)
           expect(jsonObj.items.length).to.be.equal(1)
           expect(jsonObj.items[0].title).to.equal('my super name for server 1')
+          expect(jsonObj.items[0].author.name).to.equal('Main root channel')
         }

         {
-          const query = { videoChannelName: 'john_channel@' + servers[0].host }
-          const json = await server.feed.getJSON({ feed: 'videos', query, ignoreCache: true })
+          const json = await servers[0].feed.getJSON({ feed: 'videos', query: { videoChannelId: userChannelId }, ignoreCache: true })
           const jsonObj = JSON.parse(json)
           expect(jsonObj.items.length).to.be.equal(1)
           expect(jsonObj.items[0].title).to.equal('user video')
+          expect(jsonObj.items[0].author.name).to.equal('Main john channel')
         }
-      }
-    })

-    it('Should correctly have videos feed with HLS only', async function () {
-      this.timeout(120000)
-
-      await serverHLSOnly.videos.upload({ attributes: { name: 'hls only video' } })
+        for (const server of servers) {
+          {
+            const query = { videoChannelName: 'root_channel@' + servers[0].host }
+            const json = await server.feed.getJSON({ feed: 'videos', query, ignoreCache: true })
+            const jsonObj = JSON.parse(json)
+            expect(jsonObj.items.length).to.be.equal(1)
+            expect(jsonObj.items[0].title).to.equal('my super name for server 1')
+          }
+
+          {
+            const query = { videoChannelName: 'john_channel@' + servers[0].host }
+            const json = await server.feed.getJSON({ feed: 'videos', query, ignoreCache: true })
+            const jsonObj = JSON.parse(json)
+            expect(jsonObj.items.length).to.be.equal(1)
+            expect(jsonObj.items[0].title).to.equal('user video')
+          }
+        }
+      })

-      await waitJobs([ serverHLSOnly ])
+      it('Should correctly have videos feed with HLS only', async function () {
+        this.timeout(120000)

         const json = await serverHLSOnly.feed.getJSON({ feed: 'videos', ignoreCache: true })
         const jsonObj = JSON.parse(json)
         expect(jsonObj.items.length).to.be.equal(1)
         expect(jsonObj.items[0].attachments).to.exist
         expect(jsonObj.items[0].attachments.length).to.be.eq(4)
-
-      for (let i = 0; i < 4; i++) {
-        expect(jsonObj.items[0].attachments[i].mime_type).to.be.eq('application/x-bittorrent')
-        expect(jsonObj.items[0].attachments[i].size_in_bytes).to.be.greaterThan(0)
-        expect(jsonObj.items[0].attachments[i].url).to.exist
-      }
-    })

-    it('Should not display waiting live videos', async function () {
-      const { uuid } = await servers[0].live.create({
-        fields: {
-          name: 'live',
-          privacy: VideoPrivacy.PUBLIC,
-          channelId: rootChannelId
+        for (let i = 0; i < 4; i++) {
+          expect(jsonObj.items[0].attachments[i].mime_type).to.be.eq('application/x-bittorrent')
+          expect(jsonObj.items[0].attachments[i].size_in_bytes).to.be.greaterThan(0)
+          expect(jsonObj.items[0].attachments[i].url).to.exist
         }
       })
-      liveId = uuid

-      const json = await servers[0].feed.getJSON({ feed: 'videos', ignoreCache: true })
+      it('Should not display waiting live videos', async function () {
+        const { uuid } = await servers[0].live.create({
+          fields: {
+            name: 'live',
+            privacy: VideoPrivacy.PUBLIC,
+            channelId: rootChannelId
+          }
+        })
+        liveId = uuid

-      const jsonObj = JSON.parse(json)
-      expect(jsonObj.items.length).to.be.equal(2)
-      expect(jsonObj.items[0].title).to.equal('my super name for server 1')
-      expect(jsonObj.items[1].title).to.equal('user video')
-    })
+        const json = await servers[0].feed.getJSON({ feed: 'videos', ignoreCache: true })
+
+        const jsonObj = JSON.parse(json)
+        expect(jsonObj.items.length).to.be.equal(2)
+        expect(jsonObj.items[0].title).to.equal('my super name for server 1')
+        expect(jsonObj.items[1].title).to.equal('user video')
+      })

       it('Should display published live videos', async function () {
         this.timeout(120000)

         const ffmpeg = await servers[0].live.sendRTMPStreamInVideo({ videoId: liveId, copyCodecs: true, fixtureName: 'video_short.mp4' })
         await servers[0].live.waitUntilPublished({ videoId: liveId })

         const json = await servers[0].feed.getJSON({ feed: 'videos', ignoreCache: true })

         const jsonObj = JSON.parse(json)
         expect(jsonObj.items.length).to.be.equal(3)
         expect(jsonObj.items[0].title).to.equal('live')
         expect(jsonObj.items[1].title).to.equal('my super name for server 1')
         expect(jsonObj.items[2].title).to.equal('user video')

         await stopFfmpeg(ffmpeg)
       })

       it('Should have the channel avatar as feed icon', async function () {
         const json = await servers[0].feed.getJSON({ feed: 'videos', query: { videoChannelId: rootChannelId }, ignoreCache: true })

         const jsonObj = JSON.parse(json)
         const imageUrl = jsonObj.icon
         expect(imageUrl).to.include('/lazy-static/avatars/')
         await makeRawRequest({ url: imageUrl, expectedStatus: HttpStatusCode.OK_200 })
+      })
     })
   })

@@ -470,6 +605,8 @@ describe('Test syndication feeds', () => {
   })

   after(async function () {
+    await servers[0].plugins.uninstall({ npmName: 'peertube-plugin-test-podcast-custom-tags' })
+
     await cleanupTests([ ...servers, serverHLSOnly ])
   })
 })