Commit messages
update test case
Private links counter in the header
Relates to #102
Additions:
- application:
  - export: allow prepending note permalinks with the instance's URL
- test coverage
Modifications:
- export template: switch to an HTML form
  - link selection (all/private/public)
  - prepend note permalinks with the instance's URL
Signed-off-by: VirtualTam <virtualtam@flibidi.net>
Use correct 'UTC' timezone
Fixes #531 - Title retrieval fails in multiple use cases
see https://github.com/shaarli/Shaarli/issues/531 for details
Arrays are key-value maps, so the array must be reindexed after filtering,
since filterTags uses the keys and count to do array access.
For example, when searching for "foo, bar", the filtered array can end up as
(0 -> foo, 2 -> bar), which causes an error when trying to access
$searchtags[1].
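A standalone PHP sketch of the problem and the fix (illustrative only, not the actual filterTags code): array_values() restores sequential keys after array_filter().

```php
<?php

// Splitting a search string and dropping blank entries leaves gaps in the keys.
$searchtags = array_filter(explode(',', 'foo,  , bar'), 'trim');
// Keys are now 0 and 2, so accessing $searchtags[1] raises an undefined index notice.

// Reindexing with array_values() restores sequential keys 0..count()-1,
// which key/count based loops rely on.
$searchtags = array_values(array_map('trim', $searchtags));
print_r($searchtags);   // Array ( [0] => foo [1] => bar )
```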
Relates to https://github.com/shaarli/netscape-bookmark-parser/issues/5
Fixes:
- respect the Netscape bookmark format "specification"
Modifications:
- [application] introduce the NetscapeBookmarkUtils class
- [template] export - improve formatting, rename export selection parameter
- [template] export.bookmarks - template for Netscape exports
- [tests] bookmark filtering, additional field generation
Signed-off-by: VirtualTam <virtualtam@flibidi.net>
* New config option: `$GLOBALS['config']['REDIRECTOR_URLENCODE']` (default `true`).
* Parameter added to the LinkDB constructor.
* Fixes a bug with urlencode and already-escaped URLs.
* In `index.php`, LinkDB is now instantiated only once for `importFile()` and `showDaily()`.
* Unit tests.
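A rough sketch of the intended behaviour; buildRedirectorLink() and the redirector URL are made up for illustration, not Shaarli's actual API.

```php
<?php

/**
 * Prepend a redirector to a link, optionally URL-encoding the target,
 * mirroring the new REDIRECTOR_URLENCODE setting (default: true).
 */
function buildRedirectorLink($url, $redirector, $urlEncode = true)
{
    if (empty($redirector)) {
        return $url;
    }
    return $redirector . ($urlEncode ? urlencode($url) : $url);
}

echo buildRedirectorLink('https://example.org/?q=a%20b', 'https://redirector.example/?');
// => https://redirector.example/?https%3A%2F%2Fexample.org%2F%3Fq%3Da%2520b
```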
Refactor and rebase #380: Firefox reader view links
Fixes #366
Closes #380
Refactor RSS feed generation and do it through templates
* The search type is now carried by LinkDB, in order to factorize code between the different search sources.
* LinkDB->filter is split into 3 methods: filterSearch, filterHash and filterDay (we know which type of filter is needed).
* filterHash now throws a LinkNotFoundException if the hash doesn't match any link: an internal implementation choice, a 404 is still displayed.
* The small hash regex has been rewritten.
* Unit tests updated.
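A simplified sketch of the new permalink lookup: an invalid or unknown hash raises LinkNotFoundException internally, and the caller turns it into a 404. The function body and the hash regex are assumptions for illustration, not the actual LinkDB code.

```php
<?php

class LinkNotFoundException extends Exception {}

/**
 * Illustrative lookup -- returns the link matching a small hash,
 * or throws so the caller can render a 404 page.
 */
function filterHash(array $links, $request)
{
    // Small hashes are assumed to be 6-character URL-safe base64 tokens.
    if (!preg_match('/^[a-zA-Z0-9-_@]{6}$/', $request)) {
        throw new LinkNotFoundException('Invalid small hash.');
    }
    foreach ($links as $link) {
        if ($link['smallhash'] === $request) {
            return $link;
        }
    }
    throw new LinkNotFoundException('No link matches this hash.');
}

try {
    $link = filterHash(array(), 'abc123');
} catch (LinkNotFoundException $e) {
    http_response_code(404);   // the internal exception still surfaces as a 404
    echo 'Link not found';
}
```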
Minor changes:
* Fix the date, which was in an invalid format.
* Avoid empty categories (tags).
* Use the locale to set the language.
Going through multiple reverse proxies stores multiple schemes and ports in the HTTP headers, separated by commas. Shaarli now uses the first one to generate server_url.
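A standalone illustration of keeping only the first forwarded value (simplified, not the actual server_url() implementation).

```php
<?php

// After chained reverse proxies, forwarded headers can hold several values:
//   X-Forwarded-Proto: https,http
//   X-Forwarded-Port: 443,80
// Only the first (client-facing) value is used to rebuild the server URL.
$server = array(
    'HTTP_X_FORWARDED_PROTO' => 'https,http',
    'HTTP_X_FORWARDED_PORT'  => '443,80',
    'SERVER_NAME'            => 'shaarli.example.org',
);

$scheme = trim(current(explode(',', $server['HTTP_X_FORWARDED_PROTO'])));
$port   = trim(current(explode(',', $server['HTTP_X_FORWARDED_PORT'])));

// Omit the port for the default HTTP/HTTPS ports (simplified rule).
$serverUrl = $scheme . '://' . $server['SERVER_NAME']
    . ($port !== '443' && $port !== '80' ? ':' . $port : '');

echo $serverUrl;   // https://shaarli.example.org
```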
Allow crossed search between terms and tags
* Partial fix of #449
* Current use case: search term + click on tag.
* LinkFilter now returns all links if no filter is given.
* Unit tests.
Markdown: don't escape content + sanitize sensitive tags
Instead of trying to fix broken content for Markdown parsing, parse it unescaped, then sanitize sensitive tags such as scripts, etc.
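A rough sketch of that order of operations with Parsedown, assuming the library is available; the tag list below is an example, not the plugin's exhaustive one.

```php
<?php

require_once 'Parsedown.php';   // https://github.com/erusev/parsedown

// Render the raw (unescaped) description as Markdown first...
$html = Parsedown::instance()->text("**Hello**\n\n<script>alert('xss')</script>");

// ...then strip sensitive markup from the rendered HTML.
foreach (array('script', 'style', 'iframe', 'frame', 'frameset', 'link') as $tag) {
    // Remove whole elements with their content, then any stray tags left over.
    $html = preg_replace('#<\s*' . $tag . '[^>]*>.*?<\s*/\s*' . $tag . '\s*>#is', '', $html);
    $html = preg_replace('#<\s*/?\s*' . $tag . '[^>]*>#is', '', $html);
}

echo $html;   // <p><strong>Hello</strong></p>, with the script block removed
```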
Fixes #481: tag cloud fatal error
Closes #270
Modifications:
- replace custom date parsing with DateTime calls
- use proper date formatting for RSS feeds
Deletions:
- linkdate2timestamp()
- linkdate2rfc822()
- linkdate2iso8601()
Signed-off-by: VirtualTam <virtualtam@flibidi.net>
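For reference, the equivalent DateTime calls, assuming the legacy 'Ymd_His' linkdate storage format:

```php
<?php

// Link dates are stored as compact strings such as '20150310_114651' ('Ymd_His' assumed).
$date = DateTime::createFromFormat('Ymd_His', '20150310_114651', new DateTimeZone('UTC'));

echo $date->getTimestamp(), PHP_EOL;          // replaces linkdate2timestamp()
echo $date->format(DateTime::RSS), PHP_EOL;   // RFC 822-style date for RSS feeds
echo $date->format(DateTime::ATOM), PHP_EOL;  // ISO 8601 date for ATOM feeds
```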
Tags starting with a dot '.' are now private.
They can only be seen and searched when logged in.
Fixes #315
* Searching '-mytag' will now exclude all shaares tagged with 'mytag'.
* All tags starting with a '-' are renamed without it (through the Updater).
* Unit tests.
Minor code changes:
* LinkDB->filter() can now take no parameters (get all links depending on logged-in status).
* tagsStrToArray() is now static and filters out blank tags.
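A self-contained sketch of the exclusion logic (a simplified stand-in for LinkFilter, for illustration only).

```php
<?php

// Return true when a link's tags satisfy a search string where a leading '-' excludes a tag.
function matchesTagSearch(array $linkTags, $searchTags)
{
    $linkTags = array_map('strtolower', $linkTags);
    foreach (preg_split('/\s+/', trim($searchTags)) as $tag) {
        if ($tag === '') {
            continue;
        }
        $excluded = $tag[0] === '-';
        $found = in_array(strtolower(ltrim($tag, '-')), $linkTags);
        if ($excluded ? $found : !$found) {
            return false;
        }
    }
    return true;
}

var_dump(matchesTagSearch(array('php', 'mytag'), 'php -mytag'));  // false: 'mytag' is excluded
var_dump(matchesTagSearch(array('php', 'web'), 'php -mytag'));    // true
```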
* contains methods designed to be run only once.
* is able to upgrade the datastore or the configuration.
* is based on method names, stored in a text file with a ';' separator (updates.txt).
* begins with the existing function 'mergeDeprecatedConfigFile()' (options.php).
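A condensed sketch of that mechanism; the class layout and file handling are simplified for illustration.

```php
<?php

/**
 * Each updateMethod*() is run once; completed method names are persisted
 * in a text file ('updates.txt') using ';' as the separator.
 */
class Updater
{
    protected $doneUpdates;

    public function __construct(array $doneUpdates)
    {
        $this->doneUpdates = $doneUpdates;
    }

    /** Run every pending update method and return the names of those that succeeded. */
    public function update()
    {
        $ran = array();
        foreach (get_class_methods($this) as $method) {
            if (strpos($method, 'updateMethod') === 0
                && !in_array($method, $this->doneUpdates)
                && $this->$method() === true
            ) {
                $ran[] = $method;
            }
        }
        return $ran;
    }

    public function updateMethodMergeDeprecatedConfigFile()
    {
        // e.g. merge the legacy options.php settings, then delete the file.
        return true;
    }
}

$file = 'updates.txt';
$done = file_exists($file) ? explode(';', file_get_contents($file)) : array();
$updater = new Updater($done);
$done = array_merge($done, $updater->update());
file_put_contents($file, implode(';', array_filter($done)));
```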
Implemented a little more sophisticated searching (squashed)
particular order.
+ unit tests
Fixes #378 - Plugin administration UI.
PLUGIN Markdown
Parse link descriptions in Markdown (to HTML) before rendering.
* Hard removal of Shaarli's generated HTML before parsing.
* Uses the Parsedown PHP library <https://github.com/erusev/parsedown>.
* Includes basic Markdown CSS.
* Style: removed the 400px max-height limit on shaares.
* Unit tests.
tests: add a make target to check file permissions
Additions:
- [makefile] check versioned files are not executable
- [travis] call the new make target
Signed-off-by: VirtualTam <virtualtam@flibidi.net>
Relates to #436
Signed-off-by: VirtualTam <virtualtam@flibidi.net>
Fixes #436
Modifications:
- remove calls to strval() on safe data
- update the date format: 'Y/m/d_H:i:s' => 'Y/m/d H:i:s'
Signed-off-by: VirtualTam <virtualtam@flibidi.net>
Relates to #436
Modifications:
- pass global variables ($_SERVER, $GLOBALS) as injected dependencies
- apply coding conventions
- add test coverage
Signed-off-by: VirtualTam <virtualtam@flibidi.net>
* `get_http_url()` renamed to `get_http_response()`.
* Use the same HTTP context to retrieve response headers and content.
* Follow HTTP 301 and 302 redirections to retrieve the title (3 redirections max by default).
* Add `LinkUtils` to extract titles and charsets.
* Try to retrieve the charset from the HTTP headers first (new), then from the HTML content.
* Use mbstring to re-encode the title if necessary.
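A simplified charset lookup in the order described above (HTTP header first, then HTML); names and regexes are illustrative and the example assumes the mbstring extension.

```php
<?php

// Try the Content-Type response header first, then the HTML <meta> declaration.
function guessCharset(array $headers, $html, $default = 'utf-8')
{
    foreach ($headers as $header) {
        if (preg_match('/charset=["\']?([^"\';\s]+)/i', $header, $m)) {
            return strtolower($m[1]);
        }
    }
    if (preg_match('/<meta[^>]+charset=["\']?([^"\'\/>\s]+)/i', $html, $m)) {
        return strtolower($m[1]);
    }
    return $default;
}

// Re-encode an extracted title to UTF-8 only when needed.
$charset = guessCharset(array('Content-Type: text/html; charset=ISO-8859-1'), '');
$title = 'Caf' . chr(0xE9);   // "Café" encoded in ISO-8859-1
if ($charset !== 'utf-8') {
    $title = mb_convert_encoding($title, 'utf-8', $charset);
}
echo $title;   // Café
```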