Signed-off-by: VirtualTam <virtualtam@flibidi.net>
|
Extract the title/charset during page download, and check content type

Use CURLOPT_WRITEFUNCTION to check the response code and content type (only allow HTML). Also extract the title and charset while downloading chunks of data, and stop the download once everything has been extracted.

Closes #579
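
A minimal sketch of the approach described above (the URL and the title regex are placeholders, not Shaarli's exact code; the charset is extracted from the received chunks the same way):

```php
<?php
$body  = '';
$title = null;

$ch = curl_init('https://example.org/page'); // placeholder URL
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_MAXREDIRS, 3);
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) use (&$body, &$title) {
    // Check the response code and content type: only a 200 HTML response
    // is downloaded, anything else aborts the transfer.
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    $type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
    if ($code !== 200 || strpos((string) $type, 'text/html') === false) {
        return -1; // a return value != strlen($chunk) makes curl stop
    }

    $body .= $chunk;

    // Extract the title from the data received so far, and stop the
    // download as soon as everything needed has been found.
    if (preg_match('/<title>(.*?)<\/title>/is', $body, $match)) {
        $title = trim($match[1]);
        return -1;
    }

    return strlen($chunk); // keep downloading
});
curl_exec($ch); // returns false after an intentional early abort
curl_close($ch);
```

Returning a value different from the chunk length is curl's documented way to abort a transfer early, so only the beginning of the page is downloaded in the common case.
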
set to false

With the markdown plugin disabled
relates to #966

Signed-off-by: VirtualTam <virtualtam@flibidi.net>

All existing links will keep their permalinks.
New links will have a small hash generated from date+id.
The purpose of this is to avoid collisions between links sharing the same creation date.
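
A sketch of the scheme, assuming Shaarli's 6-character smallHash() (CRC32 + URL-safe base64); the date format and id value are illustrative:

```php
<?php
// URL-safe base64 of a raw CRC32: a 6-character permalink token.
function smallHash($text)
{
    $t = rtrim(base64_encode(hash('crc32', $text, true)), '=');
    return strtr($t, '+/', '-_');
}

$created = DateTime::createFromFormat('Ymd_His', '20170115_103000');
$id = 42; // illustrative link id

// Hashing date + id instead of the date alone means two links created
// at the same second still get distinct permalinks.
echo smallHash($created->format('Ymd_His') . $id);
```
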
* Hashtags are auto-linked with a filter search (see the sketch below)
* Supports unicode
* Compatible with markdown (excluded in code blocks)
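
A minimal sketch of unicode-aware hashtag linking (the regex, helper name, and query parameter are assumptions, not Shaarli's exact implementation; the markdown case additionally requires skipping code blocks before applying the replacement):

```php
<?php
// Turn #hashtags into filter-search links. The \p{...} classes plus the
// /u modifier make the pattern match unicode word characters.
function autoLinkHashtags($description, $indexUrl = '')
{
    $regex = '/(?<![&#\w])#([\p{Pc}\p{N}\p{L}\p{Mn}]+)/mu';
    $replacement = '<a href="' . $indexUrl . '?searchtags=$1">#$1</a>';
    return preg_replace($regex, $replacement, $description);
}

echo autoLinkHashtags('Read this #shaarli #日本語 post');
// -> Read this <a href="?searchtags=shaarli">#shaarli</a> ... post
```
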
see https://github.com/shaarli/Shaarli/issues/531 for details

Additions:
- [makefile] check versioned files are not executable
- [travis] call the new make target

Signed-off-by: VirtualTam <virtualtam@flibidi.net>

* `get_http_url()` renamed to `get_http_response()`.
* Use the same HTTP context to retrieve response headers and content.
* Follow HTTP 301 and 302 redirections to retrieve the title (default: max 3 redirections).
* Add `LinkUtils` to extract titles and charset.
* Try to retrieve the charset from the HTTP headers first (new), then from the HTML content (see the sketch below).
* Use mb_string to re-encode the title if necessary.
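
A sketch of the headers-first charset lookup and title re-encoding; the helper names mirror the list above, but their exact signatures and the sample data are assumptions:

```php
<?php
// Sample response data; in Shaarli these would come from get_http_response().
$headers = "HTTP/1.1 200 OK\r\nContent-Type: text/html; charset=iso-8859-1";
$content = '<html><head><meta charset="iso-8859-1"></head></html>';
$title   = "Caf\xE9 corner"; // title bytes as downloaded (iso-8859-1)

// Charset from the HTTP headers, e.g. "... charset=iso-8859-1".
function header_extract_charset($headers)
{
    if (preg_match('/charset=["\']?([^"\';\s]+)/i', $headers, $match)) {
        return strtolower(trim($match[1]));
    }
    return false;
}

// Fallback: charset from the HTML, e.g. <meta charset="...">.
function html_extract_charset($html)
{
    if (preg_match('/<meta[^>]+charset=["\']?([^"\'\s\/>]+)/i', $html, $match)) {
        return strtolower(trim($match[1]));
    }
    return false;
}

// Headers win over the <meta> tag; default to utf-8.
$charset = header_extract_charset($headers) ?: html_extract_charset($content) ?: 'utf-8';

// Re-encode the title if the page was not utf-8.
if ($charset !== 'utf-8') {
    $title = mb_convert_encoding($title, 'utf-8', $charset);
}
echo $title; // "Café corner"
```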