author     Radek Simko <radek.simko@gmail.com>       2017-08-14 16:10:17 +0200
committer  GitHub <noreply@github.com>               2017-08-14 16:10:17 +0200
commit     00a66330be57dc8d7f987b4235d65d372f6b471f (patch)
tree       864f925049d422033dd25a73bafce32b361c8827
parent     b6a7c48445fdb87dcae46906aa7e9349209d8bb5 (diff)
parent     c680a8e1622ed0f18751d9d167c836ee24f5e897 (diff)
Merge pull request #3 from terraform-providers/vendor-tf-0.10
vendor: github.com/hashicorp/terraform/...@v0.10.0
101 files changed, 19379 insertions, 269 deletions
diff --git a/vendor/github.com/blang/semver/LICENSE b/vendor/github.com/blang/semver/LICENSE
new file mode 100644
index 0000000..5ba5c86
--- /dev/null
+++ b/vendor/github.com/blang/semver/LICENSE
@@ -0,0 +1,22 @@ | |||
1 | The MIT License | ||
2 | |||
3 | Copyright (c) 2014 Benedikt Lang <github at benediktlang.de> | ||
4 | |||
5 | Permission is hereby granted, free of charge, to any person obtaining a copy | ||
6 | of this software and associated documentation files (the "Software"), to deal | ||
7 | in the Software without restriction, including without limitation the rights | ||
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell | ||
9 | copies of the Software, and to permit persons to whom the Software is | ||
10 | furnished to do so, subject to the following conditions: | ||
11 | |||
12 | The above copyright notice and this permission notice shall be included in | ||
13 | all copies or substantial portions of the Software. | ||
14 | |||
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR | ||
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, | ||
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE | ||
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER | ||
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, | ||
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN | ||
21 | THE SOFTWARE. | ||
22 | |||
diff --git a/vendor/github.com/blang/semver/README.md b/vendor/github.com/blang/semver/README.md
new file mode 100644
index 0000000..08b2e4a
--- /dev/null
+++ b/vendor/github.com/blang/semver/README.md
@@ -0,0 +1,194 @@ | |||
1 | semver for golang [![Build Status](https://travis-ci.org/blang/semver.svg?branch=master)](https://travis-ci.org/blang/semver) [![GoDoc](https://godoc.org/github.com/blang/semver?status.png)](https://godoc.org/github.com/blang/semver) [![Coverage Status](https://img.shields.io/coveralls/blang/semver.svg)](https://coveralls.io/r/blang/semver?branch=master) | ||
2 | ====== | ||
3 | |||
4 | semver is a [Semantic Versioning](http://semver.org/) library written in golang. It fully covers spec version `2.0.0`. | ||
5 | |||
6 | Usage | ||
7 | ----- | ||
8 | ```bash | ||
9 | $ go get github.com/blang/semver | ||
10 | ``` | ||
11 | Note: Always vendor your dependencies or pin to a specific version tag. | ||
12 | |||
13 | ```go | ||
14 | import "github.com/blang/semver" | ||
15 | v1, err := semver.Make("1.0.0-beta") | ||
16 | v2, err := semver.Make("2.0.0-beta") | ||
17 | v1.Compare(v2) | ||
18 | ``` | ||
19 | |||
20 | Also check the [GoDocs](http://godoc.org/github.com/blang/semver). | ||
21 | |||
22 | Why should I use this lib? | ||
23 | ----- | ||
24 | |||
25 | - Fully spec compatible | ||
26 | - No reflection | ||
27 | - No regex | ||
28 | - Fully tested (Coverage >99%) | ||
29 | - Readable parsing/validation errors | ||
30 | - Fast (See [Benchmarks](#benchmarks)) | ||
31 | - Only Stdlib | ||
32 | - Uses values instead of pointers | ||
33 | - Many features, see below | ||
34 | |||
35 | |||
36 | Features | ||
37 | ----- | ||
38 | |||
39 | - Parsing and validation at all levels | ||
40 | - Comparator-like comparisons | ||
41 | - Compare Helper Methods | ||
42 | - InPlace manipulation | ||
43 | - Ranges `>=1.0.0 <2.0.0 || >=3.0.0 !3.0.1-beta.1` | ||
44 | - Wildcards `>=1.x`, `<=2.5.x` | ||
45 | - Sortable (implements sort.Interface) | ||
46 | - database/sql compatible (sql.Scanner/Valuer) | ||
47 | - encoding/json compatible (json.Marshaler/Unmarshaler) | ||
48 | |||
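As a quick illustration of the sort support listed above, here is a minimal sketch using only `MustParse` and `Sort` from this package (the version numbers are arbitrary):

```go
import "github.com/blang/semver"

// Versions implements sort.Interface; Sort is a convenience wrapper around it.
versions := []semver.Version{
	semver.MustParse("1.10.0"),
	semver.MustParse("1.2.0"),
	semver.MustParse("1.2.0-beta"),
}
semver.Sort(versions)
// versions is now [1.2.0-beta 1.2.0 1.10.0]
```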
49 | Ranges | ||
50 | ------ | ||
51 | |||
52 | A `Range` is a set of conditions that specify which versions satisfy it. | ||
53 | |||
54 | A condition is composed of an operator and a version. The supported operators are: | ||
55 | |||
56 | - `<1.0.0` Less than `1.0.0` | ||
57 | - `<=1.0.0` Less than or equal to `1.0.0` | ||
58 | - `>1.0.0` Greater than `1.0.0` | ||
59 | - `>=1.0.0` Greater than or equal to `1.0.0` | ||
60 | - `1.0.0`, `=1.0.0`, `==1.0.0` Equal to `1.0.0` | ||
61 | - `!1.0.0`, `!=1.0.0` Not equal to `1.0.0`. Excludes version `1.0.0`. | ||
62 | |||
63 | Note that spaces between the operator and the version will be gracefully tolerated. | ||
64 | |||
65 | A `Range` can combine multiple conditions separated by spaces: | ||
66 | |||
67 | Ranges can be linked by logical AND: | ||
68 | |||
69 | - `>1.0.0 <2.0.0` would match versions satisfying both conditions, so `1.1.1` and `1.8.7` but not `1.0.0` or `2.0.0` | ||
70 | - `>1.0.0 <3.0.0 !2.0.3-beta.2` would match every version between `1.0.0` and `3.0.0` except `2.0.3-beta.2` | ||
71 | |||
72 | Ranges can also be linked by logical OR: | ||
73 | |||
74 | - `<2.0.0 || >=3.0.0` would match `1.x.x` and `3.x.x` but not `2.x.x` | ||
75 | |||
76 | AND has a higher precedence than OR. It's not possible to use brackets. | ||
77 | |||
78 | Ranges can combine both AND and OR: | ||
79 | |||
80 | - `>1.0.0 <2.0.0 || >3.0.0 !4.2.1` would match `1.2.3`, `1.9.9`, `3.1.1`, but not `4.2.1`, `2.1.1` | ||
81 | |||
82 | Range usage: | ||
83 | |||
84 | ```go | ||
85 | v, err := semver.Parse("1.2.3") | ||
86 | validRange, err := semver.ParseRange(">1.0.0 <2.0.0 || >=3.0.0") | ||
87 | if validRange(v) { | ||
88 | // valid | ||
89 | } | ||
90 | |||
91 | ``` | ||
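
Wildcard ranges from the feature list go through the same `ParseRange` entry point; a small sketch, relying on the expansion rule documented in `range.go` below (`1.2.x` behaves like `>=1.2.0 <1.3.0`):

```go
wildcardRange, err := semver.ParseRange("1.2.x")
if err != nil {
	// handle the parse error
}
wildcardRange(semver.MustParse("1.2.7")) // true
wildcardRange(semver.MustParse("1.3.0")) // false
```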
92 | |||
93 | Example | ||
94 | ----- | ||
95 | |||
96 | Have a look at full examples in [examples/main.go](examples/main.go) | ||
97 | |||
98 | ```go | ||
99 | import "github.com/blang/semver" | ||
100 | |||
101 | v, err := semver.Make("0.0.1-alpha.preview+123.github") | ||
102 | fmt.Printf("Major: %d\n", v.Major) | ||
103 | fmt.Printf("Minor: %d\n", v.Minor) | ||
104 | fmt.Printf("Patch: %d\n", v.Patch) | ||
105 | fmt.Printf("Pre: %s\n", v.Pre) | ||
106 | fmt.Printf("Build: %s\n", v.Build) | ||
107 | |||
108 | // Prerelease versions array | ||
109 | if len(v.Pre) > 0 { | ||
110 | fmt.Println("Prerelease versions:") | ||
111 | for i, pre := range v.Pre { | ||
112 | fmt.Printf("%d: %q\n", i, pre) | ||
113 | } | ||
114 | } | ||
115 | |||
116 | // Build meta data array | ||
117 | if len(v.Build) > 0 { | ||
118 | fmt.Println("Build meta data:") | ||
119 | for i, build := range v.Build { | ||
120 | fmt.Printf("%d: %q\n", i, build) | ||
121 | } | ||
122 | } | ||
123 | |||
124 | v001, err := semver.Make("0.0.1") | ||
125 | // Compare using helpers: v.GT(v2), v.LT, v.GTE, v.LTE | ||
126 | v001.GT(v) == true | ||
127 | v.LT(v001) == true | ||
128 | v.GTE(v) == true | ||
129 | v.LTE(v) == true | ||
130 | |||
131 | // Or use v.Compare(v2) for comparisons (-1, 0, 1): | ||
132 | v001.Compare(v) == 1 | ||
133 | v.Compare(v001) == -1 | ||
134 | v.Compare(v) == 0 | ||
135 | |||
136 | // Manipulate Version in place: | ||
137 | v.Pre[0], err = semver.NewPRVersion("beta") | ||
138 | if err != nil { | ||
139 | fmt.Printf("Error parsing pre release version: %q", err) | ||
140 | } | ||
141 | |||
142 | fmt.Println("\nValidate versions:") | ||
143 | v.Build[0] = "?" | ||
144 | |||
145 | err = v.Validate() | ||
146 | if err != nil { | ||
147 | fmt.Printf("Validation failed: %s\n", err) | ||
148 | } | ||
149 | ``` | ||
150 | |||
151 | |||
152 | Benchmarks | ||
153 | ----- | ||
154 | |||
155 | BenchmarkParseSimple-4 5000000 390 ns/op 48 B/op 1 allocs/op | ||
156 | BenchmarkParseComplex-4 1000000 1813 ns/op 256 B/op 7 allocs/op | ||
157 | BenchmarkParseAverage-4 1000000 1171 ns/op 163 B/op 4 allocs/op | ||
158 | BenchmarkStringSimple-4 20000000 119 ns/op 16 B/op 1 allocs/op | ||
159 | BenchmarkStringLarger-4 10000000 206 ns/op 32 B/op 2 allocs/op | ||
160 | BenchmarkStringComplex-4 5000000 324 ns/op 80 B/op 3 allocs/op | ||
161 | BenchmarkStringAverage-4 5000000 273 ns/op 53 B/op 2 allocs/op | ||
162 | BenchmarkValidateSimple-4 200000000 9.33 ns/op 0 B/op 0 allocs/op | ||
163 | BenchmarkValidateComplex-4 3000000 469 ns/op 0 B/op 0 allocs/op | ||
164 | BenchmarkValidateAverage-4 5000000 256 ns/op 0 B/op 0 allocs/op | ||
165 | BenchmarkCompareSimple-4 100000000 11.8 ns/op 0 B/op 0 allocs/op | ||
166 | BenchmarkCompareComplex-4 50000000 30.8 ns/op 0 B/op 0 allocs/op | ||
167 | BenchmarkCompareAverage-4 30000000 41.5 ns/op 0 B/op 0 allocs/op | ||
168 | BenchmarkSort-4 3000000 419 ns/op 256 B/op 2 allocs/op | ||
169 | BenchmarkRangeParseSimple-4 2000000 850 ns/op 192 B/op 5 allocs/op | ||
170 | BenchmarkRangeParseAverage-4 1000000 1677 ns/op 400 B/op 10 allocs/op | ||
171 | BenchmarkRangeParseComplex-4 300000 5214 ns/op 1440 B/op 30 allocs/op | ||
172 | BenchmarkRangeMatchSimple-4 50000000 25.6 ns/op 0 B/op 0 allocs/op | ||
173 | BenchmarkRangeMatchAverage-4 30000000 56.4 ns/op 0 B/op 0 allocs/op | ||
174 | BenchmarkRangeMatchComplex-4 10000000 153 ns/op 0 B/op 0 allocs/op | ||
175 | |||
176 | See benchmark cases at [semver_test.go](semver_test.go) | ||
177 | |||
178 | |||
179 | Motivation | ||
180 | ----- | ||
181 | |||
182 | I simply couldn't find any lib supporting the full spec. Others were either incorrect or relied on reflection and regex, which I wanted to avoid. | ||
183 | |||
184 | |||
185 | Contribution | ||
186 | ----- | ||
187 | |||
188 | Feel free to make a pull request. For bigger changes, create an issue first to discuss it. | ||
189 | |||
190 | |||
191 | License | ||
192 | ----- | ||
193 | |||
194 | See [LICENSE](LICENSE) file. | ||
diff --git a/vendor/github.com/blang/semver/json.go b/vendor/github.com/blang/semver/json.go
new file mode 100644
index 0000000..a74bf7c
--- /dev/null
+++ b/vendor/github.com/blang/semver/json.go
@@ -0,0 +1,23 @@ | |||
1 | package semver | ||
2 | |||
3 | import ( | ||
4 | "encoding/json" | ||
5 | ) | ||
6 | |||
7 | // MarshalJSON implements the encoding/json.Marshaler interface. | ||
8 | func (v Version) MarshalJSON() ([]byte, error) { | ||
9 | return json.Marshal(v.String()) | ||
10 | } | ||
11 | |||
12 | // UnmarshalJSON implements the encoding/json.Unmarshaler interface. | ||
13 | func (v *Version) UnmarshalJSON(data []byte) (err error) { | ||
14 | var versionString string | ||
15 | |||
16 | if err = json.Unmarshal(data, &versionString); err != nil { | ||
17 | return | ||
18 | } | ||
19 | |||
20 | *v, err = Parse(versionString) | ||
21 | |||
22 | return | ||
23 | } | ||
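
Because `MarshalJSON`/`UnmarshalJSON` above serialize a `Version` as a plain string, a struct field round-trips cleanly through `encoding/json`. A minimal sketch (the `release` struct and its values are purely illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/blang/semver"
)

// release is an illustrative struct with a semver.Version field.
type release struct {
	Name    string         `json:"name"`
	Version semver.Version `json:"version"`
}

func main() {
	in := release{Name: "example", Version: semver.MustParse("1.4.0-rc.1")}

	// Marshals as {"name":"example","version":"1.4.0-rc.1"}.
	data, _ := json.Marshal(in)
	fmt.Println(string(data))

	// UnmarshalJSON parses the string back into a Version.
	var out release
	_ = json.Unmarshal(data, &out)
	fmt.Println(out.Version.String()) // 1.4.0-rc.1
}
```
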
diff --git a/vendor/github.com/blang/semver/package.json b/vendor/github.com/blang/semver/package.json
new file mode 100644
index 0000000..1cf8ebd
--- /dev/null
+++ b/vendor/github.com/blang/semver/package.json
@@ -0,0 +1,17 @@ | |||
1 | { | ||
2 | "author": "blang", | ||
3 | "bugs": { | ||
4 | "URL": "https://github.com/blang/semver/issues", | ||
5 | "url": "https://github.com/blang/semver/issues" | ||
6 | }, | ||
7 | "gx": { | ||
8 | "dvcsimport": "github.com/blang/semver" | ||
9 | }, | ||
10 | "gxVersion": "0.10.0", | ||
11 | "language": "go", | ||
12 | "license": "MIT", | ||
13 | "name": "semver", | ||
14 | "releaseCmd": "git commit -a -m \"gx publish $VERSION\"", | ||
15 | "version": "3.5.1" | ||
16 | } | ||
17 | |||
diff --git a/vendor/github.com/blang/semver/range.go b/vendor/github.com/blang/semver/range.go
new file mode 100644
index 0000000..fca406d
--- /dev/null
+++ b/vendor/github.com/blang/semver/range.go
@@ -0,0 +1,416 @@ | |||
1 | package semver | ||
2 | |||
3 | import ( | ||
4 | "fmt" | ||
5 | "strconv" | ||
6 | "strings" | ||
7 | "unicode" | ||
8 | ) | ||
9 | |||
10 | type wildcardType int | ||
11 | |||
12 | const ( | ||
13 | noneWildcard wildcardType = iota | ||
14 | majorWildcard wildcardType = 1 | ||
15 | minorWildcard wildcardType = 2 | ||
16 | patchWildcard wildcardType = 3 | ||
17 | ) | ||
18 | |||
19 | func wildcardTypefromInt(i int) wildcardType { | ||
20 | switch i { | ||
21 | case 1: | ||
22 | return majorWildcard | ||
23 | case 2: | ||
24 | return minorWildcard | ||
25 | case 3: | ||
26 | return patchWildcard | ||
27 | default: | ||
28 | return noneWildcard | ||
29 | } | ||
30 | } | ||
31 | |||
32 | type comparator func(Version, Version) bool | ||
33 | |||
34 | var ( | ||
35 | compEQ comparator = func(v1 Version, v2 Version) bool { | ||
36 | return v1.Compare(v2) == 0 | ||
37 | } | ||
38 | compNE = func(v1 Version, v2 Version) bool { | ||
39 | return v1.Compare(v2) != 0 | ||
40 | } | ||
41 | compGT = func(v1 Version, v2 Version) bool { | ||
42 | return v1.Compare(v2) == 1 | ||
43 | } | ||
44 | compGE = func(v1 Version, v2 Version) bool { | ||
45 | return v1.Compare(v2) >= 0 | ||
46 | } | ||
47 | compLT = func(v1 Version, v2 Version) bool { | ||
48 | return v1.Compare(v2) == -1 | ||
49 | } | ||
50 | compLE = func(v1 Version, v2 Version) bool { | ||
51 | return v1.Compare(v2) <= 0 | ||
52 | } | ||
53 | ) | ||
54 | |||
55 | type versionRange struct { | ||
56 | v Version | ||
57 | c comparator | ||
58 | } | ||
59 | |||
60 | // rangeFunc creates a Range from the given versionRange. | ||
61 | func (vr *versionRange) rangeFunc() Range { | ||
62 | return Range(func(v Version) bool { | ||
63 | return vr.c(v, vr.v) | ||
64 | }) | ||
65 | } | ||
66 | |||
67 | // Range represents a range of versions. | ||
68 | // A Range can be used to check if a Version satisfies it: | ||
69 | // | ||
70 | // validRange, err := semver.ParseRange(">1.0.0 <2.0.0") | ||
71 | // validRange(semver.MustParse("1.1.1")) // returns true | ||
72 | type Range func(Version) bool | ||
73 | |||
74 | // OR combines the existing Range with another Range using logical OR. | ||
75 | func (rf Range) OR(f Range) Range { | ||
76 | return Range(func(v Version) bool { | ||
77 | return rf(v) || f(v) | ||
78 | }) | ||
79 | } | ||
80 | |||
81 | // AND combines the existing Range with another Range using logical AND. | ||
82 | func (rf Range) AND(f Range) Range { | ||
83 | return Range(func(v Version) bool { | ||
84 | return rf(v) && f(v) | ||
85 | }) | ||
86 | } | ||
87 | |||
88 | // ParseRange parses a range and returns a Range. | ||
89 | // If the range could not be parsed an error is returned. | ||
90 | // | ||
91 | // Valid ranges are: | ||
92 | // - "<1.0.0" | ||
93 | // - "<=1.0.0" | ||
94 | // - ">1.0.0" | ||
95 | // - ">=1.0.0" | ||
96 | // - "1.0.0", "=1.0.0", "==1.0.0" | ||
97 | // - "!1.0.0", "!=1.0.0" | ||
98 | // | ||
99 | // A Range can consist of multiple ranges separated by space: | ||
100 | // Ranges can be linked by logical AND: | ||
101 | // - ">1.0.0 <2.0.0" would match between both ranges, so "1.1.1" and "1.8.7" but not "1.0.0" or "2.0.0" | ||
102 | // - ">1.0.0 <3.0.0 !2.0.3-beta.2" would match every version between 1.0.0 and 3.0.0 except 2.0.3-beta.2 | ||
103 | // | ||
104 | // Ranges can also be linked by logical OR: | ||
105 | // - "<2.0.0 || >=3.0.0" would match "1.x.x" and "3.x.x" but not "2.x.x" | ||
106 | // | ||
107 | // AND has a higher precedence than OR. It's not possible to use brackets. | ||
108 | // | ||
109 | // Ranges can be combined by both AND and OR | ||
110 | // | ||
111 | // - `>1.0.0 <2.0.0 || >3.0.0 !4.2.1` would match `1.2.3`, `1.9.9`, `3.1.1`, but not `4.2.1`, `2.1.1` | ||
112 | func ParseRange(s string) (Range, error) { | ||
113 | parts := splitAndTrim(s) | ||
114 | orParts, err := splitORParts(parts) | ||
115 | if err != nil { | ||
116 | return nil, err | ||
117 | } | ||
118 | expandedParts, err := expandWildcardVersion(orParts) | ||
119 | if err != nil { | ||
120 | return nil, err | ||
121 | } | ||
122 | var orFn Range | ||
123 | for _, p := range expandedParts { | ||
124 | var andFn Range | ||
125 | for _, ap := range p { | ||
126 | opStr, vStr, err := splitComparatorVersion(ap) | ||
127 | if err != nil { | ||
128 | return nil, err | ||
129 | } | ||
130 | vr, err := buildVersionRange(opStr, vStr) | ||
131 | if err != nil { | ||
132 | return nil, fmt.Errorf("Could not parse Range %q: %s", ap, err) | ||
133 | } | ||
134 | rf := vr.rangeFunc() | ||
135 | |||
136 | // Set function | ||
137 | if andFn == nil { | ||
138 | andFn = rf | ||
139 | } else { // Combine with existing function | ||
140 | andFn = andFn.AND(rf) | ||
141 | } | ||
142 | } | ||
143 | if orFn == nil { | ||
144 | orFn = andFn | ||
145 | } else { | ||
146 | orFn = orFn.OR(andFn) | ||
147 | } | ||
148 | |||
149 | } | ||
150 | return orFn, nil | ||
151 | } | ||
152 | |||
153 | // splitORParts splits the already cleaned parts by '||'. | ||
154 | // Checks for invalid positions of the operator and returns an | ||
155 | // error if found. | ||
156 | func splitORParts(parts []string) ([][]string, error) { | ||
157 | var ORparts [][]string | ||
158 | last := 0 | ||
159 | for i, p := range parts { | ||
160 | if p == "||" { | ||
161 | if i == 0 { | ||
162 | return nil, fmt.Errorf("First element in range is '||'") | ||
163 | } | ||
164 | ORparts = append(ORparts, parts[last:i]) | ||
165 | last = i + 1 | ||
166 | } | ||
167 | } | ||
168 | if last == len(parts) { | ||
169 | return nil, fmt.Errorf("Last element in range is '||'") | ||
170 | } | ||
171 | ORparts = append(ORparts, parts[last:]) | ||
172 | return ORparts, nil | ||
173 | } | ||
174 | |||
175 | // buildVersionRange takes an operator and a version string | ||
176 | // and builds a versionRange, or returns an error. | ||
177 | func buildVersionRange(opStr, vStr string) (*versionRange, error) { | ||
178 | c := parseComparator(opStr) | ||
179 | if c == nil { | ||
180 | return nil, fmt.Errorf("Could not parse comparator %q in %q", opStr, strings.Join([]string{opStr, vStr}, "")) | ||
181 | } | ||
182 | v, err := Parse(vStr) | ||
183 | if err != nil { | ||
184 | return nil, fmt.Errorf("Could not parse version %q in %q: %s", vStr, strings.Join([]string{opStr, vStr}, ""), err) | ||
185 | } | ||
186 | |||
187 | return &versionRange{ | ||
188 | v: v, | ||
189 | c: c, | ||
190 | }, nil | ||
191 | |||
192 | } | ||
193 | |||
194 | // inArray checks if a byte is contained in an array of bytes | ||
195 | func inArray(s byte, list []byte) bool { | ||
196 | for _, el := range list { | ||
197 | if el == s { | ||
198 | return true | ||
199 | } | ||
200 | } | ||
201 | return false | ||
202 | } | ||
203 | |||
204 | // splitAndTrim splits a range string by spaces and trims whitespace | ||
205 | func splitAndTrim(s string) (result []string) { | ||
206 | last := 0 | ||
207 | var lastChar byte | ||
208 | excludeFromSplit := []byte{'>', '<', '='} | ||
209 | for i := 0; i < len(s); i++ { | ||
210 | if s[i] == ' ' && !inArray(lastChar, excludeFromSplit) { | ||
211 | if last < i-1 { | ||
212 | result = append(result, s[last:i]) | ||
213 | } | ||
214 | last = i + 1 | ||
215 | } else if s[i] != ' ' { | ||
216 | lastChar = s[i] | ||
217 | } | ||
218 | } | ||
219 | if last < len(s)-1 { | ||
220 | result = append(result, s[last:]) | ||
221 | } | ||
222 | |||
223 | for i, v := range result { | ||
224 | result[i] = strings.Replace(v, " ", "", -1) | ||
225 | } | ||
226 | |||
227 | // parts := strings.Split(s, " ") | ||
228 | // for _, x := range parts { | ||
229 | // if s := strings.TrimSpace(x); len(s) != 0 { | ||
230 | // result = append(result, s) | ||
231 | // } | ||
232 | // } | ||
233 | return | ||
234 | } | ||
235 | |||
236 | // splitComparatorVersion splits the comparator from the version. | ||
237 | // Input must be free of leading or trailing spaces. | ||
238 | func splitComparatorVersion(s string) (string, string, error) { | ||
239 | i := strings.IndexFunc(s, unicode.IsDigit) | ||
240 | if i == -1 { | ||
241 | return "", "", fmt.Errorf("Could not get version from string: %q", s) | ||
242 | } | ||
243 | return strings.TrimSpace(s[0:i]), s[i:], nil | ||
244 | } | ||
245 | |||
246 | // getWildcardType will return the type of wildcard that the | ||
247 | // passed version contains | ||
248 | func getWildcardType(vStr string) wildcardType { | ||
249 | parts := strings.Split(vStr, ".") | ||
250 | nparts := len(parts) | ||
251 | wildcard := parts[nparts-1] | ||
252 | |||
253 | possibleWildcardType := wildcardTypefromInt(nparts) | ||
254 | if wildcard == "x" { | ||
255 | return possibleWildcardType | ||
256 | } | ||
257 | |||
258 | return noneWildcard | ||
259 | } | ||
260 | |||
261 | // createVersionFromWildcard will convert a wildcard version | ||
262 | // into a regular version, replacing 'x's with '0's, handling | ||
263 | // special cases like '1.x.x' and '1.x' | ||
264 | func createVersionFromWildcard(vStr string) string { | ||
265 | // handle 1.x.x | ||
266 | vStr2 := strings.Replace(vStr, ".x.x", ".x", 1) | ||
267 | vStr2 = strings.Replace(vStr2, ".x", ".0", 1) | ||
268 | parts := strings.Split(vStr2, ".") | ||
269 | |||
270 | // handle 1.x | ||
271 | if len(parts) == 2 { | ||
272 | return vStr2 + ".0" | ||
273 | } | ||
274 | |||
275 | return vStr2 | ||
276 | } | ||
277 | |||
278 | // incrementMajorVersion will increment the major version | ||
279 | // of the passed version | ||
280 | func incrementMajorVersion(vStr string) (string, error) { | ||
281 | parts := strings.Split(vStr, ".") | ||
282 | i, err := strconv.Atoi(parts[0]) | ||
283 | if err != nil { | ||
284 | return "", err | ||
285 | } | ||
286 | parts[0] = strconv.Itoa(i + 1) | ||
287 | |||
288 | return strings.Join(parts, "."), nil | ||
289 | } | ||
290 | |||
291 | // incrementMinorVersion will increment the minor version | ||
292 | // of the passed version | ||
293 | func incrementMinorVersion(vStr string) (string, error) { | ||
294 | parts := strings.Split(vStr, ".") | ||
295 | i, err := strconv.Atoi(parts[1]) | ||
296 | if err != nil { | ||
297 | return "", err | ||
298 | } | ||
299 | parts[1] = strconv.Itoa(i + 1) | ||
300 | |||
301 | return strings.Join(parts, "."), nil | ||
302 | } | ||
303 | |||
304 | // expandWildcardVersion will expand wildcards inside versions | ||
305 | // following these rules: | ||
306 | // | ||
307 | // * when dealing with patch wildcards: | ||
308 | // >= 1.2.x will become >= 1.2.0 | ||
309 | // <= 1.2.x will become < 1.3.0 | ||
310 | // > 1.2.x will become >= 1.3.0 | ||
311 | // < 1.2.x will become < 1.2.0 | ||
312 | // != 1.2.x will become < 1.2.0 >= 1.3.0 | ||
313 | // | ||
314 | // * when dealing with minor wildcards: | ||
315 | // >= 1.x will become >= 1.0.0 | ||
316 | // <= 1.x will become < 2.0.0 | ||
317 | // > 1.x will become >= 2.0.0 | ||
318 | // < 1.x will become < 1.0.0 | ||
319 | // != 1.x will become < 1.0.0 >= 2.0.0 | ||
320 | // | ||
321 | // * when dealing with wildcards without | ||
322 | // version operator: | ||
323 | // 1.2.x will become >= 1.2.0 < 1.3.0 | ||
324 | // 1.x will become >= 1.0.0 < 2.0.0 | ||
325 | func expandWildcardVersion(parts [][]string) ([][]string, error) { | ||
326 | var expandedParts [][]string | ||
327 | for _, p := range parts { | ||
328 | var newParts []string | ||
329 | for _, ap := range p { | ||
330 | if strings.Index(ap, "x") != -1 { | ||
331 | opStr, vStr, err := splitComparatorVersion(ap) | ||
332 | if err != nil { | ||
333 | return nil, err | ||
334 | } | ||
335 | |||
336 | versionWildcardType := getWildcardType(vStr) | ||
337 | flatVersion := createVersionFromWildcard(vStr) | ||
338 | |||
339 | var resultOperator string | ||
340 | var shouldIncrementVersion bool | ||
341 | switch opStr { | ||
342 | case ">": | ||
343 | resultOperator = ">=" | ||
344 | shouldIncrementVersion = true | ||
345 | case ">=": | ||
346 | resultOperator = ">=" | ||
347 | case "<": | ||
348 | resultOperator = "<" | ||
349 | case "<=": | ||
350 | resultOperator = "<" | ||
351 | shouldIncrementVersion = true | ||
352 | case "", "=", "==": | ||
353 | newParts = append(newParts, ">="+flatVersion) | ||
354 | resultOperator = "<" | ||
355 | shouldIncrementVersion = true | ||
356 | case "!=", "!": | ||
357 | newParts = append(newParts, "<"+flatVersion) | ||
358 | resultOperator = ">=" | ||
359 | shouldIncrementVersion = true | ||
360 | } | ||
361 | |||
362 | var resultVersion string | ||
363 | if shouldIncrementVersion { | ||
364 | switch versionWildcardType { | ||
365 | case patchWildcard: | ||
366 | resultVersion, _ = incrementMinorVersion(flatVersion) | ||
367 | case minorWildcard: | ||
368 | resultVersion, _ = incrementMajorVersion(flatVersion) | ||
369 | } | ||
370 | } else { | ||
371 | resultVersion = flatVersion | ||
372 | } | ||
373 | |||
374 | ap = resultOperator + resultVersion | ||
375 | } | ||
376 | newParts = append(newParts, ap) | ||
377 | } | ||
378 | expandedParts = append(expandedParts, newParts) | ||
379 | } | ||
380 | |||
381 | return expandedParts, nil | ||
382 | } | ||
383 | |||
384 | func parseComparator(s string) comparator { | ||
385 | switch s { | ||
386 | case "==": | ||
387 | fallthrough | ||
388 | case "": | ||
389 | fallthrough | ||
390 | case "=": | ||
391 | return compEQ | ||
392 | case ">": | ||
393 | return compGT | ||
394 | case ">=": | ||
395 | return compGE | ||
396 | case "<": | ||
397 | return compLT | ||
398 | case "<=": | ||
399 | return compLE | ||
400 | case "!": | ||
401 | fallthrough | ||
402 | case "!=": | ||
403 | return compNE | ||
404 | } | ||
405 | |||
406 | return nil | ||
407 | } | ||
408 | |||
409 | // MustParseRange is like ParseRange but panics if the range cannot be parsed. | ||
410 | func MustParseRange(s string) Range { | ||
411 | r, err := ParseRange(s) | ||
412 | if err != nil { | ||
413 | panic(`semver: ParseRange(` + s + `): ` + err.Error()) | ||
414 | } | ||
415 | return r | ||
416 | } | ||
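
Beyond the string syntax handled by `ParseRange`, the `AND`/`OR` methods above allow composing ranges programmatically. A minimal sketch (the range strings and version numbers are arbitrary examples):

```go
package main

import (
	"fmt"

	"github.com/blang/semver"
)

func main() {
	// Build ">=1.0.0 <2.0.0 || >=3.0.0" without parsing it as one string.
	oneX := semver.MustParseRange(">=1.0.0").AND(semver.MustParseRange("<2.0.0"))
	accepted := oneX.OR(semver.MustParseRange(">=3.0.0"))

	fmt.Println(accepted(semver.MustParse("1.5.0"))) // true
	fmt.Println(accepted(semver.MustParse("2.1.0"))) // false
	fmt.Println(accepted(semver.MustParse("3.0.1"))) // true
}
```
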
diff --git a/vendor/github.com/blang/semver/semver.go b/vendor/github.com/blang/semver/semver.go
new file mode 100644
index 0000000..8ee0842
--- /dev/null
+++ b/vendor/github.com/blang/semver/semver.go
@@ -0,0 +1,418 @@ | |||
1 | package semver | ||
2 | |||
3 | import ( | ||
4 | "errors" | ||
5 | "fmt" | ||
6 | "strconv" | ||
7 | "strings" | ||
8 | ) | ||
9 | |||
10 | const ( | ||
11 | numbers string = "0123456789" | ||
12 | alphas = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ-" | ||
13 | alphanum = alphas + numbers | ||
14 | ) | ||
15 | |||
16 | // SpecVersion is the latest fully supported spec version of semver | ||
17 | var SpecVersion = Version{ | ||
18 | Major: 2, | ||
19 | Minor: 0, | ||
20 | Patch: 0, | ||
21 | } | ||
22 | |||
23 | // Version represents a semver compatible version | ||
24 | type Version struct { | ||
25 | Major uint64 | ||
26 | Minor uint64 | ||
27 | Patch uint64 | ||
28 | Pre []PRVersion | ||
29 | Build []string // No Precedence | ||
30 | } | ||
31 | |||
32 | // Version to string | ||
33 | func (v Version) String() string { | ||
34 | b := make([]byte, 0, 5) | ||
35 | b = strconv.AppendUint(b, v.Major, 10) | ||
36 | b = append(b, '.') | ||
37 | b = strconv.AppendUint(b, v.Minor, 10) | ||
38 | b = append(b, '.') | ||
39 | b = strconv.AppendUint(b, v.Patch, 10) | ||
40 | |||
41 | if len(v.Pre) > 0 { | ||
42 | b = append(b, '-') | ||
43 | b = append(b, v.Pre[0].String()...) | ||
44 | |||
45 | for _, pre := range v.Pre[1:] { | ||
46 | b = append(b, '.') | ||
47 | b = append(b, pre.String()...) | ||
48 | } | ||
49 | } | ||
50 | |||
51 | if len(v.Build) > 0 { | ||
52 | b = append(b, '+') | ||
53 | b = append(b, v.Build[0]...) | ||
54 | |||
55 | for _, build := range v.Build[1:] { | ||
56 | b = append(b, '.') | ||
57 | b = append(b, build...) | ||
58 | } | ||
59 | } | ||
60 | |||
61 | return string(b) | ||
62 | } | ||
63 | |||
64 | // Equals checks if v is equal to o. | ||
65 | func (v Version) Equals(o Version) bool { | ||
66 | return (v.Compare(o) == 0) | ||
67 | } | ||
68 | |||
69 | // EQ checks if v is equal to o. | ||
70 | func (v Version) EQ(o Version) bool { | ||
71 | return (v.Compare(o) == 0) | ||
72 | } | ||
73 | |||
74 | // NE checks if v is not equal to o. | ||
75 | func (v Version) NE(o Version) bool { | ||
76 | return (v.Compare(o) != 0) | ||
77 | } | ||
78 | |||
79 | // GT checks if v is greater than o. | ||
80 | func (v Version) GT(o Version) bool { | ||
81 | return (v.Compare(o) == 1) | ||
82 | } | ||
83 | |||
84 | // GTE checks if v is greater than or equal to o. | ||
85 | func (v Version) GTE(o Version) bool { | ||
86 | return (v.Compare(o) >= 0) | ||
87 | } | ||
88 | |||
89 | // GE checks if v is greater than or equal to o. | ||
90 | func (v Version) GE(o Version) bool { | ||
91 | return (v.Compare(o) >= 0) | ||
92 | } | ||
93 | |||
94 | // LT checks if v is less than o. | ||
95 | func (v Version) LT(o Version) bool { | ||
96 | return (v.Compare(o) == -1) | ||
97 | } | ||
98 | |||
99 | // LTE checks if v is less than or equal to o. | ||
100 | func (v Version) LTE(o Version) bool { | ||
101 | return (v.Compare(o) <= 0) | ||
102 | } | ||
103 | |||
104 | // LE checks if v is less than or equal to o. | ||
105 | func (v Version) LE(o Version) bool { | ||
106 | return (v.Compare(o) <= 0) | ||
107 | } | ||
108 | |||
109 | // Compare compares Versions v to o: | ||
110 | // -1 == v is less than o | ||
111 | // 0 == v is equal to o | ||
112 | // 1 == v is greater than o | ||
113 | func (v Version) Compare(o Version) int { | ||
114 | if v.Major != o.Major { | ||
115 | if v.Major > o.Major { | ||
116 | return 1 | ||
117 | } | ||
118 | return -1 | ||
119 | } | ||
120 | if v.Minor != o.Minor { | ||
121 | if v.Minor > o.Minor { | ||
122 | return 1 | ||
123 | } | ||
124 | return -1 | ||
125 | } | ||
126 | if v.Patch != o.Patch { | ||
127 | if v.Patch > o.Patch { | ||
128 | return 1 | ||
129 | } | ||
130 | return -1 | ||
131 | } | ||
132 | |||
133 | // Quick comparison if a version has no prerelease versions | ||
134 | if len(v.Pre) == 0 && len(o.Pre) == 0 { | ||
135 | return 0 | ||
136 | } else if len(v.Pre) == 0 && len(o.Pre) > 0 { | ||
137 | return 1 | ||
138 | } else if len(v.Pre) > 0 && len(o.Pre) == 0 { | ||
139 | return -1 | ||
140 | } | ||
141 | |||
142 | i := 0 | ||
143 | for ; i < len(v.Pre) && i < len(o.Pre); i++ { | ||
144 | if comp := v.Pre[i].Compare(o.Pre[i]); comp == 0 { | ||
145 | continue | ||
146 | } else if comp == 1 { | ||
147 | return 1 | ||
148 | } else { | ||
149 | return -1 | ||
150 | } | ||
151 | } | ||
152 | |||
153 | // If all pre-release identifiers compare equal, the version with more identifiers is greater | ||
154 | if i == len(v.Pre) && i == len(o.Pre) { | ||
155 | return 0 | ||
156 | } else if i == len(v.Pre) && i < len(o.Pre) { | ||
157 | return -1 | ||
158 | } else { | ||
159 | return 1 | ||
160 | } | ||
161 | |||
162 | } | ||
163 | |||
164 | // Validate validates v and returns an error if it is invalid | ||
165 | func (v Version) Validate() error { | ||
166 | // Major, Minor, Patch already validated using uint64 | ||
167 | |||
168 | for _, pre := range v.Pre { | ||
169 | if !pre.IsNum { //Numeric prerelease versions already uint64 | ||
170 | if len(pre.VersionStr) == 0 { | ||
171 | return fmt.Errorf("Prerelease can not be empty %q", pre.VersionStr) | ||
172 | } | ||
173 | if !containsOnly(pre.VersionStr, alphanum) { | ||
174 | return fmt.Errorf("Invalid character(s) found in prerelease %q", pre.VersionStr) | ||
175 | } | ||
176 | } | ||
177 | } | ||
178 | |||
179 | for _, build := range v.Build { | ||
180 | if len(build) == 0 { | ||
181 | return fmt.Errorf("Build meta data can not be empty %q", build) | ||
182 | } | ||
183 | if !containsOnly(build, alphanum) { | ||
184 | return fmt.Errorf("Invalid character(s) found in build meta data %q", build) | ||
185 | } | ||
186 | } | ||
187 | |||
188 | return nil | ||
189 | } | ||
190 | |||
191 | // New is an alias for Parse that returns a pointer: it parses the version string and returns a pointer to a validated Version, or an error | ||
192 | func New(s string) (vp *Version, err error) { | ||
193 | v, err := Parse(s) | ||
194 | vp = &v | ||
195 | return | ||
196 | } | ||
197 | |||
198 | // Make is an alias for Parse, parses version string and returns a validated Version or error | ||
199 | func Make(s string) (Version, error) { | ||
200 | return Parse(s) | ||
201 | } | ||
202 | |||
203 | // ParseTolerant allows for certain version specifications that do not strictly adhere to semver | ||
204 | // specs to be parsed by this library. It does so by normalizing versions before passing them to | ||
205 | // Parse(). It currently trims spaces, removes a "v" prefix, and adds a 0 patch number to versions | ||
206 | // with only major and minor components specified | ||
207 | func ParseTolerant(s string) (Version, error) { | ||
208 | s = strings.TrimSpace(s) | ||
209 | s = strings.TrimPrefix(s, "v") | ||
210 | |||
211 | // Split into major.minor.(patch+pr+meta) | ||
212 | parts := strings.SplitN(s, ".", 3) | ||
213 | if len(parts) < 3 { | ||
214 | if strings.ContainsAny(parts[len(parts)-1], "+-") { | ||
215 | return Version{}, errors.New("Short version cannot contain PreRelease/Build meta data") | ||
216 | } | ||
217 | for len(parts) < 3 { | ||
218 | parts = append(parts, "0") | ||
219 | } | ||
220 | s = strings.Join(parts, ".") | ||
221 | } | ||
222 | |||
223 | return Parse(s) | ||
224 | } | ||
225 | |||
226 | // Parse parses version string and returns a validated Version or error | ||
227 | func Parse(s string) (Version, error) { | ||
228 | if len(s) == 0 { | ||
229 | return Version{}, errors.New("Version string empty") | ||
230 | } | ||
231 | |||
232 | // Split into major.minor.(patch+pr+meta) | ||
233 | parts := strings.SplitN(s, ".", 3) | ||
234 | if len(parts) != 3 { | ||
235 | return Version{}, errors.New("No Major.Minor.Patch elements found") | ||
236 | } | ||
237 | |||
238 | // Major | ||
239 | if !containsOnly(parts[0], numbers) { | ||
240 | return Version{}, fmt.Errorf("Invalid character(s) found in major number %q", parts[0]) | ||
241 | } | ||
242 | if hasLeadingZeroes(parts[0]) { | ||
243 | return Version{}, fmt.Errorf("Major number must not contain leading zeroes %q", parts[0]) | ||
244 | } | ||
245 | major, err := strconv.ParseUint(parts[0], 10, 64) | ||
246 | if err != nil { | ||
247 | return Version{}, err | ||
248 | } | ||
249 | |||
250 | // Minor | ||
251 | if !containsOnly(parts[1], numbers) { | ||
252 | return Version{}, fmt.Errorf("Invalid character(s) found in minor number %q", parts[1]) | ||
253 | } | ||
254 | if hasLeadingZeroes(parts[1]) { | ||
255 | return Version{}, fmt.Errorf("Minor number must not contain leading zeroes %q", parts[1]) | ||
256 | } | ||
257 | minor, err := strconv.ParseUint(parts[1], 10, 64) | ||
258 | if err != nil { | ||
259 | return Version{}, err | ||
260 | } | ||
261 | |||
262 | v := Version{} | ||
263 | v.Major = major | ||
264 | v.Minor = minor | ||
265 | |||
266 | var build, prerelease []string | ||
267 | patchStr := parts[2] | ||
268 | |||
269 | if buildIndex := strings.IndexRune(patchStr, '+'); buildIndex != -1 { | ||
270 | build = strings.Split(patchStr[buildIndex+1:], ".") | ||
271 | patchStr = patchStr[:buildIndex] | ||
272 | } | ||
273 | |||
274 | if preIndex := strings.IndexRune(patchStr, '-'); preIndex != -1 { | ||
275 | prerelease = strings.Split(patchStr[preIndex+1:], ".") | ||
276 | patchStr = patchStr[:preIndex] | ||
277 | } | ||
278 | |||
279 | if !containsOnly(patchStr, numbers) { | ||
280 | return Version{}, fmt.Errorf("Invalid character(s) found in patch number %q", patchStr) | ||
281 | } | ||
282 | if hasLeadingZeroes(patchStr) { | ||
283 | return Version{}, fmt.Errorf("Patch number must not contain leading zeroes %q", patchStr) | ||
284 | } | ||
285 | patch, err := strconv.ParseUint(patchStr, 10, 64) | ||
286 | if err != nil { | ||
287 | return Version{}, err | ||
288 | } | ||
289 | |||
290 | v.Patch = patch | ||
291 | |||
292 | // Prerelease | ||
293 | for _, prstr := range prerelease { | ||
294 | parsedPR, err := NewPRVersion(prstr) | ||
295 | if err != nil { | ||
296 | return Version{}, err | ||
297 | } | ||
298 | v.Pre = append(v.Pre, parsedPR) | ||
299 | } | ||
300 | |||
301 | // Build meta data | ||
302 | for _, str := range build { | ||
303 | if len(str) == 0 { | ||
304 | return Version{}, errors.New("Build meta data is empty") | ||
305 | } | ||
306 | if !containsOnly(str, alphanum) { | ||
307 | return Version{}, fmt.Errorf("Invalid character(s) found in build meta data %q", str) | ||
308 | } | ||
309 | v.Build = append(v.Build, str) | ||
310 | } | ||
311 | |||
312 | return v, nil | ||
313 | } | ||
314 | |||
315 | // MustParse is like Parse but panics if the version cannot be parsed. | ||
316 | func MustParse(s string) Version { | ||
317 | v, err := Parse(s) | ||
318 | if err != nil { | ||
319 | panic(`semver: Parse(` + s + `): ` + err.Error()) | ||
320 | } | ||
321 | return v | ||
322 | } | ||
323 | |||
324 | // PRVersion represents a PreRelease Version | ||
325 | type PRVersion struct { | ||
326 | VersionStr string | ||
327 | VersionNum uint64 | ||
328 | IsNum bool | ||
329 | } | ||
330 | |||
331 | // NewPRVersion creates a new valid prerelease version | ||
332 | func NewPRVersion(s string) (PRVersion, error) { | ||
333 | if len(s) == 0 { | ||
334 | return PRVersion{}, errors.New("Prerelease is empty") | ||
335 | } | ||
336 | v := PRVersion{} | ||
337 | if containsOnly(s, numbers) { | ||
338 | if hasLeadingZeroes(s) { | ||
339 | return PRVersion{}, fmt.Errorf("Numeric PreRelease version must not contain leading zeroes %q", s) | ||
340 | } | ||
341 | num, err := strconv.ParseUint(s, 10, 64) | ||
342 | |||
343 | // Might never be hit, but just in case | ||
344 | if err != nil { | ||
345 | return PRVersion{}, err | ||
346 | } | ||
347 | v.VersionNum = num | ||
348 | v.IsNum = true | ||
349 | } else if containsOnly(s, alphanum) { | ||
350 | v.VersionStr = s | ||
351 | v.IsNum = false | ||
352 | } else { | ||
353 | return PRVersion{}, fmt.Errorf("Invalid character(s) found in prerelease %q", s) | ||
354 | } | ||
355 | return v, nil | ||
356 | } | ||
357 | |||
358 | // IsNumeric checks if prerelease-version is numeric | ||
359 | func (v PRVersion) IsNumeric() bool { | ||
360 | return v.IsNum | ||
361 | } | ||
362 | |||
363 | // Compare compares two PreRelease Versions v and o: | ||
364 | // -1 == v is less than o | ||
365 | // 0 == v is equal to o | ||
366 | // 1 == v is greater than o | ||
367 | func (v PRVersion) Compare(o PRVersion) int { | ||
368 | if v.IsNum && !o.IsNum { | ||
369 | return -1 | ||
370 | } else if !v.IsNum && o.IsNum { | ||
371 | return 1 | ||
372 | } else if v.IsNum && o.IsNum { | ||
373 | if v.VersionNum == o.VersionNum { | ||
374 | return 0 | ||
375 | } else if v.VersionNum > o.VersionNum { | ||
376 | return 1 | ||
377 | } else { | ||
378 | return -1 | ||
379 | } | ||
380 | } else { // both are Alphas | ||
381 | if v.VersionStr == o.VersionStr { | ||
382 | return 0 | ||
383 | } else if v.VersionStr > o.VersionStr { | ||
384 | return 1 | ||
385 | } else { | ||
386 | return -1 | ||
387 | } | ||
388 | } | ||
389 | } | ||
390 | |||
391 | // PreRelease version to string | ||
392 | func (v PRVersion) String() string { | ||
393 | if v.IsNum { | ||
394 | return strconv.FormatUint(v.VersionNum, 10) | ||
395 | } | ||
396 | return v.VersionStr | ||
397 | } | ||
398 | |||
399 | func containsOnly(s string, set string) bool { | ||
400 | return strings.IndexFunc(s, func(r rune) bool { | ||
401 | return !strings.ContainsRune(set, r) | ||
402 | }) == -1 | ||
403 | } | ||
404 | |||
405 | func hasLeadingZeroes(s string) bool { | ||
406 | return len(s) > 1 && s[0] == '0' | ||
407 | } | ||
408 | |||
409 | // NewBuildVersion creates a new valid build version | ||
410 | func NewBuildVersion(s string) (string, error) { | ||
411 | if len(s) == 0 { | ||
412 | return "", errors.New("Buildversion is empty") | ||
413 | } | ||
414 | if !containsOnly(s, alphanum) { | ||
415 | return "", fmt.Errorf("Invalid character(s) found in build meta data %q", s) | ||
416 | } | ||
417 | return s, nil | ||
418 | } | ||
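
`ParseTolerant` above accepts a few relaxed forms that the strict `Parse` rejects. A minimal sketch of the difference (the input strings are arbitrary examples):

```go
package main

import (
	"fmt"

	"github.com/blang/semver"
)

func main() {
	// Parse insists on a full Major.Minor.Patch version.
	_, err := semver.Parse("v1.2")
	fmt.Println(err != nil) // true

	// ParseTolerant trims spaces, strips a leading "v" and pads the missing patch number.
	v, err := semver.ParseTolerant(" v1.2 ")
	fmt.Println(v.String(), err) // 1.2.0 <nil>
}
```
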
diff --git a/vendor/github.com/blang/semver/sort.go b/vendor/github.com/blang/semver/sort.go
new file mode 100644
index 0000000..e18f880
--- /dev/null
+++ b/vendor/github.com/blang/semver/sort.go
@@ -0,0 +1,28 @@ | |||
1 | package semver | ||
2 | |||
3 | import ( | ||
4 | "sort" | ||
5 | ) | ||
6 | |||
7 | // Versions represents multiple versions. | ||
8 | type Versions []Version | ||
9 | |||
10 | // Len returns length of version collection | ||
11 | func (s Versions) Len() int { | ||
12 | return len(s) | ||
13 | } | ||
14 | |||
15 | // Swap swaps two versions inside the collection by its indices | ||
16 | func (s Versions) Swap(i, j int) { | ||
17 | s[i], s[j] = s[j], s[i] | ||
18 | } | ||
19 | |||
20 | // Less checks if version at index i is less than version at index j | ||
21 | func (s Versions) Less(i, j int) bool { | ||
22 | return s[i].LT(s[j]) | ||
23 | } | ||
24 | |||
25 | // Sort sorts a slice of versions | ||
26 | func Sort(versions []Version) { | ||
27 | sort.Sort(Versions(versions)) | ||
28 | } | ||
diff --git a/vendor/github.com/blang/semver/sql.go b/vendor/github.com/blang/semver/sql.go
new file mode 100644
index 0000000..eb4d802
--- /dev/null
+++ b/vendor/github.com/blang/semver/sql.go
@@ -0,0 +1,30 @@ | |||
1 | package semver | ||
2 | |||
3 | import ( | ||
4 | "database/sql/driver" | ||
5 | "fmt" | ||
6 | ) | ||
7 | |||
8 | // Scan implements the database/sql.Scanner interface. | ||
9 | func (v *Version) Scan(src interface{}) (err error) { | ||
10 | var str string | ||
11 | switch src := src.(type) { | ||
12 | case string: | ||
13 | str = src | ||
14 | case []byte: | ||
15 | str = string(src) | ||
16 | default: | ||
17 | return fmt.Errorf("Version.Scan: cannot convert %T to string.", src) | ||
18 | } | ||
19 | |||
20 | if t, err := Parse(str); err == nil { | ||
21 | *v = t | ||
22 | } | ||
23 | |||
24 | return | ||
25 | } | ||
26 | |||
27 | // Value implements the database/sql/driver.Valuer interface. | ||
28 | func (v Version) Value() (driver.Value, error) { | ||
29 | return v.String(), nil | ||
30 | } | ||
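
Because `Version` implements `sql.Scanner` and `driver.Valuer` above, it can be stored in and read from a database column as a string. A minimal sketch that exercises both methods directly, without a real database:

```go
package main

import (
	"fmt"

	"github.com/blang/semver"
)

func main() {
	// Scan accepts the string or []byte a database driver hands back.
	var v semver.Version
	if err := v.Scan([]byte("1.2.3")); err != nil {
		fmt.Println("scan failed:", err)
	}

	// Value returns the canonical string form for storage.
	val, _ := v.Value()
	fmt.Println(val) // 1.2.3
}
```
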
diff --git a/vendor/github.com/hashicorp/go-cleanhttp/LICENSE b/vendor/github.com/hashicorp/go-cleanhttp/LICENSE
new file mode 100644
index 0000000..e87a115
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-cleanhttp/LICENSE
@@ -0,0 +1,363 @@ | |||
1 | Mozilla Public License, version 2.0 | ||
2 | |||
3 | 1. Definitions | ||
4 | |||
5 | 1.1. "Contributor" | ||
6 | |||
7 | means each individual or legal entity that creates, contributes to the | ||
8 | creation of, or owns Covered Software. | ||
9 | |||
10 | 1.2. "Contributor Version" | ||
11 | |||
12 | means the combination of the Contributions of others (if any) used by a | ||
13 | Contributor and that particular Contributor's Contribution. | ||
14 | |||
15 | 1.3. "Contribution" | ||
16 | |||
17 | means Covered Software of a particular Contributor. | ||
18 | |||
19 | 1.4. "Covered Software" | ||
20 | |||
21 | means Source Code Form to which the initial Contributor has attached the | ||
22 | notice in Exhibit A, the Executable Form of such Source Code Form, and | ||
23 | Modifications of such Source Code Form, in each case including portions | ||
24 | thereof. | ||
25 | |||
26 | 1.5. "Incompatible With Secondary Licenses" | ||
27 | means | ||
28 | |||
29 | a. that the initial Contributor has attached the notice described in | ||
30 | Exhibit B to the Covered Software; or | ||
31 | |||
32 | b. that the Covered Software was made available under the terms of | ||
33 | version 1.1 or earlier of the License, but not also under the terms of | ||
34 | a Secondary License. | ||
35 | |||
36 | 1.6. "Executable Form" | ||
37 | |||
38 | means any form of the work other than Source Code Form. | ||
39 | |||
40 | 1.7. "Larger Work" | ||
41 | |||
42 | means a work that combines Covered Software with other material, in a | ||
43 | separate file or files, that is not Covered Software. | ||
44 | |||
45 | 1.8. "License" | ||
46 | |||
47 | means this document. | ||
48 | |||
49 | 1.9. "Licensable" | ||
50 | |||
51 | means having the right to grant, to the maximum extent possible, whether | ||
52 | at the time of the initial grant or subsequently, any and all of the | ||
53 | rights conveyed by this License. | ||
54 | |||
55 | 1.10. "Modifications" | ||
56 | |||
57 | means any of the following: | ||
58 | |||
59 | a. any file in Source Code Form that results from an addition to, | ||
60 | deletion from, or modification of the contents of Covered Software; or | ||
61 | |||
62 | b. any new file in Source Code Form that contains any Covered Software. | ||
63 | |||
64 | 1.11. "Patent Claims" of a Contributor | ||
65 | |||
66 | means any patent claim(s), including without limitation, method, | ||
67 | process, and apparatus claims, in any patent Licensable by such | ||
68 | Contributor that would be infringed, but for the grant of the License, | ||
69 | by the making, using, selling, offering for sale, having made, import, | ||
70 | or transfer of either its Contributions or its Contributor Version. | ||
71 | |||
72 | 1.12. "Secondary License" | ||
73 | |||
74 | means either the GNU General Public License, Version 2.0, the GNU Lesser | ||
75 | General Public License, Version 2.1, the GNU Affero General Public | ||
76 | License, Version 3.0, or any later versions of those licenses. | ||
77 | |||
78 | 1.13. "Source Code Form" | ||
79 | |||
80 | means the form of the work preferred for making modifications. | ||
81 | |||
82 | 1.14. "You" (or "Your") | ||
83 | |||
84 | means an individual or a legal entity exercising rights under this | ||
85 | License. For legal entities, "You" includes any entity that controls, is | ||
86 | controlled by, or is under common control with You. For purposes of this | ||
87 | definition, "control" means (a) the power, direct or indirect, to cause | ||
88 | the direction or management of such entity, whether by contract or | ||
89 | otherwise, or (b) ownership of more than fifty percent (50%) of the | ||
90 | outstanding shares or beneficial ownership of such entity. | ||
91 | |||
92 | |||
93 | 2. License Grants and Conditions | ||
94 | |||
95 | 2.1. Grants | ||
96 | |||
97 | Each Contributor hereby grants You a world-wide, royalty-free, | ||
98 | non-exclusive license: | ||
99 | |||
100 | a. under intellectual property rights (other than patent or trademark) | ||
101 | Licensable by such Contributor to use, reproduce, make available, | ||
102 | modify, display, perform, distribute, and otherwise exploit its | ||
103 | Contributions, either on an unmodified basis, with Modifications, or | ||
104 | as part of a Larger Work; and | ||
105 | |||
106 | b. under Patent Claims of such Contributor to make, use, sell, offer for | ||
107 | sale, have made, import, and otherwise transfer either its | ||
108 | Contributions or its Contributor Version. | ||
109 | |||
110 | 2.2. Effective Date | ||
111 | |||
112 | The licenses granted in Section 2.1 with respect to any Contribution | ||
113 | become effective for each Contribution on the date the Contributor first | ||
114 | distributes such Contribution. | ||
115 | |||
116 | 2.3. Limitations on Grant Scope | ||
117 | |||
118 | The licenses granted in this Section 2 are the only rights granted under | ||
119 | this License. No additional rights or licenses will be implied from the | ||
120 | distribution or licensing of Covered Software under this License. | ||
121 | Notwithstanding Section 2.1(b) above, no patent license is granted by a | ||
122 | Contributor: | ||
123 | |||
124 | a. for any code that a Contributor has removed from Covered Software; or | ||
125 | |||
126 | b. for infringements caused by: (i) Your and any other third party's | ||
127 | modifications of Covered Software, or (ii) the combination of its | ||
128 | Contributions with other software (except as part of its Contributor | ||
129 | Version); or | ||
130 | |||
131 | c. under Patent Claims infringed by Covered Software in the absence of | ||
132 | its Contributions. | ||
133 | |||
134 | This License does not grant any rights in the trademarks, service marks, | ||
135 | or logos of any Contributor (except as may be necessary to comply with | ||
136 | the notice requirements in Section 3.4). | ||
137 | |||
138 | 2.4. Subsequent Licenses | ||
139 | |||
140 | No Contributor makes additional grants as a result of Your choice to | ||
141 | distribute the Covered Software under a subsequent version of this | ||
142 | License (see Section 10.2) or under the terms of a Secondary License (if | ||
143 | permitted under the terms of Section 3.3). | ||
144 | |||
145 | 2.5. Representation | ||
146 | |||
147 | Each Contributor represents that the Contributor believes its | ||
148 | Contributions are its original creation(s) or it has sufficient rights to | ||
149 | grant the rights to its Contributions conveyed by this License. | ||
150 | |||
151 | 2.6. Fair Use | ||
152 | |||
153 | This License is not intended to limit any rights You have under | ||
154 | applicable copyright doctrines of fair use, fair dealing, or other | ||
155 | equivalents. | ||
156 | |||
157 | 2.7. Conditions | ||
158 | |||
159 | Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in | ||
160 | Section 2.1. | ||
161 | |||
162 | |||
163 | 3. Responsibilities | ||
164 | |||
165 | 3.1. Distribution of Source Form | ||
166 | |||
167 | All distribution of Covered Software in Source Code Form, including any | ||
168 | Modifications that You create or to which You contribute, must be under | ||
169 | the terms of this License. You must inform recipients that the Source | ||
170 | Code Form of the Covered Software is governed by the terms of this | ||
171 | License, and how they can obtain a copy of this License. You may not | ||
172 | attempt to alter or restrict the recipients' rights in the Source Code | ||
173 | Form. | ||
174 | |||
175 | 3.2. Distribution of Executable Form | ||
176 | |||
177 | If You distribute Covered Software in Executable Form then: | ||
178 | |||
179 | a. such Covered Software must also be made available in Source Code Form, | ||
180 | as described in Section 3.1, and You must inform recipients of the | ||
181 | Executable Form how they can obtain a copy of such Source Code Form by | ||
182 | reasonable means in a timely manner, at a charge no more than the cost | ||
183 | of distribution to the recipient; and | ||
184 | |||
185 | b. You may distribute such Executable Form under the terms of this | ||
186 | License, or sublicense it under different terms, provided that the | ||
187 | license for the Executable Form does not attempt to limit or alter the | ||
188 | recipients' rights in the Source Code Form under this License. | ||
189 | |||
190 | 3.3. Distribution of a Larger Work | ||
191 | |||
192 | You may create and distribute a Larger Work under terms of Your choice, | ||
193 | provided that You also comply with the requirements of this License for | ||
194 | the Covered Software. If the Larger Work is a combination of Covered | ||
195 | Software with a work governed by one or more Secondary Licenses, and the | ||
196 | Covered Software is not Incompatible With Secondary Licenses, this | ||
197 | License permits You to additionally distribute such Covered Software | ||
198 | under the terms of such Secondary License(s), so that the recipient of | ||
199 | the Larger Work may, at their option, further distribute the Covered | ||
200 | Software under the terms of either this License or such Secondary | ||
201 | License(s). | ||
202 | |||
203 | 3.4. Notices | ||
204 | |||
205 | You may not remove or alter the substance of any license notices | ||
206 | (including copyright notices, patent notices, disclaimers of warranty, or | ||
207 | limitations of liability) contained within the Source Code Form of the | ||
208 | Covered Software, except that You may alter any license notices to the | ||
209 | extent required to remedy known factual inaccuracies. | ||
210 | |||
211 | 3.5. Application of Additional Terms | ||
212 | |||
213 | You may choose to offer, and to charge a fee for, warranty, support, | ||
214 | indemnity or liability obligations to one or more recipients of Covered | ||
215 | Software. However, You may do so only on Your own behalf, and not on | ||
216 | behalf of any Contributor. You must make it absolutely clear that any | ||
217 | such warranty, support, indemnity, or liability obligation is offered by | ||
218 | You alone, and You hereby agree to indemnify every Contributor for any | ||
219 | liability incurred by such Contributor as a result of warranty, support, | ||
220 | indemnity or liability terms You offer. You may include additional | ||
221 | disclaimers of warranty and limitations of liability specific to any | ||
222 | jurisdiction. | ||
223 | |||
224 | 4. Inability to Comply Due to Statute or Regulation | ||
225 | |||
226 | If it is impossible for You to comply with any of the terms of this License | ||
227 | with respect to some or all of the Covered Software due to statute, | ||
228 | judicial order, or regulation then You must: (a) comply with the terms of | ||
229 | this License to the maximum extent possible; and (b) describe the | ||
230 | limitations and the code they affect. Such description must be placed in a | ||
231 | text file included with all distributions of the Covered Software under | ||
232 | this License. Except to the extent prohibited by statute or regulation, | ||
233 | such description must be sufficiently detailed for a recipient of ordinary | ||
234 | skill to be able to understand it. | ||
235 | |||
236 | 5. Termination | ||
237 | |||
238 | 5.1. The rights granted under this License will terminate automatically if You | ||
239 | fail to comply with any of its terms. However, if You become compliant, | ||
240 | then the rights granted under this License from a particular Contributor | ||
241 | are reinstated (a) provisionally, unless and until such Contributor | ||
242 | explicitly and finally terminates Your grants, and (b) on an ongoing | ||
243 | basis, if such Contributor fails to notify You of the non-compliance by | ||
244 | some reasonable means prior to 60 days after You have come back into | ||
245 | compliance. Moreover, Your grants from a particular Contributor are | ||
246 | reinstated on an ongoing basis if such Contributor notifies You of the | ||
247 | non-compliance by some reasonable means, this is the first time You have | ||
248 | received notice of non-compliance with this License from such | ||
249 | Contributor, and You become compliant prior to 30 days after Your receipt | ||
250 | of the notice. | ||
251 | |||
252 | 5.2. If You initiate litigation against any entity by asserting a patent | ||
253 | infringement claim (excluding declaratory judgment actions, | ||
254 | counter-claims, and cross-claims) alleging that a Contributor Version | ||
255 | directly or indirectly infringes any patent, then the rights granted to | ||
256 | You by any and all Contributors for the Covered Software under Section | ||
257 | 2.1 of this License shall terminate. | ||
258 | |||
259 | 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user | ||
260 | license agreements (excluding distributors and resellers) which have been | ||
261 | validly granted by You or Your distributors under this License prior to | ||
262 | termination shall survive termination. | ||
263 | |||
264 | 6. Disclaimer of Warranty | ||
265 | |||
266 | Covered Software is provided under this License on an "as is" basis, | ||
267 | without warranty of any kind, either expressed, implied, or statutory, | ||
268 | including, without limitation, warranties that the Covered Software is free | ||
269 | of defects, merchantable, fit for a particular purpose or non-infringing. | ||
270 | The entire risk as to the quality and performance of the Covered Software | ||
271 | is with You. Should any Covered Software prove defective in any respect, | ||
272 | You (not any Contributor) assume the cost of any necessary servicing, | ||
273 | repair, or correction. This disclaimer of warranty constitutes an essential | ||
274 | part of this License. No use of any Covered Software is authorized under | ||
275 | this License except under this disclaimer. | ||
276 | |||
277 | 7. Limitation of Liability | ||
278 | |||
279 | Under no circumstances and under no legal theory, whether tort (including | ||
280 | negligence), contract, or otherwise, shall any Contributor, or anyone who | ||
281 | distributes Covered Software as permitted above, be liable to You for any | ||
282 | direct, indirect, special, incidental, or consequential damages of any | ||
283 | character including, without limitation, damages for lost profits, loss of | ||
284 | goodwill, work stoppage, computer failure or malfunction, or any and all | ||
285 | other commercial damages or losses, even if such party shall have been | ||
286 | informed of the possibility of such damages. This limitation of liability | ||
287 | shall not apply to liability for death or personal injury resulting from | ||
288 | such party's negligence to the extent applicable law prohibits such | ||
289 | limitation. Some jurisdictions do not allow the exclusion or limitation of | ||
290 | incidental or consequential damages, so this exclusion and limitation may | ||
291 | not apply to You. | ||
292 | |||
293 | 8. Litigation | ||
294 | |||
295 | Any litigation relating to this License may be brought only in the courts | ||
296 | of a jurisdiction where the defendant maintains its principal place of | ||
297 | business and such litigation shall be governed by laws of that | ||
298 | jurisdiction, without reference to its conflict-of-law provisions. Nothing | ||
299 | in this Section shall prevent a party's ability to bring cross-claims or | ||
300 | counter-claims. | ||
301 | |||
302 | 9. Miscellaneous | ||
303 | |||
304 | This License represents the complete agreement concerning the subject | ||
305 | matter hereof. If any provision of this License is held to be | ||
306 | unenforceable, such provision shall be reformed only to the extent | ||
307 | necessary to make it enforceable. Any law or regulation which provides that | ||
308 | the language of a contract shall be construed against the drafter shall not | ||
309 | be used to construe this License against a Contributor. | ||
310 | |||
311 | |||
312 | 10. Versions of the License | ||
313 | |||
314 | 10.1. New Versions | ||
315 | |||
316 | Mozilla Foundation is the license steward. Except as provided in Section | ||
317 | 10.3, no one other than the license steward has the right to modify or | ||
318 | publish new versions of this License. Each version will be given a | ||
319 | distinguishing version number. | ||
320 | |||
321 | 10.2. Effect of New Versions | ||
322 | |||
323 | You may distribute the Covered Software under the terms of the version | ||
324 | of the License under which You originally received the Covered Software, | ||
325 | or under the terms of any subsequent version published by the license | ||
326 | steward. | ||
327 | |||
328 | 10.3. Modified Versions | ||
329 | |||
330 | If you create software not governed by this License, and you want to | ||
331 | create a new license for such software, you may create and use a | ||
332 | modified version of this License if you rename the license and remove | ||
333 | any references to the name of the license steward (except to note that | ||
334 | such modified license differs from this License). | ||
335 | |||
336 | 10.4. Distributing Source Code Form that is Incompatible With Secondary | ||
337 | Licenses If You choose to distribute Source Code Form that is | ||
338 | Incompatible With Secondary Licenses under the terms of this version of | ||
339 | the License, the notice described in Exhibit B of this License must be | ||
340 | attached. | ||
341 | |||
342 | Exhibit A - Source Code Form License Notice | ||
343 | |||
344 | This Source Code Form is subject to the | ||
345 | terms of the Mozilla Public License, v. | ||
346 | 2.0. If a copy of the MPL was not | ||
347 | distributed with this file, You can | ||
348 | obtain one at | ||
349 | http://mozilla.org/MPL/2.0/. | ||
350 | |||
351 | If it is not possible or desirable to put the notice in a particular file, | ||
352 | then You may include the notice in a location (such as a LICENSE file in a | ||
353 | relevant directory) where a recipient would be likely to look for such a | ||
354 | notice. | ||
355 | |||
356 | You may add additional accurate notices of copyright ownership. | ||
357 | |||
358 | Exhibit B - "Incompatible With Secondary Licenses" Notice | ||
359 | |||
360 | This Source Code Form is "Incompatible | ||
361 | With Secondary Licenses", as defined by | ||
362 | the Mozilla Public License, v. 2.0. | ||
363 | |||
diff --git a/vendor/github.com/hashicorp/go-cleanhttp/README.md b/vendor/github.com/hashicorp/go-cleanhttp/README.md new file mode 100644 index 0000000..036e531 --- /dev/null +++ b/vendor/github.com/hashicorp/go-cleanhttp/README.md | |||
@@ -0,0 +1,30 @@ | |||
1 | # cleanhttp | ||
2 | |||
3 | Functions for accessing "clean" Go http.Client values | ||
4 | |||
5 | ------------- | ||
6 | |||
7 | The Go standard library contains a default `http.Client` called | ||
8 | `http.DefaultClient`. It is a common idiom in Go code to start with | ||
9 | `http.DefaultClient` and tweak it as necessary, and in fact, this is | ||
10 | encouraged; from the `http` package documentation: | ||
11 | |||
12 | > The Client's Transport typically has internal state (cached TCP connections), | ||
13 | so Clients should be reused instead of created as needed. Clients are safe for | ||
14 | concurrent use by multiple goroutines. | ||
15 | |||
16 | Unfortunately, this is a shared value, and it is not uncommon for libraries to | ||
17 | assume that they are free to modify it at will. With enough dependencies, it | ||
18 | can be very easy to encounter strange problems and race conditions due to | ||
19 | manipulation of this shared value across libraries and goroutines (clients are | ||
20 | safe for concurrent use, but writing values to the client struct itself is not | ||
21 | protected). | ||
22 | |||
23 | Making things worse is the fact that a bare `http.Client` will use a default | ||
24 | `http.Transport` called `http.DefaultTransport`, which is another global value | ||
25 | that behaves the same way. So it is not simply enough to replace | ||
26 | `http.DefaultClient` with `&http.Client{}`. | ||
27 | |||
28 | This repository provides some simple functions to get a "clean" `http.Client` | ||
29 | -- one that uses the same default values as the Go standard library, but | ||
30 | returns a client that does not share any state with other clients. | ||
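Note: a minimal usage sketch (not part of the vendored README), showing how a caller might obtain a non-shared client; the request URL is illustrative only.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/go-cleanhttp"
)

func main() {
	// cleanhttp.DefaultClient returns a client with its own Transport,
	// so tweaking it cannot affect http.DefaultClient or other callers.
	client := cleanhttp.DefaultClient()

	resp, err := client.Get("https://example.com/") // illustrative URL
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	fmt.Println("status:", resp.StatusCode)
}
```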
diff --git a/vendor/github.com/hashicorp/go-cleanhttp/cleanhttp.go b/vendor/github.com/hashicorp/go-cleanhttp/cleanhttp.go new file mode 100644 index 0000000..7d8a57c --- /dev/null +++ b/vendor/github.com/hashicorp/go-cleanhttp/cleanhttp.go | |||
@@ -0,0 +1,56 @@ | |||
1 | package cleanhttp | ||
2 | |||
3 | import ( | ||
4 | "net" | ||
5 | "net/http" | ||
6 | "runtime" | ||
7 | "time" | ||
8 | ) | ||
9 | |||
10 | // DefaultTransport returns a new http.Transport with similar default values to | ||
11 | // http.DefaultTransport, but with idle connections and keepalives disabled. | ||
12 | func DefaultTransport() *http.Transport { | ||
13 | transport := DefaultPooledTransport() | ||
14 | transport.DisableKeepAlives = true | ||
15 | transport.MaxIdleConnsPerHost = -1 | ||
16 | return transport | ||
17 | } | ||
18 | |||
19 | // DefaultPooledTransport returns a new http.Transport with similar default | ||
20 | // values to http.DefaultTransport. Do not use this for transient transports as | ||
21 | // it can leak file descriptors over time. Only use this for transports that | ||
22 | // will be re-used for the same host(s). | ||
23 | func DefaultPooledTransport() *http.Transport { | ||
24 | transport := &http.Transport{ | ||
25 | Proxy: http.ProxyFromEnvironment, | ||
26 | DialContext: (&net.Dialer{ | ||
27 | Timeout: 30 * time.Second, | ||
28 | KeepAlive: 30 * time.Second, | ||
29 | }).DialContext, | ||
30 | MaxIdleConns: 100, | ||
31 | IdleConnTimeout: 90 * time.Second, | ||
32 | TLSHandshakeTimeout: 10 * time.Second, | ||
33 | ExpectContinueTimeout: 1 * time.Second, | ||
34 | MaxIdleConnsPerHost: runtime.GOMAXPROCS(0) + 1, | ||
35 | } | ||
36 | return transport | ||
37 | } | ||
38 | |||
39 | // DefaultClient returns a new http.Client with similar default values to | ||
40 | // http.Client, but with a non-shared Transport, idle connections disabled, and | ||
41 | // keepalives disabled. | ||
42 | func DefaultClient() *http.Client { | ||
43 | return &http.Client{ | ||
44 | Transport: DefaultTransport(), | ||
45 | } | ||
46 | } | ||
47 | |||
48 | // DefaultPooledClient returns a new http.Client with similar default values to | ||
49 | // http.Client, but with a shared Transport. Do not use this function for | ||
50 | // transient clients as it can leak file descriptors over time. Only use this | ||
51 | // for clients that will be re-used for the same host(s). | ||
52 | func DefaultPooledClient() *http.Client { | ||
53 | return &http.Client{ | ||
54 | Transport: DefaultPooledTransport(), | ||
55 | } | ||
56 | } | ||

diff --git a/vendor/github.com/hashicorp/go-cleanhttp/doc.go b/vendor/github.com/hashicorp/go-cleanhttp/doc.go new file mode 100644 index 0000000..0584109 --- /dev/null +++ b/vendor/github.com/hashicorp/go-cleanhttp/doc.go | |||
@@ -0,0 +1,20 @@ | |||
1 | // Package cleanhttp offers convenience utilities for acquiring "clean" | ||
2 | // http.Transport and http.Client structs. | ||
3 | // | ||
4 | // Values set on http.DefaultClient and http.DefaultTransport affect all | ||
5 | // callers. This can have detrimental effects, especially in TLS contexts, | ||
6 | // where client or root certificates set to talk to multiple endpoints can end | ||
7 | // up displacing each other, leading to hard-to-debug issues. This package | ||
8 | // provides non-shared http.Client and http.Transport structs to ensure that | ||
9 | // the configuration will not be overwritten by other parts of the application | ||
10 | // or dependencies. | ||
11 | // | ||
12 | // The DefaultClient and DefaultTransport functions disable idle connections | ||
13 | // and keepalives. Without ensuring that idle connections are closed before | ||
14 | // garbage collection, short-term clients/transports can leak file descriptors, | ||
15 | // eventually leading to "too many open files" errors. If you will be | ||
16 | // connecting to the same hosts repeatedly from the same client, you can use | ||
17 | // DefaultPooledClient to receive a client that has connection pooling | ||
18 | // semantics similar to http.DefaultClient. | ||
19 | // | ||
20 | package cleanhttp | ||
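Note: a short sketch of choosing between the two constructors described above; `newAPIClient` is a hypothetical helper, not part of the vendored package.

```go
package httpexample

import (
	"net/http"

	"github.com/hashicorp/go-cleanhttp"
)

// newAPIClient is a hypothetical helper: it returns a pooled client for
// long-lived use against the same host(s), or a keepalive-free client for
// one-off requests, following the guidance in the package documentation.
func newAPIClient(longLived bool) *http.Client {
	if longLived {
		// Connection pooling similar to http.DefaultClient, but not shared.
		return cleanhttp.DefaultPooledClient()
	}
	// Idle connections and keepalives disabled; suited to transient use.
	return cleanhttp.DefaultClient()
}
```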
diff --git a/vendor/github.com/hashicorp/terraform/config/config.go b/vendor/github.com/hashicorp/terraform/config/config.go index a157824..3f756dc 100644 --- a/vendor/github.com/hashicorp/terraform/config/config.go +++ b/vendor/github.com/hashicorp/terraform/config/config.go | |||
@@ -12,6 +12,7 @@ import ( | |||
12 | "github.com/hashicorp/hil" | 12 | "github.com/hashicorp/hil" |
13 | "github.com/hashicorp/hil/ast" | 13 | "github.com/hashicorp/hil/ast" |
14 | "github.com/hashicorp/terraform/helper/hilmapstructure" | 14 | "github.com/hashicorp/terraform/helper/hilmapstructure" |
15 | "github.com/hashicorp/terraform/plugin/discovery" | ||
15 | "github.com/mitchellh/reflectwalk" | 16 | "github.com/mitchellh/reflectwalk" |
16 | ) | 17 | ) |
17 | 18 | ||
@@ -64,6 +65,7 @@ type Module struct { | |||
64 | type ProviderConfig struct { | 65 | type ProviderConfig struct { |
65 | Name string | 66 | Name string |
66 | Alias string | 67 | Alias string |
68 | Version string | ||
67 | RawConfig *RawConfig | 69 | RawConfig *RawConfig |
68 | } | 70 | } |
69 | 71 | ||
@@ -238,6 +240,33 @@ func (r *Resource) Id() string { | |||
238 | } | 240 | } |
239 | } | 241 | } |
240 | 242 | ||
243 | // ProviderFullName returns the full name of the provider for this resource, | ||
244 | // which may either be specified explicitly using the "provider" meta-argument | ||
245 | // or implied by the prefix on the resource type name. | ||
246 | func (r *Resource) ProviderFullName() string { | ||
247 | return ResourceProviderFullName(r.Type, r.Provider) | ||
248 | } | ||
249 | |||
250 | // ResourceProviderFullName returns the full (dependable) name of the | ||
251 | // provider for a hypothetical resource with the given resource type and | ||
252 | // explicit provider string. If the explicit provider string is empty then | ||
253 | // the provider name is inferred from the resource type name. | ||
254 | func ResourceProviderFullName(resourceType, explicitProvider string) string { | ||
255 | if explicitProvider != "" { | ||
256 | return explicitProvider | ||
257 | } | ||
258 | |||
259 | idx := strings.IndexRune(resourceType, '_') | ||
260 | if idx == -1 { | ||
261 | // If no underscores, the resource name is assumed to be | ||
262 | // also the provider name, e.g. if the provider exposes | ||
263 | // only a single resource of each type. | ||
264 | return resourceType | ||
265 | } | ||
266 | |||
267 | return resourceType[:idx] | ||
268 | } | ||
269 | |||
241 | // Validate does some basic semantic checking of the configuration. | 270 | // Validate does some basic semantic checking of the configuration. |
242 | func (c *Config) Validate() error { | 271 | func (c *Config) Validate() error { |
243 | if c == nil { | 272 | if c == nil { |
@@ -349,7 +378,8 @@ func (c *Config) Validate() error { | |||
349 | } | 378 | } |
350 | } | 379 | } |
351 | 380 | ||
352 | // Check that providers aren't declared multiple times. | 381 | // Check that providers aren't declared multiple times and that their |
382 | // version constraints, where present, are syntactically valid. | ||
353 | providerSet := make(map[string]struct{}) | 383 | providerSet := make(map[string]struct{}) |
354 | for _, p := range c.ProviderConfigs { | 384 | for _, p := range c.ProviderConfigs { |
355 | name := p.FullName() | 385 | name := p.FullName() |
@@ -360,6 +390,16 @@ func (c *Config) Validate() error { | |||
360 | continue | 390 | continue |
361 | } | 391 | } |
362 | 392 | ||
393 | if p.Version != "" { | ||
394 | _, err := discovery.ConstraintStr(p.Version).Parse() | ||
395 | if err != nil { | ||
396 | errs = append(errs, fmt.Errorf( | ||
397 | "provider.%s: invalid version constraint %q: %s", | ||
398 | name, p.Version, err, | ||
399 | )) | ||
400 | } | ||
401 | } | ||
402 | |||
363 | providerSet[name] = struct{}{} | 403 | providerSet[name] = struct{}{} |
364 | } | 404 | } |
365 | 405 | ||
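Note: the new ResourceProviderFullName helper infers the provider name from the prefix of the resource type when no explicit provider is set. A standalone sketch of that rule (stdlib only; the resource names are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// providerFullName mirrors the inference rule added in config.go: an explicit
// provider name wins; otherwise the text before the first underscore in the
// resource type is treated as the provider name.
func providerFullName(resourceType, explicitProvider string) string {
	if explicitProvider != "" {
		return explicitProvider
	}
	if idx := strings.IndexRune(resourceType, '_'); idx != -1 {
		return resourceType[:idx]
	}
	return resourceType
}

func main() {
	fmt.Println(providerFullName("statuscake_test", ""))         // "statuscake"
	fmt.Println(providerFullName("statuscake_test", "aws.east")) // "aws.east"
}
```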
diff --git a/vendor/github.com/hashicorp/terraform/config/interpolate_funcs.go b/vendor/github.com/hashicorp/terraform/config/interpolate_funcs.go index 7b7b3f2..a298cf2 100644 --- a/vendor/github.com/hashicorp/terraform/config/interpolate_funcs.go +++ b/vendor/github.com/hashicorp/terraform/config/interpolate_funcs.go | |||
@@ -70,6 +70,7 @@ func Funcs() map[string]ast.Function { | |||
70 | "coalescelist": interpolationFuncCoalesceList(), | 70 | "coalescelist": interpolationFuncCoalesceList(), |
71 | "compact": interpolationFuncCompact(), | 71 | "compact": interpolationFuncCompact(), |
72 | "concat": interpolationFuncConcat(), | 72 | "concat": interpolationFuncConcat(), |
73 | "contains": interpolationFuncContains(), | ||
73 | "dirname": interpolationFuncDirname(), | 74 | "dirname": interpolationFuncDirname(), |
74 | "distinct": interpolationFuncDistinct(), | 75 | "distinct": interpolationFuncDistinct(), |
75 | "element": interpolationFuncElement(), | 76 | "element": interpolationFuncElement(), |
@@ -356,6 +357,22 @@ func interpolationFuncCoalesceList() ast.Function { | |||
356 | } | 357 | } |
357 | } | 358 | } |
358 | 359 | ||
360 | // interpolationFuncContains returns true if an element is in the list | ||
361 | // and false otherwise. | ||
362 | func interpolationFuncContains() ast.Function { | ||
363 | return ast.Function{ | ||
364 | ArgTypes: []ast.Type{ast.TypeList, ast.TypeString}, | ||
365 | ReturnType: ast.TypeBool, | ||
366 | Callback: func(args []interface{}) (interface{}, error) { | ||
367 | _, err := interpolationFuncIndex().Callback(args) | ||
368 | if err != nil { | ||
369 | return false, nil | ||
370 | } | ||
371 | return true, nil | ||
372 | }, | ||
373 | } | ||
374 | } | ||
375 | |||
359 | // interpolationFuncConcat implements the "concat" function that concatenates | 376 | // interpolationFuncConcat implements the "concat" function that concatenates |
360 | // multiple lists. | 377 | // multiple lists. |
361 | func interpolationFuncConcat() ast.Function { | 378 | func interpolationFuncConcat() ast.Function { |
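Note: a plain-Go sketch of the semantics of the new `contains` interpolation function (the HIL wiring itself is above); the list values are illustrative.

```go
package main

import "fmt"

// containsString expresses what the new "contains" function computes:
// true when the element appears in the list, false otherwise.
func containsString(list []string, elem string) bool {
	for _, v := range list {
		if v == elem {
			return true
		}
	}
	return false
}

func main() {
	azs := []string{"eu-west-1a", "eu-west-1b"}
	fmt.Println(containsString(azs, "eu-west-1a")) // true
	fmt.Println(containsString(azs, "us-east-1a")) // false
}
```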
diff --git a/vendor/github.com/hashicorp/terraform/config/loader.go b/vendor/github.com/hashicorp/terraform/config/loader.go index 0bfa89c..5dd7d46 100644 --- a/vendor/github.com/hashicorp/terraform/config/loader.go +++ b/vendor/github.com/hashicorp/terraform/config/loader.go | |||
@@ -194,7 +194,7 @@ func dirFiles(dir string) ([]string, []string, error) { | |||
194 | // Only care about files that are valid to load | 194 | // Only care about files that are valid to load |
195 | name := fi.Name() | 195 | name := fi.Name() |
196 | extValue := ext(name) | 196 | extValue := ext(name) |
197 | if extValue == "" || isIgnoredFile(name) { | 197 | if extValue == "" || IsIgnoredFile(name) { |
198 | continue | 198 | continue |
199 | } | 199 | } |
200 | 200 | ||
@@ -215,9 +215,9 @@ func dirFiles(dir string) ([]string, []string, error) { | |||
215 | return files, overrides, nil | 215 | return files, overrides, nil |
216 | } | 216 | } |
217 | 217 | ||
218 | // isIgnoredFile returns true or false depending on whether the | 218 | // IsIgnoredFile returns true or false depending on whether the |
219 | // provided file name is a file that should be ignored. | 219 | // provided file name is a file that should be ignored. |
220 | func isIgnoredFile(name string) bool { | 220 | func IsIgnoredFile(name string) bool { |
221 | return strings.HasPrefix(name, ".") || // Unix-like hidden files | 221 | return strings.HasPrefix(name, ".") || // Unix-like hidden files |
222 | strings.HasSuffix(name, "~") || // vim | 222 | strings.HasSuffix(name, "~") || // vim |
223 | strings.HasPrefix(name, "#") && strings.HasSuffix(name, "#") // emacs | 223 | strings.HasPrefix(name, "#") && strings.HasSuffix(name, "#") // emacs |
diff --git a/vendor/github.com/hashicorp/terraform/config/loader_hcl.go b/vendor/github.com/hashicorp/terraform/config/loader_hcl.go index 9abb196..e85e493 100644 --- a/vendor/github.com/hashicorp/terraform/config/loader_hcl.go +++ b/vendor/github.com/hashicorp/terraform/config/loader_hcl.go | |||
@@ -17,6 +17,20 @@ type hclConfigurable struct { | |||
17 | Root *ast.File | 17 | Root *ast.File |
18 | } | 18 | } |
19 | 19 | ||
20 | var ReservedResourceFields = []string{ | ||
21 | "connection", | ||
22 | "count", | ||
23 | "depends_on", | ||
24 | "lifecycle", | ||
25 | "provider", | ||
26 | "provisioner", | ||
27 | } | ||
28 | |||
29 | var ReservedProviderFields = []string{ | ||
30 | "alias", | ||
31 | "version", | ||
32 | } | ||
33 | |||
20 | func (t *hclConfigurable) Config() (*Config, error) { | 34 | func (t *hclConfigurable) Config() (*Config, error) { |
21 | validKeys := map[string]struct{}{ | 35 | validKeys := map[string]struct{}{ |
22 | "atlas": struct{}{}, | 36 | "atlas": struct{}{}, |
@@ -562,6 +576,7 @@ func loadProvidersHcl(list *ast.ObjectList) ([]*ProviderConfig, error) { | |||
562 | } | 576 | } |
563 | 577 | ||
564 | delete(config, "alias") | 578 | delete(config, "alias") |
579 | delete(config, "version") | ||
565 | 580 | ||
566 | rawConfig, err := NewRawConfig(config) | 581 | rawConfig, err := NewRawConfig(config) |
567 | if err != nil { | 582 | if err != nil { |
@@ -583,9 +598,22 @@ func loadProvidersHcl(list *ast.ObjectList) ([]*ProviderConfig, error) { | |||
583 | } | 598 | } |
584 | } | 599 | } |
585 | 600 | ||
601 | // If we have a version field then extract it | ||
602 | var version string | ||
603 | if a := listVal.Filter("version"); len(a.Items) > 0 { | ||
604 | err := hcl.DecodeObject(&version, a.Items[0].Val) | ||
605 | if err != nil { | ||
606 | return nil, fmt.Errorf( | ||
607 | "Error reading version for provider[%s]: %s", | ||
608 | n, | ||
609 | err) | ||
610 | } | ||
611 | } | ||
612 | |||
586 | result = append(result, &ProviderConfig{ | 613 | result = append(result, &ProviderConfig{ |
587 | Name: n, | 614 | Name: n, |
588 | Alias: alias, | 615 | Alias: alias, |
616 | Version: version, | ||
589 | RawConfig: rawConfig, | 617 | RawConfig: rawConfig, |
590 | }) | 618 | }) |
591 | } | 619 | } |
diff --git a/vendor/github.com/hashicorp/terraform/config/module/tree.go b/vendor/github.com/hashicorp/terraform/config/module/tree.go index b6f90fd..4b0b153 100644 --- a/vendor/github.com/hashicorp/terraform/config/module/tree.go +++ b/vendor/github.com/hashicorp/terraform/config/module/tree.go | |||
@@ -92,6 +92,25 @@ func (t *Tree) Children() map[string]*Tree { | |||
92 | return t.children | 92 | return t.children |
93 | } | 93 | } |
94 | 94 | ||
95 | // DeepEach calls the provided callback for the receiver and then all of | ||
96 | // its descendents in the tree, allowing an operation to be performed on | ||
97 | // all modules in the tree. | ||
98 | // | ||
99 | // Parents will be visited before their children but otherwise the order is | ||
100 | // not defined. | ||
101 | func (t *Tree) DeepEach(cb func(*Tree)) { | ||
102 | t.lock.RLock() | ||
103 | defer t.lock.RUnlock() | ||
104 | t.deepEach(cb) | ||
105 | } | ||
106 | |||
107 | func (t *Tree) deepEach(cb func(*Tree)) { | ||
108 | cb(t) | ||
109 | for _, c := range t.children { | ||
110 | c.deepEach(cb) | ||
111 | } | ||
112 | } | ||
113 | |||
95 | // Loaded says whether or not this tree has been loaded or not yet. | 114 | // Loaded says whether or not this tree has been loaded or not yet. |
96 | func (t *Tree) Loaded() bool { | 115 | func (t *Tree) Loaded() bool { |
97 | t.lock.RLock() | 116 | t.lock.RLock() |
diff --git a/vendor/github.com/hashicorp/terraform/config/providers.go b/vendor/github.com/hashicorp/terraform/config/providers.go new file mode 100644 index 0000000..7a50782 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/config/providers.go | |||
@@ -0,0 +1,103 @@ | |||
1 | package config | ||
2 | |||
3 | import "github.com/blang/semver" | ||
4 | |||
5 | // ProviderVersionConstraint presents a constraint for a particular | ||
6 | // provider, identified by its full name. | ||
7 | type ProviderVersionConstraint struct { | ||
8 | Constraint string | ||
9 | ProviderType string | ||
10 | } | ||
11 | |||
12 | // ProviderVersionConstraints is a map from provider full name to its associated | ||
13 | // ProviderVersionConstraint, as produced by Config.RequiredProviders. | ||
14 | type ProviderVersionConstraints map[string]ProviderVersionConstraint | ||
15 | |||
16 | // RequiredProviders returns the ProviderVersionConstraints for this | ||
17 | // module. | ||
18 | // | ||
19 | // This includes both providers that are explicitly requested by provider | ||
20 | // blocks and those that are used implicitly by instantiating one of their | ||
21 | // resource types. In the latter case, the returned semver Range will | ||
22 | // accept any version of the provider. | ||
23 | func (c *Config) RequiredProviders() ProviderVersionConstraints { | ||
24 | ret := make(ProviderVersionConstraints, len(c.ProviderConfigs)) | ||
25 | |||
26 | configs := c.ProviderConfigsByFullName() | ||
27 | |||
28 | // In order to find the *implied* dependencies (those without explicit | ||
29 | // "provider" blocks) we need to walk over all of the resources and | ||
30 | // cross-reference with the provider configs. | ||
31 | for _, rc := range c.Resources { | ||
32 | providerName := rc.ProviderFullName() | ||
33 | var providerType string | ||
34 | |||
35 | // Default to (effectively) no constraint whatsoever, but we might | ||
36 | // override if there's an explicit constraint in config. | ||
37 | constraint := ">=0.0.0" | ||
38 | |||
39 | config, ok := configs[providerName] | ||
40 | if ok { | ||
41 | if config.Version != "" { | ||
42 | constraint = config.Version | ||
43 | } | ||
44 | providerType = config.Name | ||
45 | } else { | ||
46 | providerType = providerName | ||
47 | } | ||
48 | |||
49 | ret[providerName] = ProviderVersionConstraint{ | ||
50 | ProviderType: providerType, | ||
51 | Constraint: constraint, | ||
52 | } | ||
53 | } | ||
54 | |||
55 | return ret | ||
56 | } | ||
57 | |||
58 | // RequiredRanges returns a semver.Range for each distinct provider type in | ||
59 | // the constraint map. If the same provider type appears more than once | ||
60 | // (e.g. because aliases are in use) then their respective constraints are | ||
61 | // combined such that they must *all* apply. | ||
62 | // | ||
63 | // The result of this method can be passed to the | ||
64 | // PluginMetaSet.ConstrainVersions method within the plugin/discovery | ||
65 | // package in order to filter down the available plugins to those which | ||
66 | // satisfy the given constraints. | ||
67 | // | ||
68 | // This function will panic if any of the constraints within cannot be | ||
69 | // parsed as semver ranges. This is guaranteed to never happen for a | ||
70 | // constraint set that was built from a configuration that passed validation. | ||
71 | func (cons ProviderVersionConstraints) RequiredRanges() map[string]semver.Range { | ||
72 | ret := make(map[string]semver.Range, len(cons)) | ||
73 | |||
74 | for _, con := range cons { | ||
75 | spec := semver.MustParseRange(con.Constraint) | ||
76 | if existing, exists := ret[con.ProviderType]; exists { | ||
77 | ret[con.ProviderType] = existing.AND(spec) | ||
78 | } else { | ||
79 | ret[con.ProviderType] = spec | ||
80 | } | ||
81 | } | ||
82 | |||
83 | return ret | ||
84 | } | ||
85 | |||
86 | // ProviderConfigsByFullName returns a map from provider full names (as | ||
87 | // returned by ProviderConfig.FullName()) to the corresponding provider | ||
88 | // configs. | ||
89 | // | ||
90 | // This function returns no new information than what's already in | ||
91 | // c.ProviderConfigs, but returns it in a more convenient shape. If there | ||
92 | // is more than one provider config with the same full name then the result | ||
93 | // is undefined, but that is guaranteed not to happen for any config that | ||
94 | // has passed validation. | ||
95 | func (c *Config) ProviderConfigsByFullName() map[string]*ProviderConfig { | ||
96 | ret := make(map[string]*ProviderConfig, len(c.ProviderConfigs)) | ||
97 | |||
98 | for _, pc := range c.ProviderConfigs { | ||
99 | ret[pc.FullName()] = pc | ||
100 | } | ||
101 | |||
102 | return ret | ||
103 | } | ||
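Note: RequiredRanges combines constraints for the same provider type with Range.AND. A minimal sketch using blang/semver directly to show how two constraints intersect (the version strings are illustrative):

```go
package main

import (
	"fmt"

	"github.com/blang/semver"
)

func main() {
	// Two constraints for the same provider type (e.g. from two aliased
	// provider blocks) are combined so that both must hold, mirroring
	// RequiredRanges' use of Range.AND.
	a := semver.MustParseRange(">=1.2.0")
	b := semver.MustParseRange("<2.0.0")
	combined := a.AND(b)

	fmt.Println(combined(semver.MustParse("1.5.0"))) // true: satisfies both
	fmt.Println(combined(semver.MustParse("2.1.0"))) // false: fails the upper bound
}
```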
diff --git a/vendor/github.com/hashicorp/terraform/flatmap/expand.go b/vendor/github.com/hashicorp/terraform/flatmap/expand.go index e0b81b6..1449065 100644 --- a/vendor/github.com/hashicorp/terraform/flatmap/expand.go +++ b/vendor/github.com/hashicorp/terraform/flatmap/expand.go | |||
@@ -60,6 +60,11 @@ func expandArray(m map[string]string, prefix string) []interface{} { | |||
60 | return []interface{}{} | 60 | return []interface{}{} |
61 | } | 61 | } |
62 | 62 | ||
63 | // NOTE: "num" is not necessarily accurate, e.g. if a user tampers | ||
64 | // with state, so the following code should not crash when given a | ||
65 | // number of items more or less than what's given in num. The | ||
66 | // num key is mainly just a hint that this is a list or set. | ||
67 | |||
63 | // The Schema "Set" type stores its values in an array format, but | 68 | // The Schema "Set" type stores its values in an array format, but |
64 | // using numeric hash values instead of ordinal keys. Take the set | 69 | // using numeric hash values instead of ordinal keys. Take the set |
65 | // of keys regardless of value, and expand them in numeric order. | 70 | // of keys regardless of value, and expand them in numeric order. |
@@ -101,7 +106,7 @@ func expandArray(m map[string]string, prefix string) []interface{} { | |||
101 | } | 106 | } |
102 | sort.Ints(keysList) | 107 | sort.Ints(keysList) |
103 | 108 | ||
104 | result := make([]interface{}, num) | 109 | result := make([]interface{}, len(keysList)) |
105 | for i, key := range keysList { | 110 | for i, key := range keysList { |
106 | keyString := strconv.Itoa(key) | 111 | keyString := strconv.Itoa(key) |
107 | if computed[keyString] { | 112 | if computed[keyString] { |
diff --git a/vendor/github.com/hashicorp/terraform/helper/resource/id.go b/vendor/github.com/hashicorp/terraform/helper/resource/id.go index 629582b..1cde67c 100644 --- a/vendor/github.com/hashicorp/terraform/helper/resource/id.go +++ b/vendor/github.com/hashicorp/terraform/helper/resource/id.go | |||
@@ -1,21 +1,17 @@ | |||
1 | package resource | 1 | package resource |
2 | 2 | ||
3 | import ( | 3 | import ( |
4 | "crypto/rand" | ||
5 | "fmt" | 4 | "fmt" |
6 | "math/big" | 5 | "strings" |
7 | "sync" | 6 | "sync" |
7 | "time" | ||
8 | ) | 8 | ) |
9 | 9 | ||
10 | const UniqueIdPrefix = `terraform-` | 10 | const UniqueIdPrefix = `terraform-` |
11 | 11 | ||
12 | // idCounter is a randomly seeded monotonic counter for generating ordered | 12 | // idCounter is a monotonic counter for generating ordered unique ids. |
13 | // unique ids. It uses a big.Int so we can easily increment a long numeric | ||
14 | // string. The max possible hex value here with 12 random bytes is | ||
15 | // "01000000000000000000000000", so there's no chance of rollover during | ||
16 | // operation. | ||
17 | var idMutex sync.Mutex | 13 | var idMutex sync.Mutex |
18 | var idCounter = big.NewInt(0).SetBytes(randomBytes(12)) | 14 | var idCounter uint32 |
19 | 15 | ||
20 | // Helper for a resource to generate a unique identifier w/ default prefix | 16 | // Helper for a resource to generate a unique identifier w/ default prefix |
21 | func UniqueId() string { | 17 | func UniqueId() string { |
@@ -25,15 +21,20 @@ func UniqueId() string { | |||
25 | // Helper for a resource to generate a unique identifier w/ given prefix | 21 | // Helper for a resource to generate a unique identifier w/ given prefix |
26 | // | 22 | // |
27 | // After the prefix, the ID consists of an incrementing 26 digit value (to match | 23 | // After the prefix, the ID consists of an incrementing 26 digit value (to match |
28 | // previous timestamp output). | 24 | // previous timestamp output). After the prefix, the ID consists of a timestamp |
25 | // and an incrementing 8 hex digit value. The timestamp means that multiple IDs | ||
26 | // created with the same prefix will sort in the order of their creation, even | ||
27 | // across multiple terraform executions, as long as the clock is not turned back | ||
28 | // between calls, and as long as any given terraform execution generates fewer | ||
29 | // than 4 billion IDs. | ||
29 | func PrefixedUniqueId(prefix string) string { | 30 | func PrefixedUniqueId(prefix string) string { |
31 | // Be precise to 4 digits of fractional seconds, but remove the dot before the | ||
32 | // fractional seconds. | ||
33 | timestamp := strings.Replace( | ||
34 | time.Now().UTC().Format("20060102150405.0000"), ".", "", 1) | ||
35 | |||
30 | idMutex.Lock() | 36 | idMutex.Lock() |
31 | defer idMutex.Unlock() | 37 | defer idMutex.Unlock() |
32 | return fmt.Sprintf("%s%026x", prefix, idCounter.Add(idCounter, big.NewInt(1))) | 38 | idCounter++ |
33 | } | 39 | return fmt.Sprintf("%s%s%08x", prefix, timestamp, idCounter) |
34 | |||
35 | func randomBytes(n int) []byte { | ||
36 | b := make([]byte, n) | ||
37 | rand.Read(b) | ||
38 | return b | ||
39 | } | 40 | } |
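Note: the reworked PrefixedUniqueId builds IDs from a UTC timestamp (dot removed) plus an 8-hex-digit counter. A standalone sketch of the same formatting, with an illustrative output value:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
	"time"
)

var (
	counterMu sync.Mutex
	counter   uint32
)

// prefixedUniqueID mirrors the new id.go formatting: prefix, then a UTC
// timestamp precise to 4 fractional digits with the dot removed, then an
// incrementing 8 hex digit counter.
func prefixedUniqueID(prefix string) string {
	timestamp := strings.Replace(
		time.Now().UTC().Format("20060102150405.0000"), ".", "", 1)

	counterMu.Lock()
	defer counterMu.Unlock()
	counter++
	return fmt.Sprintf("%s%s%08x", prefix, timestamp, counter)
}

func main() {
	fmt.Println(prefixedUniqueID("terraform-")) // e.g. terraform-20170814120000000000000001
}
```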
diff --git a/vendor/github.com/hashicorp/terraform/helper/resource/testing.go b/vendor/github.com/hashicorp/terraform/helper/resource/testing.go index ebdbde2..d7de1a0 100644 --- a/vendor/github.com/hashicorp/terraform/helper/resource/testing.go +++ b/vendor/github.com/hashicorp/terraform/helper/resource/testing.go | |||
@@ -383,11 +383,11 @@ func Test(t TestT, c TestCase) { | |||
383 | c.PreCheck() | 383 | c.PreCheck() |
384 | } | 384 | } |
385 | 385 | ||
386 | ctxProviders, err := testProviderFactories(c) | 386 | providerResolver, err := testProviderResolver(c) |
387 | if err != nil { | 387 | if err != nil { |
388 | t.Fatal(err) | 388 | t.Fatal(err) |
389 | } | 389 | } |
390 | opts := terraform.ContextOpts{Providers: ctxProviders} | 390 | opts := terraform.ContextOpts{ProviderResolver: providerResolver} |
391 | 391 | ||
392 | // A single state variable to track the lifecycle, starting with no state | 392 | // A single state variable to track the lifecycle, starting with no state |
393 | var state *terraform.State | 393 | var state *terraform.State |
@@ -400,15 +400,22 @@ func Test(t TestT, c TestCase) { | |||
400 | var err error | 400 | var err error |
401 | log.Printf("[WARN] Test: Executing step %d", i) | 401 | log.Printf("[WARN] Test: Executing step %d", i) |
402 | 402 | ||
403 | // Determine the test mode to execute | 403 | if step.Config == "" && !step.ImportState { |
404 | if step.Config != "" { | ||
405 | state, err = testStepConfig(opts, state, step) | ||
406 | } else if step.ImportState { | ||
407 | state, err = testStepImportState(opts, state, step) | ||
408 | } else { | ||
409 | err = fmt.Errorf( | 404 | err = fmt.Errorf( |
410 | "unknown test mode for step. Please see TestStep docs\n\n%#v", | 405 | "unknown test mode for step. Please see TestStep docs\n\n%#v", |
411 | step) | 406 | step) |
407 | } else { | ||
408 | if step.ImportState { | ||
409 | if step.Config == "" { | ||
410 | step.Config = testProviderConfig(c) | ||
411 | } | ||
412 | |||
413 | // Can optionally set step.Config in addition to | ||
414 | // step.ImportState, to provide config for the import. | ||
415 | state, err = testStepImportState(opts, state, step) | ||
416 | } else { | ||
417 | state, err = testStepConfig(opts, state, step) | ||
418 | } | ||
412 | } | 419 | } |
413 | 420 | ||
414 | // If there was an error, exit | 421 | // If there was an error, exit |
@@ -496,16 +503,29 @@ func Test(t TestT, c TestCase) { | |||
496 | } | 503 | } |
497 | } | 504 | } |
498 | 505 | ||
499 | // testProviderFactories is a helper to build the ResourceProviderFactory map | 506 | // testProviderConfig takes the list of Providers in a TestCase and returns a |
507 | // config with only empty provider blocks. This is useful for Import, where no | ||
508 | // config is provided, but the providers must be defined. | ||
509 | func testProviderConfig(c TestCase) string { | ||
510 | var lines []string | ||
511 | for p := range c.Providers { | ||
512 | lines = append(lines, fmt.Sprintf("provider %q {}\n", p)) | ||
513 | } | ||
514 | |||
515 | return strings.Join(lines, "") | ||
516 | } | ||
517 | |||
518 | // testProviderResolver is a helper to build a ResourceProviderResolver | ||
500 | // with pre instantiated ResourceProviders, so that we can reset them for the | 519 | // with pre instantiated ResourceProviders, so that we can reset them for the |
501 | // test, while only calling the factory function once. | 520 | // test, while only calling the factory function once. |
502 | // Any errors are stored so that they can be returned by the factory in | 521 | // Any errors are stored so that they can be returned by the factory in |
503 | // terraform to match non-test behavior. | 522 | // terraform to match non-test behavior. |
504 | func testProviderFactories(c TestCase) (map[string]terraform.ResourceProviderFactory, error) { | 523 | func testProviderResolver(c TestCase) (terraform.ResourceProviderResolver, error) { |
505 | ctxProviders := c.ProviderFactories // make(map[string]terraform.ResourceProviderFactory) | 524 | ctxProviders := c.ProviderFactories |
506 | if ctxProviders == nil { | 525 | if ctxProviders == nil { |
507 | ctxProviders = make(map[string]terraform.ResourceProviderFactory) | 526 | ctxProviders = make(map[string]terraform.ResourceProviderFactory) |
508 | } | 527 | } |
528 | |||
509 | // add any fixed providers | 529 | // add any fixed providers |
510 | for k, p := range c.Providers { | 530 | for k, p := range c.Providers { |
511 | ctxProviders[k] = terraform.ResourceProviderFactoryFixed(p) | 531 | ctxProviders[k] = terraform.ResourceProviderFactoryFixed(p) |
@@ -527,7 +547,7 @@ func testProviderFactories(c TestCase) (map[string]terraform.ResourceProviderFac | |||
527 | } | 547 | } |
528 | } | 548 | } |
529 | 549 | ||
530 | return ctxProviders, nil | 550 | return terraform.ResourceProviderResolverFixed(ctxProviders), nil |
531 | } | 551 | } |
532 | 552 | ||
533 | // UnitTest is a helper to force the acceptance testing harness to run in the | 553 | // UnitTest is a helper to force the acceptance testing harness to run in the |
diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/provider.go b/vendor/github.com/hashicorp/terraform/helper/schema/provider.go index d52d2f5..fb28b41 100644 --- a/vendor/github.com/hashicorp/terraform/helper/schema/provider.go +++ b/vendor/github.com/hashicorp/terraform/helper/schema/provider.go | |||
@@ -8,6 +8,7 @@ import ( | |||
8 | "sync" | 8 | "sync" |
9 | 9 | ||
10 | "github.com/hashicorp/go-multierror" | 10 | "github.com/hashicorp/go-multierror" |
11 | "github.com/hashicorp/terraform/config" | ||
11 | "github.com/hashicorp/terraform/terraform" | 12 | "github.com/hashicorp/terraform/terraform" |
12 | ) | 13 | ) |
13 | 14 | ||
@@ -89,6 +90,13 @@ func (p *Provider) InternalValidate() error { | |||
89 | validationErrors = multierror.Append(validationErrors, err) | 90 | validationErrors = multierror.Append(validationErrors, err) |
90 | } | 91 | } |
91 | 92 | ||
93 | // Provider-specific checks | ||
94 | for k, _ := range sm { | ||
95 | if isReservedProviderFieldName(k) { | ||
96 | return fmt.Errorf("%s is a reserved field name for a provider", k) | ||
97 | } | ||
98 | } | ||
99 | |||
92 | for k, r := range p.ResourcesMap { | 100 | for k, r := range p.ResourcesMap { |
93 | if err := r.InternalValidate(nil, true); err != nil { | 101 | if err := r.InternalValidate(nil, true); err != nil { |
94 | validationErrors = multierror.Append(validationErrors, fmt.Errorf("resource %s: %s", k, err)) | 102 | validationErrors = multierror.Append(validationErrors, fmt.Errorf("resource %s: %s", k, err)) |
@@ -104,6 +112,15 @@ func (p *Provider) InternalValidate() error { | |||
104 | return validationErrors | 112 | return validationErrors |
105 | } | 113 | } |
106 | 114 | ||
115 | func isReservedProviderFieldName(name string) bool { | ||
116 | for _, reservedName := range config.ReservedProviderFields { | ||
117 | if name == reservedName { | ||
118 | return true | ||
119 | } | ||
120 | } | ||
121 | return false | ||
122 | } | ||
123 | |||
107 | // Meta returns the metadata associated with this provider that was | 124 | // Meta returns the metadata associated with this provider that was |
108 | // returned by the Configure call. It will be nil until Configure is called. | 125 | // returned by the Configure call. It will be nil until Configure is called. |
109 | func (p *Provider) Meta() interface{} { | 126 | func (p *Provider) Meta() interface{} { |
diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/provisioner.go b/vendor/github.com/hashicorp/terraform/helper/schema/provisioner.go index 856c675..476192e 100644 --- a/vendor/github.com/hashicorp/terraform/helper/schema/provisioner.go +++ b/vendor/github.com/hashicorp/terraform/helper/schema/provisioner.go | |||
@@ -43,7 +43,7 @@ type Provisioner struct { | |||
43 | 43 | ||
44 | // ValidateFunc is a function for extended validation. This is optional | 44 | // ValidateFunc is a function for extended validation. This is optional |
45 | // and should be used when individual field validation is not enough. | 45 | // and should be used when individual field validation is not enough. |
46 | ValidateFunc func(*ResourceData) ([]string, []error) | 46 | ValidateFunc func(*terraform.ResourceConfig) ([]string, []error) |
47 | 47 | ||
48 | stopCtx context.Context | 48 | stopCtx context.Context |
49 | stopCtxCancel context.CancelFunc | 49 | stopCtxCancel context.CancelFunc |
@@ -121,32 +121,6 @@ func (p *Provisioner) Stop() error { | |||
121 | return nil | 121 | return nil |
122 | } | 122 | } |
123 | 123 | ||
124 | func (p *Provisioner) Validate(config *terraform.ResourceConfig) ([]string, []error) { | ||
125 | if err := p.InternalValidate(); err != nil { | ||
126 | return nil, []error{fmt.Errorf( | ||
127 | "Internal validation of the provisioner failed! This is always a bug\n"+ | ||
128 | "with the provisioner itself, and not a user issue. Please report\n"+ | ||
129 | "this bug:\n\n%s", err)} | ||
130 | } | ||
131 | w := []string{} | ||
132 | e := []error{} | ||
133 | if p.Schema != nil { | ||
134 | w2, e2 := schemaMap(p.Schema).Validate(config) | ||
135 | w = append(w, w2...) | ||
136 | e = append(e, e2...) | ||
137 | } | ||
138 | if p.ValidateFunc != nil { | ||
139 | data := &ResourceData{ | ||
140 | schema: p.Schema, | ||
141 | config: config, | ||
142 | } | ||
143 | w2, e2 := p.ValidateFunc(data) | ||
144 | w = append(w, w2...) | ||
145 | e = append(e, e2...) | ||
146 | } | ||
147 | return w, e | ||
148 | } | ||
149 | |||
150 | // Apply implementation of terraform.ResourceProvisioner interface. | 124 | // Apply implementation of terraform.ResourceProvisioner interface. |
151 | func (p *Provisioner) Apply( | 125 | func (p *Provisioner) Apply( |
152 | o terraform.UIOutput, | 126 | o terraform.UIOutput, |
@@ -204,3 +178,27 @@ func (p *Provisioner) Apply( | |||
204 | ctx = context.WithValue(ctx, ProvRawStateKey, s) | 178 | ctx = context.WithValue(ctx, ProvRawStateKey, s) |
205 | return p.ApplyFunc(ctx) | 179 | return p.ApplyFunc(ctx) |
206 | } | 180 | } |
181 | |||
182 | // Validate implements the terraform.ResourceProvisioner interface. | ||
183 | func (p *Provisioner) Validate(c *terraform.ResourceConfig) (ws []string, es []error) { | ||
184 | if err := p.InternalValidate(); err != nil { | ||
185 | return nil, []error{fmt.Errorf( | ||
186 | "Internal validation of the provisioner failed! This is always a bug\n"+ | ||
187 | "with the provisioner itself, and not a user issue. Please report\n"+ | ||
188 | "this bug:\n\n%s", err)} | ||
189 | } | ||
190 | |||
191 | if p.Schema != nil { | ||
192 | w, e := schemaMap(p.Schema).Validate(c) | ||
193 | ws = append(ws, w...) | ||
194 | es = append(es, e...) | ||
195 | } | ||
196 | |||
197 | if p.ValidateFunc != nil { | ||
198 | w, e := p.ValidateFunc(c) | ||
199 | ws = append(ws, w...) | ||
200 | es = append(es, e...) | ||
201 | } | ||
202 | |||
203 | return ws, es | ||
204 | } | ||
diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/resource.go b/vendor/github.com/hashicorp/terraform/helper/schema/resource.go index c810558..ddba109 100644 --- a/vendor/github.com/hashicorp/terraform/helper/schema/resource.go +++ b/vendor/github.com/hashicorp/terraform/helper/schema/resource.go | |||
@@ -6,6 +6,7 @@ import ( | |||
6 | "log" | 6 | "log" |
7 | "strconv" | 7 | "strconv" |
8 | 8 | ||
9 | "github.com/hashicorp/terraform/config" | ||
9 | "github.com/hashicorp/terraform/terraform" | 10 | "github.com/hashicorp/terraform/terraform" |
10 | ) | 11 | ) |
11 | 12 | ||
@@ -142,6 +143,12 @@ func (r *Resource) Apply( | |||
142 | if err := rt.DiffDecode(d); err != nil { | 143 | if err := rt.DiffDecode(d); err != nil { |
143 | log.Printf("[ERR] Error decoding ResourceTimeout: %s", err) | 144 | log.Printf("[ERR] Error decoding ResourceTimeout: %s", err) |
144 | } | 145 | } |
146 | } else if s != nil { | ||
147 | if _, ok := s.Meta[TimeoutKey]; ok { | ||
148 | if err := rt.StateDecode(s); err != nil { | ||
149 | log.Printf("[ERR] Error decoding ResourceTimeout: %s", err) | ||
150 | } | ||
151 | } | ||
145 | } else { | 152 | } else { |
146 | log.Printf("[DEBUG] No meta timeoutkey found in Apply()") | 153 | log.Printf("[DEBUG] No meta timeoutkey found in Apply()") |
147 | } | 154 | } |
@@ -388,9 +395,25 @@ func (r *Resource) InternalValidate(topSchemaMap schemaMap, writable bool) error | |||
388 | } | 395 | } |
389 | } | 396 | } |
390 | 397 | ||
398 | // Resource-specific checks | ||
399 | for k, _ := range tsm { | ||
400 | if isReservedResourceFieldName(k) { | ||
401 | return fmt.Errorf("%s is a reserved field name for a resource", k) | ||
402 | } | ||
403 | } | ||
404 | |||
391 | return schemaMap(r.Schema).InternalValidate(tsm) | 405 | return schemaMap(r.Schema).InternalValidate(tsm) |
392 | } | 406 | } |
393 | 407 | ||
408 | func isReservedResourceFieldName(name string) bool { | ||
409 | for _, reservedName := range config.ReservedResourceFields { | ||
410 | if name == reservedName { | ||
411 | return true | ||
412 | } | ||
413 | } | ||
414 | return false | ||
415 | } | ||
416 | |||
394 | // Data returns a ResourceData struct for this Resource. Each return value | 417 | // Data returns a ResourceData struct for this Resource. Each return value |
395 | // is a separate copy and can be safely modified differently. | 418 | // is a separate copy and can be safely modified differently. |
396 | // | 419 | // |
diff --git a/vendor/github.com/hashicorp/terraform/helper/schema/schema.go b/vendor/github.com/hashicorp/terraform/helper/schema/schema.go index 632672a..acb5618 100644 --- a/vendor/github.com/hashicorp/terraform/helper/schema/schema.go +++ b/vendor/github.com/hashicorp/terraform/helper/schema/schema.go | |||
@@ -15,6 +15,7 @@ import ( | |||
15 | "fmt" | 15 | "fmt" |
16 | "os" | 16 | "os" |
17 | "reflect" | 17 | "reflect" |
18 | "regexp" | ||
18 | "sort" | 19 | "sort" |
19 | "strconv" | 20 | "strconv" |
20 | "strings" | 21 | "strings" |
@@ -661,7 +662,13 @@ func (m schemaMap) InternalValidate(topSchemaMap schemaMap) error { | |||
661 | if v.ValidateFunc != nil { | 662 | if v.ValidateFunc != nil { |
662 | switch v.Type { | 663 | switch v.Type { |
663 | case TypeList, TypeSet: | 664 | case TypeList, TypeSet: |
664 | return fmt.Errorf("ValidateFunc is not yet supported on lists or sets.") | 665 | return fmt.Errorf("%s: ValidateFunc is not yet supported on lists or sets.", k) |
666 | } | ||
667 | } | ||
668 | |||
669 | if v.Deprecated == "" && v.Removed == "" { | ||
670 | if !isValidFieldName(k) { | ||
671 | return fmt.Errorf("%s: Field name may only contain lowercase alphanumeric characters & underscores.", k) | ||
665 | } | 672 | } |
666 | } | 673 | } |
667 | } | 674 | } |
@@ -669,6 +676,11 @@ func (m schemaMap) InternalValidate(topSchemaMap schemaMap) error { | |||
669 | return nil | 676 | return nil |
670 | } | 677 | } |
671 | 678 | ||
679 | func isValidFieldName(name string) bool { | ||
680 | re := regexp.MustCompile("^[a-z0-9_]+$") | ||
681 | return re.MatchString(name) | ||
682 | } | ||
683 | |||
672 | func (m schemaMap) diff( | 684 | func (m schemaMap) diff( |
673 | k string, | 685 | k string, |
674 | schema *Schema, | 686 | schema *Schema, |
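Note: the new isValidFieldName check restricts schema field names to lowercase alphanumerics and underscores. A minimal sketch of the same regexp check (the field names are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// validFieldName applies the same pattern the schema validation now enforces:
// only lowercase letters, digits, and underscores are allowed in field names.
var validFieldName = regexp.MustCompile("^[a-z0-9_]+$")

func main() {
	fmt.Println(validFieldName.MatchString("contact_id")) // true
	fmt.Println(validFieldName.MatchString("ContactID"))  // false: uppercase
	fmt.Println(validFieldName.MatchString("check-rate")) // false: hyphen
}
```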
diff --git a/vendor/github.com/hashicorp/terraform/helper/shadow/closer.go b/vendor/github.com/hashicorp/terraform/helper/shadow/closer.go index 7edd5e7..edc1e2a 100644 --- a/vendor/github.com/hashicorp/terraform/helper/shadow/closer.go +++ b/vendor/github.com/hashicorp/terraform/helper/shadow/closer.go | |||
@@ -39,6 +39,8 @@ func (w *closeWalker) Struct(reflect.Value) error { | |||
39 | return nil | 39 | return nil |
40 | } | 40 | } |
41 | 41 | ||
42 | var closerType = reflect.TypeOf((*io.Closer)(nil)).Elem() | ||
43 | |||
42 | func (w *closeWalker) StructField(f reflect.StructField, v reflect.Value) error { | 44 | func (w *closeWalker) StructField(f reflect.StructField, v reflect.Value) error { |
43 | // Not sure why this would be but lets avoid some panics | 45 | // Not sure why this would be but lets avoid some panics |
44 | if !v.IsValid() { | 46 | if !v.IsValid() { |
@@ -56,17 +58,18 @@ func (w *closeWalker) StructField(f reflect.StructField, v reflect.Value) error | |||
56 | return nil | 58 | return nil |
57 | } | 59 | } |
58 | 60 | ||
59 | // We're looking for an io.Closer | 61 | var closer io.Closer |
60 | raw := v.Interface() | 62 | if v.Type().Implements(closerType) { |
61 | if raw == nil { | 63 | closer = v.Interface().(io.Closer) |
62 | return nil | 64 | } else if v.CanAddr() { |
65 | // The Close method may require a pointer receiver, but we only have a value. | ||
66 | v := v.Addr() | ||
67 | if v.Type().Implements(closerType) { | ||
68 | closer = v.Interface().(io.Closer) | ||
69 | } | ||
63 | } | 70 | } |
64 | 71 | ||
65 | closer, ok := raw.(io.Closer) | 72 | if closer == nil { |
66 | if !ok && v.CanAddr() { | ||
67 | closer, ok = v.Addr().Interface().(io.Closer) | ||
68 | } | ||
69 | if !ok { | ||
70 | return reflectwalk.SkipEntry | 73 | return reflectwalk.SkipEntry |
71 | } | 74 | } |
72 | 75 | ||
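Note: the revised closeWalker detects io.Closer implementations via reflection, falling back to the field's address because Close may be declared on a pointer receiver. A small, self-contained sketch of that detection pattern (the struct and field names are hypothetical):

```go
package main

import (
	"fmt"
	"io"
	"reflect"
)

var closerType = reflect.TypeOf((*io.Closer)(nil)).Elem()

// asCloser mirrors the detection in closer.go: use the value directly if it
// implements io.Closer, otherwise try its address, since Close may be
// defined on a pointer receiver.
func asCloser(v reflect.Value) (io.Closer, bool) {
	if v.Type().Implements(closerType) {
		return v.Interface().(io.Closer), true
	}
	if v.CanAddr() && v.Addr().Type().Implements(closerType) {
		return v.Addr().Interface().(io.Closer), true
	}
	return nil, false
}

type pipeHolder struct {
	R *io.PipeReader // *io.PipeReader implements io.Closer
}

func main() {
	r, _ := io.Pipe()
	h := pipeHolder{R: r}

	// Walk a pointer to the struct so the field is addressable, which is
	// what makes the pointer-receiver fallback possible.
	field := reflect.ValueOf(&h).Elem().Field(0)
	if c, ok := asCloser(field); ok {
		fmt.Println("closing:", c.Close())
	}
}
```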
diff --git a/vendor/github.com/hashicorp/terraform/helper/shadow/value.go b/vendor/github.com/hashicorp/terraform/helper/shadow/value.go index 2413335..178b7e7 100644 --- a/vendor/github.com/hashicorp/terraform/helper/shadow/value.go +++ b/vendor/github.com/hashicorp/terraform/helper/shadow/value.go | |||
@@ -26,6 +26,14 @@ type Value struct { | |||
26 | valueSet bool | 26 | valueSet bool |
27 | } | 27 | } |
28 | 28 | ||
29 | func (v *Value) Lock() { | ||
30 | v.lock.Lock() | ||
31 | } | ||
32 | |||
33 | func (v *Value) Unlock() { | ||
34 | v.lock.Unlock() | ||
35 | } | ||
36 | |||
29 | // Close closes the value. This can never fail. For a definition of | 37 | // Close closes the value. This can never fail. For a definition of |
30 | // "close" see the struct docs. | 38 | // "close" see the struct docs. |
31 | func (w *Value) Close() error { | 39 | func (w *Value) Close() error { |
diff --git a/vendor/github.com/hashicorp/terraform/moduledeps/dependencies.go b/vendor/github.com/hashicorp/terraform/moduledeps/dependencies.go new file mode 100644 index 0000000..87c8431 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/moduledeps/dependencies.go | |||
@@ -0,0 +1,43 @@ | |||
1 | package moduledeps | ||
2 | |||
3 | import ( | ||
4 | "github.com/hashicorp/terraform/plugin/discovery" | ||
5 | ) | ||
6 | |||
7 | // Providers describes a set of provider dependencies for a given module. | ||
8 | // | ||
9 | // Each named provider instance can have one version constraint. | ||
10 | type Providers map[ProviderInstance]ProviderDependency | ||
11 | |||
12 | // ProviderDependency describes the dependency for a particular provider | ||
13 | // instance, including both the set of allowed versions and the reason for | ||
14 | // the dependency. | ||
15 | type ProviderDependency struct { | ||
16 | Constraints discovery.Constraints | ||
17 | Reason ProviderDependencyReason | ||
18 | } | ||
19 | |||
20 | // ProviderDependencyReason is an enumeration of reasons why a dependency might be | ||
21 | // present. | ||
22 | type ProviderDependencyReason int | ||
23 | |||
24 | const ( | ||
25 | // ProviderDependencyExplicit means that there is an explicit "provider" | ||
26 | // block in the configuration for this module. | ||
27 | ProviderDependencyExplicit ProviderDependencyReason = iota | ||
28 | |||
29 | // ProviderDependencyImplicit means that there is no explicit "provider" | ||
30 | // block but there is at least one resource that uses this provider. | ||
31 | ProviderDependencyImplicit | ||
32 | |||
33 | // ProviderDependencyInherited is a special case of | ||
34 | // ProviderDependencyImplicit where a parent module has defined a | ||
35 | // configuration for the provider that has been inherited by at least one | ||
36 | // resource in this module. | ||
37 | ProviderDependencyInherited | ||
38 | |||
39 | // ProviderDependencyFromState means that this provider is not currently | ||
40 | // referenced by configuration at all, but some existing instances in | ||
41 | // the state still depend on it. | ||
42 | ProviderDependencyFromState | ||
43 | ) | ||
diff --git a/vendor/github.com/hashicorp/terraform/moduledeps/doc.go b/vendor/github.com/hashicorp/terraform/moduledeps/doc.go new file mode 100644 index 0000000..7eff083 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/moduledeps/doc.go | |||
@@ -0,0 +1,7 @@ | |||
1 | // Package moduledeps contains types that can be used to describe the | ||
2 | // providers required for all of the modules in a module tree. | ||
3 | // | ||
4 | // It does not itself contain the functionality for populating such | ||
5 | // data structures; that's in Terraform core, since this package intentionally | ||
6 | // does not depend on terraform core to avoid package dependency cycles. | ||
7 | package moduledeps | ||
diff --git a/vendor/github.com/hashicorp/terraform/moduledeps/module.go b/vendor/github.com/hashicorp/terraform/moduledeps/module.go new file mode 100644 index 0000000..d6cbaf5 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/moduledeps/module.go | |||
@@ -0,0 +1,204 @@ | |||
1 | package moduledeps | ||
2 | |||
3 | import ( | ||
4 | "sort" | ||
5 | "strings" | ||
6 | |||
7 | "github.com/hashicorp/terraform/plugin/discovery" | ||
8 | ) | ||
9 | |||
10 | // Module represents the dependencies of a single module, as well being | ||
11 | // a node in a tree of such structures representing the dependencies of | ||
12 | // an entire configuration. | ||
13 | type Module struct { | ||
14 | Name string | ||
15 | Providers Providers | ||
16 | Children []*Module | ||
17 | } | ||
18 | |||
19 | // WalkFunc is a callback type for use with Module.WalkTree | ||
20 | type WalkFunc func(path []string, parent *Module, current *Module) error | ||
21 | |||
22 | // WalkTree calls the given callback once for the receiver and then | ||
23 | // once for each descendent, in an order such that parents are called | ||
24 | // before their children and siblings are called in the order they | ||
25 | // appear in the Children slice. | ||
26 | // | ||
27 | // When calling the callback, parent will be nil for the first call | ||
28 | // for the receiving module, and then set to the direct parent of | ||
29 | // each module for the subsequent calls. | ||
30 | // | ||
31 | // The path given to the callback is valid only until the callback | ||
32 | // returns, after which it will be mutated and reused. Callbacks must | ||
33 | // therefore copy the path slice if they wish to retain it. | ||
34 | // | ||
35 | // If the given callback returns an error, the walk will be aborted at | ||
36 | // that point and that error returned to the caller. | ||
37 | // | ||
38 | // This function is not thread-safe for concurrent modifications of the | ||
39 | // data structure, so it's the caller's responsibility to arrange for that | ||
40 | // should it be needed. | ||
41 | // | ||
42 | // It is safe for a callback to modify the descendents of the "current" | ||
43 | // module, including the ordering of the Children slice itself, but the | ||
44 | // callback MUST NOT modify the parent module. | ||
45 | func (m *Module) WalkTree(cb WalkFunc) error { | ||
46 | return walkModuleTree(make([]string, 0, 1), nil, m, cb) | ||
47 | } | ||
48 | |||
49 | func walkModuleTree(path []string, parent *Module, current *Module, cb WalkFunc) error { | ||
50 | path = append(path, current.Name) | ||
51 | err := cb(path, parent, current) | ||
52 | if err != nil { | ||
53 | return err | ||
54 | } | ||
55 | |||
56 | for _, child := range current.Children { | ||
57 | err := walkModuleTree(path, current, child, cb) | ||
58 | if err != nil { | ||
59 | return err | ||
60 | } | ||
61 | } | ||
62 | return nil | ||
63 | } | ||
64 | |||
65 | // SortChildren sorts the Children slice into lexicographic order by | ||
66 | // name, in-place. | ||
67 | // | ||
68 | // This is primarily useful prior to calling WalkTree so that the walk | ||
69 | // will proceed in a consistent order. | ||
70 | func (m *Module) SortChildren() { | ||
71 | sort.Sort(sortModules{m.Children}) | ||
72 | } | ||
73 | |||
74 | // SortDescendents is a convenience wrapper for calling SortChildren on | ||
75 | // the receiver and all of its descendent modules. | ||
76 | func (m *Module) SortDescendents() { | ||
77 | m.WalkTree(func(path []string, parent *Module, current *Module) error { | ||
78 | current.SortChildren() | ||
79 | return nil | ||
80 | }) | ||
81 | } | ||
82 | |||
83 | type sortModules struct { | ||
84 | modules []*Module | ||
85 | } | ||
86 | |||
87 | func (s sortModules) Len() int { | ||
88 | return len(s.modules) | ||
89 | } | ||
90 | |||
91 | func (s sortModules) Less(i, j int) bool { | ||
92 | cmp := strings.Compare(s.modules[i].Name, s.modules[j].Name) | ||
93 | return cmp < 0 | ||
94 | } | ||
95 | |||
96 | func (s sortModules) Swap(i, j int) { | ||
97 | s.modules[i], s.modules[j] = s.modules[j], s.modules[i] | ||
98 | } | ||
99 | |||
100 | // PluginRequirements produces a PluginRequirements structure that can | ||
101 | // be used with discovery.PluginMetaSet.ConstrainVersions to identify | ||
102 | // suitable plugins to satisfy the module's provider dependencies. | ||
103 | // | ||
104 | // This method only considers the direct requirements of the receiver. | ||
105 | // Use AllPluginRequirements to flatten the dependencies for the | ||
106 | // entire tree of modules. | ||
107 | // | ||
108 | // Requirements returned by this method include only version constraints, | ||
109 | // and apply no particular SHA256 hash constraint. | ||
110 | func (m *Module) PluginRequirements() discovery.PluginRequirements { | ||
111 | ret := make(discovery.PluginRequirements) | ||
112 | for inst, dep := range m.Providers { | ||
113 | // m.Providers is keyed on provider names, such as "aws.foo". | ||
114 | // a PluginRequirements wants keys to be provider *types*, such | ||
115 | // as "aws". If there are multiple aliases for the same | ||
116 | // provider then we will flatten them into a single requirement | ||
117 | // by combining their constraint sets. | ||
118 | pty := inst.Type() | ||
119 | if existing, exists := ret[pty]; exists { | ||
120 | ret[pty].Versions = existing.Versions.Append(dep.Constraints) | ||
121 | } else { | ||
122 | ret[pty] = &discovery.PluginConstraints{ | ||
123 | Versions: dep.Constraints, | ||
124 | } | ||
125 | } | ||
126 | } | ||
127 | return ret | ||
128 | } | ||
129 | |||
130 | // AllPluginRequirements calls PluginRequirements for the receiver and all | ||
131 | // of its descendents, and merges the result into a single PluginRequirements | ||
132 | // structure that would satisfy all of the modules together. | ||
133 | // | ||
134 | // Requirements returned by this method include only version constraints, | ||
135 | // and apply no particular SHA256 hash constraint. | ||
136 | func (m *Module) AllPluginRequirements() discovery.PluginRequirements { | ||
137 | var ret discovery.PluginRequirements | ||
138 | m.WalkTree(func(path []string, parent *Module, current *Module) error { | ||
139 | ret = ret.Merge(current.PluginRequirements()) | ||
140 | return nil | ||
141 | }) | ||
142 | return ret | ||
143 | } | ||
144 | |||
145 | // Equal returns true if the receiver is the root of an identical tree | ||
146 | // to the other given Module. This is a deep comparison that considers | ||
147 | // the equality of all downstream modules too. | ||
148 | // | ||
149 | // The children are considered to be ordered, so callers may wish to use | ||
150 | // SortDescendents first to normalize the order of the slices of child nodes. | ||
151 | // | ||
152 | // The implementation of this function is not optimized since it is provided | ||
153 | // primarily for use in tests. | ||
154 | func (m *Module) Equal(other *Module) bool { | ||
155 | // take care of nils first | ||
156 | if m == nil && other == nil { | ||
157 | return true | ||
158 | } else if (m == nil && other != nil) || (m != nil && other == nil) { | ||
159 | return false | ||
160 | } | ||
161 | |||
162 | if m.Name != other.Name { | ||
163 | return false | ||
164 | } | ||
165 | |||
166 | if len(m.Providers) != len(other.Providers) { | ||
167 | return false | ||
168 | } | ||
169 | if len(m.Children) != len(other.Children) { | ||
170 | return false | ||
171 | } | ||
172 | |||
173 | // Can't use reflect.DeepEqual on this provider structure because | ||
174 | // the nested Constraints objects contain function pointers that | ||
175 | // never compare as equal. So we'll need to walk it the long way. | ||
176 | for inst, dep := range m.Providers { | ||
177 | if _, exists := other.Providers[inst]; !exists { | ||
178 | return false | ||
179 | } | ||
180 | |||
181 | if dep.Reason != other.Providers[inst].Reason { | ||
182 | return false | ||
183 | } | ||
184 | |||
185 | // Constraints are not too easy to compare robustly, so | ||
186 | // we'll just use their string representations as a proxy | ||
187 | // for now. | ||
188 | if dep.Constraints.String() != other.Providers[inst].Constraints.String() { | ||
189 | return false | ||
190 | } | ||
191 | } | ||
192 | |||
193 | // Above we already checked that we have the same number of children | ||
194 | // in each module, so now we just need to check that they are | ||
195 | // recursively equal. | ||
196 | for i := range m.Children { | ||
197 | if !m.Children[i].Equal(other.Children[i]) { | ||
198 | return false | ||
199 | } | ||
200 | } | ||
201 | |||
202 | // If we fall out here then they are equal | ||
203 | return true | ||
204 | } | ||
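A minimal sketch of how WalkTree and SortDescendents compose, assuming Module exposes the Name and Children fields that the methods above operate on; the module names below are purely illustrative.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/moduledeps"
)

func main() {
	// A tiny tree: a root module with two children, deliberately out of order.
	root := &moduledeps.Module{
		Name: "root",
		Children: []*moduledeps.Module{
			{Name: "vpc"},
			{Name: "app"},
		},
	}

	// Normalize child ordering so the walk below is deterministic.
	root.SortDescendents()

	// Visit every module; the path slice is only valid inside the callback.
	err := root.WalkTree(func(path []string, parent, current *moduledeps.Module) error {
		fmt.Printf("visited %v\n", path)
		return nil
	})
	if err != nil {
		panic(err)
	}
}
```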
diff --git a/vendor/github.com/hashicorp/terraform/moduledeps/provider.go b/vendor/github.com/hashicorp/terraform/moduledeps/provider.go new file mode 100644 index 0000000..89ceefb --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/moduledeps/provider.go | |||
@@ -0,0 +1,30 @@ | |||
1 | package moduledeps | ||
2 | |||
3 | import ( | ||
4 | "strings" | ||
5 | ) | ||
6 | |||
7 | // ProviderInstance describes a particular provider instance by its full name, | ||
8 | // like "null" or "aws.foo". | ||
9 | type ProviderInstance string | ||
10 | |||
11 | // Type returns the provider type of this instance. For example, for an instance | ||
12 | // named "aws.foo" the type is "aws". | ||
13 | func (p ProviderInstance) Type() string { | ||
14 | t := string(p) | ||
15 | if dotPos := strings.Index(t, "."); dotPos != -1 { | ||
16 | t = t[:dotPos] | ||
17 | } | ||
18 | return t | ||
19 | } | ||
20 | |||
21 | // Alias returns the alias of this provider, if any. An instance named "aws.foo" | ||
22 | // has the alias "foo", while an instance named just "docker" has no alias, | ||
23 | // so the empty string would be returned. | ||
24 | func (p ProviderInstance) Alias() string { | ||
25 | t := string(p) | ||
26 | if dotPos := strings.Index(t, "."); dotPos != -1 { | ||
27 | return t[dotPos+1:] | ||
28 | } | ||
29 | return "" | ||
30 | } | ||
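The split between Type and Alias is easiest to see with a couple of concrete values; a short hedged sketch:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/moduledeps"
)

func main() {
	for _, name := range []string{"aws.foo", "null"} {
		inst := moduledeps.ProviderInstance(name)
		// "aws.foo" has type "aws" and alias "foo"; "null" has no alias.
		fmt.Printf("%-8s type=%q alias=%q\n", name, inst.Type(), inst.Alias())
	}
}
```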
diff --git a/vendor/github.com/hashicorp/terraform/plugin/client.go b/vendor/github.com/hashicorp/terraform/plugin/client.go new file mode 100644 index 0000000..3a5cb7a --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/plugin/client.go | |||
@@ -0,0 +1,24 @@ | |||
1 | package plugin | ||
2 | |||
3 | import ( | ||
4 | "os/exec" | ||
5 | |||
6 | plugin "github.com/hashicorp/go-plugin" | ||
7 | "github.com/hashicorp/terraform/plugin/discovery" | ||
8 | ) | ||
9 | |||
10 | // ClientConfig returns a configuration object that can be used to instantiate | ||
11 | // a client for the plugin described by the given metadata. | ||
12 | func ClientConfig(m discovery.PluginMeta) *plugin.ClientConfig { | ||
13 | return &plugin.ClientConfig{ | ||
14 | Cmd: exec.Command(m.Path), | ||
15 | HandshakeConfig: Handshake, | ||
16 | Managed: true, | ||
17 | Plugins: PluginMap, | ||
18 | } | ||
19 | } | ||
20 | |||
21 | // Client returns a plugin client for the plugin described by the given metadata. | ||
22 | func Client(m discovery.PluginMeta) *plugin.Client { | ||
23 | return plugin.NewClient(ClientConfig(m)) | ||
24 | } | ||
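A hedged sketch of wiring a discovered plugin into a go-plugin client; the binary path and version are hypothetical, and a real caller would go on to dispense the provider interface from the client as Terraform core does.

```go
package main

import (
	tfplugin "github.com/hashicorp/terraform/plugin"
	"github.com/hashicorp/terraform/plugin/discovery"
)

func main() {
	meta := discovery.PluginMeta{
		Name:    "statuscake",
		Version: "0.1.0",
		Path:    "/path/to/terraform-provider-statuscake_v0.1.0_x4", // hypothetical binary
	}

	// Client builds the go-plugin client; the subprocess only starts on first use.
	client := tfplugin.Client(meta)
	defer client.Kill() // always clean up the managed subprocess
}
```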
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/error.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/error.go new file mode 100644 index 0000000..df855a7 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/error.go | |||
@@ -0,0 +1,30 @@ | |||
1 | package discovery | ||
2 | |||
3 | // Error is a type used to describe situations that the caller must handle | ||
4 | // since they indicate some form of user error. | ||
5 | // | ||
6 | // The functions and methods that return these specialized errors indicate so | ||
7 | // in their documentation. The Error type should not itself be used directly, | ||
8 | // but rather errors should be compared using the == operator with the | ||
9 | // error constants in this package. | ||
10 | // | ||
11 | // Values of this type are _not_ used when the error being reported is an | ||
12 | // operational error (server unavailable, etc) or indicative of a bug in | ||
13 | // this package or its caller. | ||
14 | type Error string | ||
15 | |||
16 | // ErrorNoSuitableVersion indicates that a suitable version (meeting given | ||
17 | // constraints) is not available. | ||
18 | const ErrorNoSuitableVersion = Error("no suitable version is available") | ||
19 | |||
20 | // ErrorNoVersionCompatible indicates that all of the available versions | ||
21 | // that otherwise met constraints are not compatible with the current | ||
22 | // version of Terraform. | ||
23 | const ErrorNoVersionCompatible = Error("no available version is compatible with this version of Terraform") | ||
24 | |||
26 | // ErrorNoSuchProvider indicates that no provider exists with the given name | ||
26 | const ErrorNoSuchProvider = Error("no provider exists with the given name") | ||
27 | |||
28 | func (err Error) Error() string { | ||
29 | return string(err) | ||
30 | } | ||
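Because these are simple string-based sentinel values, callers compare them with ==, as the Error doc comment above suggests. A hedged sketch of the kind of translation a caller might do:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/plugin/discovery"
)

// report maps the package's sentinel errors to user-oriented messages.
func report(err error) string {
	switch err {
	case nil:
		return "ok"
	case discovery.ErrorNoSuchProvider:
		return "no provider with that name is available"
	case discovery.ErrorNoSuitableVersion:
		return "no available version satisfies the configured constraints"
	case discovery.ErrorNoVersionCompatible:
		return "matching versions exist, but none is compatible with this Terraform"
	default:
		return fmt.Sprintf("unexpected error: %s", err)
	}
}

func main() {
	fmt.Println(report(discovery.ErrorNoSuitableVersion))
}
```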
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/find.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/find.go new file mode 100644 index 0000000..f5bc4c1 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/find.go | |||
@@ -0,0 +1,168 @@ | |||
1 | package discovery | ||
2 | |||
3 | import ( | ||
4 | "io/ioutil" | ||
5 | "log" | ||
6 | "path/filepath" | ||
7 | "strings" | ||
8 | ) | ||
9 | |||
10 | // FindPlugins looks in the given directories for files whose filenames | ||
11 | // suggest that they are plugins of the given kind (e.g. "provider") and | ||
12 | // returns a PluginMetaSet representing the discovered potential-plugins. | ||
13 | // | ||
14 | // Currently this supports two different naming schemes. The current | ||
15 | // standard naming scheme is a subdirectory called $GOOS-$GOARCH containing | ||
16 | // files named terraform-$KIND-$NAME_v$VERSION. The legacy naming scheme is | ||
17 | // files directly in the given directory whose names are like | ||
18 | // terraform-$KIND-$NAME. | ||
19 | // | ||
20 | // Only one plugin will be returned for each unique plugin (name, version) | ||
21 | // pair, with preference given to files found in earlier directories. | ||
22 | // | ||
23 | // This is a convenience wrapper around FindPluginPaths and ResolvePluginPaths. | ||
24 | func FindPlugins(kind string, dirs []string) PluginMetaSet { | ||
25 | return ResolvePluginPaths(FindPluginPaths(kind, dirs)) | ||
26 | } | ||
27 | |||
28 | // FindPluginPaths looks in the given directories for files whose filenames | ||
29 | // suggest that they are plugins of the given kind (e.g. "provider"). | ||
30 | // | ||
31 | // The return value is a list of absolute paths that appear to refer to | ||
32 | // plugins in the given directories, based only on what can be inferred | ||
33 | // from the naming scheme. The paths returned are ordered such that files | ||
34 | // in later dirs appear after files in earlier dirs in the given directory | ||
35 | // list. Within the same directory plugins are returned in a consistent but | ||
36 | // undefined order. | ||
37 | func FindPluginPaths(kind string, dirs []string) []string { | ||
38 | // This is just a thin wrapper around findPluginPaths so that we can | ||
39 | // use the latter in tests with a fake machineName so we can use our | ||
40 | // test fixtures. | ||
41 | return findPluginPaths(kind, dirs) | ||
42 | } | ||
43 | |||
44 | func findPluginPaths(kind string, dirs []string) []string { | ||
45 | prefix := "terraform-" + kind + "-" | ||
46 | |||
47 | ret := make([]string, 0, len(dirs)) | ||
48 | |||
49 | for _, dir := range dirs { | ||
50 | items, err := ioutil.ReadDir(dir) | ||
51 | if err != nil { | ||
52 | // Ignore missing dirs, non-dirs, etc | ||
53 | continue | ||
54 | } | ||
55 | |||
56 | log.Printf("[DEBUG] checking for %s in %q", kind, dir) | ||
57 | |||
58 | for _, item := range items { | ||
59 | fullName := item.Name() | ||
60 | |||
61 | if !strings.HasPrefix(fullName, prefix) { | ||
62 | log.Printf("[DEBUG] skipping %q, not a %s", fullName, kind) | ||
63 | continue | ||
64 | } | ||
65 | |||
66 | // New-style paths must have a version segment in filename | ||
67 | if strings.Contains(strings.ToLower(fullName), "_v") { | ||
68 | absPath, err := filepath.Abs(filepath.Join(dir, fullName)) | ||
69 | if err != nil { | ||
70 | log.Printf("[ERROR] plugin filepath error: %s", err) | ||
71 | continue | ||
72 | } | ||
73 | |||
74 | log.Printf("[DEBUG] found %s %q", kind, fullName) | ||
75 | ret = append(ret, filepath.Clean(absPath)) | ||
76 | continue | ||
77 | } | ||
78 | |||
79 | // Legacy style with files directly in the base directory | ||
80 | absPath, err := filepath.Abs(filepath.Join(dir, fullName)) | ||
81 | if err != nil { | ||
82 | log.Printf("[ERROR] plugin filepath error: %s", err) | ||
83 | continue | ||
84 | } | ||
85 | |||
86 | log.Printf("[WARNING] found legacy %s %q", kind, fullName) | ||
87 | |||
88 | ret = append(ret, filepath.Clean(absPath)) | ||
89 | } | ||
90 | } | ||
91 | |||
92 | return ret | ||
93 | } | ||
94 | |||
95 | // ResolvePluginPaths takes a list of paths to plugin executables (as returned | ||
96 | // by e.g. FindPluginPaths) and produces a PluginMetaSet describing the | ||
97 | // referenced plugins. | ||
98 | // | ||
99 | // If the same combination of plugin name and version appears multiple times, | ||
100 | // the earlier reference will be preferred. Several different versions of | ||
101 | // the same plugin name may be returned, in which case the methods of | ||
102 | // PluginMetaSet can be used to filter down. | ||
103 | func ResolvePluginPaths(paths []string) PluginMetaSet { | ||
104 | s := make(PluginMetaSet) | ||
105 | |||
106 | type nameVersion struct { | ||
107 | Name string | ||
108 | Version string | ||
109 | } | ||
110 | found := make(map[nameVersion]struct{}) | ||
111 | |||
112 | for _, path := range paths { | ||
113 | baseName := strings.ToLower(filepath.Base(path)) | ||
114 | if !strings.HasPrefix(baseName, "terraform-") { | ||
115 | // Should never happen with reasonable input | ||
116 | continue | ||
117 | } | ||
118 | |||
119 | baseName = baseName[10:] | ||
120 | firstDash := strings.Index(baseName, "-") | ||
121 | if firstDash == -1 { | ||
122 | // Should never happen with reasonable input | ||
123 | continue | ||
124 | } | ||
125 | |||
126 | baseName = baseName[firstDash+1:] | ||
127 | if baseName == "" { | ||
128 | // Should never happen with reasonable input | ||
129 | continue | ||
130 | } | ||
131 | |||
132 | // Trim the .exe suffix used on Windows before we start wrangling | ||
133 | // the remainder of the path. | ||
134 | if strings.HasSuffix(baseName, ".exe") { | ||
135 | baseName = baseName[:len(baseName)-4] | ||
136 | } | ||
137 | |||
138 | parts := strings.SplitN(baseName, "_v", 2) | ||
139 | name := parts[0] | ||
140 | version := VersionZero | ||
141 | if len(parts) == 2 { | ||
142 | version = parts[1] | ||
143 | } | ||
144 | |||
145 | // Auto-installed plugins contain an extra name portion representing | ||
146 | // the expected plugin protocol version (e.g. "_x4"), which we must trim off. | ||
147 | if underX := strings.Index(version, "_x"); underX != -1 { | ||
148 | version = version[:underX] | ||
149 | } | ||
150 | |||
151 | if _, ok := found[nameVersion{name, version}]; ok { | ||
152 | // Skip duplicate versions of the same plugin | ||
153 | // (We do this during this step because after this we will be | ||
154 | // dealing with sets and thus lose our ordering with which to | ||
155 | // decide preference.) | ||
156 | continue | ||
157 | } | ||
158 | |||
159 | s.Add(PluginMeta{ | ||
160 | Name: name, | ||
161 | Version: VersionStr(version), | ||
162 | Path: path, | ||
163 | }) | ||
164 | found[nameVersion{name, version}] = struct{}{} | ||
165 | } | ||
166 | |||
167 | return s | ||
168 | } | ||
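A hedged sketch of the discovery flow built on FindPlugins: scan some directories, drop metas with unparseable versions, then narrow to one plugin name. The directory paths and the "statuscake" name are illustrative only.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/plugin/discovery"
)

func main() {
	// Earlier directories win when the same (name, version) appears twice.
	dirs := []string{".terraform/plugins/linux_amd64", "/usr/local/share/terraform/plugins"}

	all := discovery.FindPlugins("provider", dirs)
	valid, invalid := all.ValidateVersions()
	for p := range invalid {
		fmt.Printf("ignoring %s: unparseable version %q\n", p.Name, p.Version)
	}

	candidates := valid.WithName("statuscake")
	if candidates.Count() == 0 {
		fmt.Println("no statuscake provider plugins found")
		return
	}
	newest := candidates.Newest()
	fmt.Printf("newest: %s %s (%s)\n", newest.Name, newest.Version, newest.Path)
}
```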
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/get.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/get.go new file mode 100644 index 0000000..241b5cb --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/get.go | |||
@@ -0,0 +1,424 @@ | |||
1 | package discovery | ||
2 | |||
3 | import ( | ||
4 | "errors" | ||
5 | "fmt" | ||
6 | "io/ioutil" | ||
7 | "log" | ||
8 | "net/http" | ||
9 | "os" | ||
10 | "runtime" | ||
11 | "strconv" | ||
12 | "strings" | ||
13 | |||
14 | "golang.org/x/net/html" | ||
15 | |||
16 | cleanhttp "github.com/hashicorp/go-cleanhttp" | ||
17 | getter "github.com/hashicorp/go-getter" | ||
18 | multierror "github.com/hashicorp/go-multierror" | ||
19 | ) | ||
20 | |||
21 | // Releases are located by parsing the html listing from releases.hashicorp.com. | ||
22 | // | ||
23 | // The URL for releases follows the pattern: | ||
24 | // https://releases.hashicorp.com/terraform-provider-name/<x.y.z>/terraform-provider-name_<x.y.z>_<os>_<arch>.<ext> | ||
25 | // | ||
26 | // The plugin protocol version will be saved with the release and returned in | ||
27 | // the header X-TERRAFORM_PROTOCOL_VERSION. | ||
28 | |||
29 | const protocolVersionHeader = "x-terraform-protocol-version" | ||
30 | |||
31 | var releaseHost = "https://releases.hashicorp.com" | ||
32 | |||
33 | var httpClient = cleanhttp.DefaultClient() | ||
34 | |||
35 | // An Installer maintains a local cache of plugins by downloading plugins | ||
36 | // from an online repository. | ||
37 | type Installer interface { | ||
38 | Get(name string, req Constraints) (PluginMeta, error) | ||
39 | PurgeUnused(used map[string]PluginMeta) (removed PluginMetaSet, err error) | ||
40 | } | ||
41 | |||
42 | // ProviderInstaller is an Installer implementation that knows how to | ||
43 | // download Terraform providers from the official HashiCorp releases service | ||
44 | // into a local directory. The files downloaded are compliant with the | ||
45 | // naming scheme expected by FindPlugins, so the target directory of a | ||
46 | // provider installer can be used as one of several plugin discovery sources. | ||
47 | type ProviderInstaller struct { | ||
48 | Dir string | ||
49 | |||
50 | PluginProtocolVersion uint | ||
51 | |||
52 | // OS and Arch specify the OS and architecture that should be used when | ||
53 | // installing plugins. These use the same labels as the runtime.GOOS and | ||
54 | // runtime.GOARCH variables respectively, and indeed the values of these | ||
55 | // are used as defaults if either of these is the empty string. | ||
56 | OS string | ||
57 | Arch string | ||
58 | |||
59 | // Skip checksum and signature verification | ||
60 | SkipVerify bool | ||
61 | } | ||
62 | |||
63 | // Get is part of an implementation of type Installer, and attempts to download | ||
64 | // and install a Terraform provider matching the given constraints. | ||
65 | // | ||
66 | // This method may return one of a number of sentinel errors from this | ||
67 | // package to indicate issues that are likely to be resolvable via user action: | ||
68 | // | ||
69 | // ErrorNoSuchProvider: no provider with the given name exists in the repository. | ||
70 | // ErrorNoSuitableVersion: the provider exists but no available version matches constraints. | ||
71 | // ErrorNoVersionCompatible: a plugin was found within the constraints but it is | ||
72 | // incompatible with the current Terraform version. | ||
73 | // | ||
74 | // These errors should be recognized and handled as special cases by the caller | ||
75 | // to present a suitable user-oriented error message. | ||
76 | // | ||
77 | // All other errors indicate an internal problem that is likely _not_ solvable | ||
78 | // through user action, or at least not within Terraform's scope. Error messages | ||
79 | // are produced under the assumption that if presented to the user they will | ||
80 | // be presented alongside context about what is being installed, and thus the | ||
81 | // error messages do not redundantly include such information. | ||
82 | func (i *ProviderInstaller) Get(provider string, req Constraints) (PluginMeta, error) { | ||
83 | versions, err := i.listProviderVersions(provider) | ||
84 | // TODO: return multiple errors | ||
85 | if err != nil { | ||
86 | return PluginMeta{}, err | ||
87 | } | ||
88 | |||
89 | if len(versions) == 0 { | ||
90 | return PluginMeta{}, ErrorNoSuitableVersion | ||
91 | } | ||
92 | |||
93 | versions = allowedVersions(versions, req) | ||
94 | if len(versions) == 0 { | ||
95 | return PluginMeta{}, ErrorNoSuitableVersion | ||
96 | } | ||
97 | |||
98 | // sort them newest to oldest | ||
99 | Versions(versions).Sort() | ||
100 | |||
101 | // take the first matching plugin we find | ||
102 | for _, v := range versions { | ||
103 | url := i.providerURL(provider, v.String()) | ||
104 | |||
105 | if !i.SkipVerify { | ||
106 | sha256, err := i.getProviderChecksum(provider, v.String()) | ||
107 | if err != nil { | ||
108 | return PluginMeta{}, err | ||
109 | } | ||
110 | |||
111 | // add the checksum parameter for go-getter to verify the download for us. | ||
112 | if sha256 != "" { | ||
113 | url = url + "?checksum=sha256:" + sha256 | ||
114 | } | ||
115 | } | ||
116 | |||
117 | log.Printf("[DEBUG] fetching provider info for %s version %s", provider, v) | ||
118 | if checkPlugin(url, i.PluginProtocolVersion) { | ||
119 | log.Printf("[DEBUG] getting provider %q version %q at %s", provider, v, url) | ||
120 | err := getter.Get(i.Dir, url) | ||
121 | if err != nil { | ||
122 | return PluginMeta{}, err | ||
123 | } | ||
124 | |||
125 | // Find what we just installed | ||
126 | // (This is weird, because go-getter doesn't directly return | ||
127 | // information about what was extracted, and we just extracted | ||
128 | // the archive directly into a shared dir here.) | ||
129 | log.Printf("[DEBUG] looking for the %s %s plugin we just installed", provider, v) | ||
130 | metas := FindPlugins("provider", []string{i.Dir}) | ||
131 | log.Printf("[DEBUG] all plugins found %#v", metas) | ||
132 | metas, _ = metas.ValidateVersions() | ||
133 | metas = metas.WithName(provider).WithVersion(v) | ||
134 | log.Printf("[DEBUG] filtered plugins %#v", metas) | ||
135 | if metas.Count() == 0 { | ||
136 | // This should never happen. Suggests that the release archive | ||
137 | // contains an executable file whose name doesn't match the | ||
138 | // expected convention. | ||
139 | return PluginMeta{}, fmt.Errorf( | ||
140 | "failed to find installed plugin version %s; this is a bug in Terraform and should be reported", | ||
141 | v, | ||
142 | ) | ||
143 | } | ||
144 | |||
145 | if metas.Count() > 1 { | ||
146 | // This should also never happen, and suggests that a | ||
147 | // particular version was re-released with a different | ||
148 | // executable filename. We consider releases as immutable, so | ||
149 | // this is an error. | ||
150 | return PluginMeta{}, fmt.Errorf( | ||
151 | "multiple plugins installed for version %s; this is a bug in Terraform and should be reported", | ||
152 | v, | ||
153 | ) | ||
154 | } | ||
155 | |||
156 | // By now we know we have exactly one meta, and so "Newest" will | ||
157 | // return that one. | ||
158 | return metas.Newest(), nil | ||
159 | } | ||
160 | |||
161 | log.Printf("[INFO] incompatible ProtocolVersion for %s version %s", provider, v) | ||
162 | } | ||
163 | |||
164 | return PluginMeta{}, ErrorNoVersionCompatible | ||
165 | } | ||
166 | |||
167 | func (i *ProviderInstaller) PurgeUnused(used map[string]PluginMeta) (PluginMetaSet, error) { | ||
168 | purge := make(PluginMetaSet) | ||
169 | |||
170 | present := FindPlugins("provider", []string{i.Dir}) | ||
171 | for meta := range present { | ||
172 | chosen, ok := used[meta.Name] | ||
173 | if !ok { | ||
174 | purge.Add(meta) | ||
175 | } | ||
176 | if chosen.Path != meta.Path { | ||
177 | purge.Add(meta) | ||
178 | } | ||
179 | } | ||
180 | |||
181 | removed := make(PluginMetaSet) | ||
182 | var errs error | ||
183 | for meta := range purge { | ||
184 | path := meta.Path | ||
185 | err := os.Remove(path) | ||
186 | if err != nil { | ||
187 | errs = multierror.Append(errs, fmt.Errorf( | ||
188 | "failed to remove unused provider plugin %s: %s", | ||
189 | path, err, | ||
190 | )) | ||
191 | } else { | ||
192 | removed.Add(meta) | ||
193 | } | ||
194 | } | ||
195 | |||
196 | return removed, errs | ||
197 | } | ||
198 | |||
199 | // Plugins are referred to by the short name, but all URLs and files will use | ||
200 | // the full name prefixed with terraform-<plugin_type>- | ||
201 | func (i *ProviderInstaller) providerName(name string) string { | ||
202 | return "terraform-provider-" + name | ||
203 | } | ||
204 | |||
205 | func (i *ProviderInstaller) providerFileName(name, version string) string { | ||
206 | os := i.OS | ||
207 | arch := i.Arch | ||
208 | if os == "" { | ||
209 | os = runtime.GOOS | ||
210 | } | ||
211 | if arch == "" { | ||
212 | arch = runtime.GOARCH | ||
213 | } | ||
214 | return fmt.Sprintf("%s_%s_%s_%s.zip", i.providerName(name), version, os, arch) | ||
215 | } | ||
216 | |||
217 | // providerVersionsURL returns the path to the released versions directory for the provider: | ||
218 | // https://releases.hashicorp.com/terraform-provider-name/ | ||
219 | func (i *ProviderInstaller) providerVersionsURL(name string) string { | ||
220 | return releaseHost + "/" + i.providerName(name) + "/" | ||
221 | } | ||
222 | |||
223 | // providerURL returns the full path to the provider file, using the current OS | ||
224 | // and ARCH: | ||
225 | // .../terraform-provider-name/<x.y.z>/terraform-provider-name_<x.y.z>_<os>_<arch>.<ext> | ||
226 | func (i *ProviderInstaller) providerURL(name, version string) string { | ||
227 | return fmt.Sprintf("%s%s/%s", i.providerVersionsURL(name), version, i.providerFileName(name, version)) | ||
228 | } | ||
229 | |||
230 | func (i *ProviderInstaller) providerChecksumURL(name, version string) string { | ||
231 | fileName := fmt.Sprintf("%s_%s_SHA256SUMS", i.providerName(name), version) | ||
232 | u := fmt.Sprintf("%s%s/%s", i.providerVersionsURL(name), version, fileName) | ||
233 | return u | ||
234 | } | ||
235 | |||
236 | func (i *ProviderInstaller) getProviderChecksum(name, version string) (string, error) { | ||
237 | checksums, err := getPluginSHA256SUMs(i.providerChecksumURL(name, version)) | ||
238 | if err != nil { | ||
239 | return "", err | ||
240 | } | ||
241 | |||
242 | return checksumForFile(checksums, i.providerFileName(name, version)), nil | ||
243 | } | ||
244 | |||
245 | // checkPlugin makes a HEAD request to the provided url and reports whether the | ||
246 | // plugin there is protocol-compatible. If the header is not present, we assume | ||
247 | // the latest version will be compatible, and leave the check for discovery or execution. | ||
248 | func checkPlugin(url string, pluginProtocolVersion uint) bool { | ||
249 | resp, err := httpClient.Head(url) | ||
250 | if err != nil { | ||
251 | log.Printf("[ERROR] error fetching plugin headers: %s", err) | ||
252 | return false | ||
253 | } | ||
254 | |||
255 | if resp.StatusCode != http.StatusOK { | ||
256 | log.Println("[ERROR] non-200 status fetching plugin headers:", resp.Status) | ||
257 | return false | ||
258 | } | ||
259 | |||
260 | proto := resp.Header.Get(protocolVersionHeader) | ||
261 | if proto == "" { | ||
262 | // The header isn't present, but we don't make this error fatal since | ||
263 | // the latest version will probably work. | ||
264 | log.Printf("[WARNING] missing %s from: %s", protocolVersionHeader, url) | ||
265 | return true | ||
266 | } | ||
267 | |||
268 | protoVersion, err := strconv.Atoi(proto) | ||
269 | if err != nil { | ||
270 | log.Printf("[ERROR] invalid ProtocolVersion: %s", proto) | ||
271 | return false | ||
272 | } | ||
273 | |||
274 | return protoVersion == int(pluginProtocolVersion) | ||
275 | } | ||
276 | |||
277 | // list the versions available for the named plugin | ||
278 | func (i *ProviderInstaller) listProviderVersions(name string) ([]Version, error) { | ||
279 | versions, err := listPluginVersions(i.providerVersionsURL(name)) | ||
280 | if err != nil { | ||
281 | // listPluginVersions returns a verbose error message indicating | ||
282 | // what was being accessed and what failed | ||
283 | return nil, err | ||
284 | } | ||
285 | return versions, nil | ||
286 | } | ||
287 | |||
288 | var errVersionNotFound = errors.New("version not found") | ||
289 | |||
290 | // take the list of available versions for a plugin, and filter out those that | ||
291 | // don't fit the constraints. | ||
292 | func allowedVersions(available []Version, required Constraints) []Version { | ||
293 | var allowed []Version | ||
294 | |||
295 | for _, v := range available { | ||
296 | if required.Allows(v) { | ||
297 | allowed = append(allowed, v) | ||
298 | } | ||
299 | } | ||
300 | |||
301 | return allowed | ||
302 | } | ||
303 | |||
304 | // return a list of the plugin versions at the given URL | ||
305 | func listPluginVersions(url string) ([]Version, error) { | ||
306 | resp, err := httpClient.Get(url) | ||
307 | if err != nil { | ||
308 | // http library produces a verbose error message that includes the | ||
309 | // URL being accessed, etc. | ||
310 | return nil, err | ||
311 | } | ||
312 | defer resp.Body.Close() | ||
313 | |||
314 | if resp.StatusCode != http.StatusOK { | ||
315 | body, _ := ioutil.ReadAll(resp.Body) | ||
316 | log.Printf("[ERROR] failed to fetch plugin versions from %s\n%s\n%s", url, resp.Status, body) | ||
317 | |||
318 | switch resp.StatusCode { | ||
319 | case http.StatusNotFound, http.StatusForbidden: | ||
320 | // These are treated as indicative of the given name not being | ||
321 | // a valid provider name at all. | ||
322 | return nil, ErrorNoSuchProvider | ||
323 | |||
324 | default: | ||
325 | // All other errors are assumed to be operational problems. | ||
326 | return nil, fmt.Errorf("error accessing %s: %s", url, resp.Status) | ||
327 | } | ||
328 | |||
329 | } | ||
330 | |||
331 | body, err := html.Parse(resp.Body) | ||
332 | if err != nil { | ||
333 | log.Fatal(err) | ||
334 | } | ||
335 | |||
336 | names := []string{} | ||
337 | |||
338 | // all we need to do is list links on the directory listing page that look like plugins | ||
339 | var f func(*html.Node) | ||
340 | f = func(n *html.Node) { | ||
341 | if n.Type == html.ElementNode && n.Data == "a" { | ||
342 | c := n.FirstChild | ||
343 | if c != nil && c.Type == html.TextNode && strings.HasPrefix(c.Data, "terraform-") { | ||
344 | names = append(names, c.Data) | ||
345 | return | ||
346 | } | ||
347 | } | ||
348 | for c := n.FirstChild; c != nil; c = c.NextSibling { | ||
349 | f(c) | ||
350 | } | ||
351 | } | ||
352 | f(body) | ||
353 | |||
354 | return versionsFromNames(names), nil | ||
355 | } | ||
356 | |||
357 | // parse the list of directory names into a list of available versions | ||
358 | func versionsFromNames(names []string) []Version { | ||
359 | var versions []Version | ||
360 | for _, name := range names { | ||
361 | parts := strings.SplitN(name, "_", 2) | ||
362 | if len(parts) == 2 && parts[1] != "" { | ||
363 | v, err := VersionStr(parts[1]).Parse() | ||
364 | if err != nil { | ||
365 | // filter invalid versions scraped from the page | ||
366 | log.Printf("[WARN] invalid version found for %q: %s", name, err) | ||
367 | continue | ||
368 | } | ||
369 | |||
370 | versions = append(versions, v) | ||
371 | } | ||
372 | } | ||
373 | |||
374 | return versions | ||
375 | } | ||
376 | |||
377 | func checksumForFile(sums []byte, name string) string { | ||
378 | for _, line := range strings.Split(string(sums), "\n") { | ||
379 | parts := strings.Fields(line) | ||
380 | if len(parts) > 1 && parts[1] == name { | ||
381 | return parts[0] | ||
382 | } | ||
383 | } | ||
384 | return "" | ||
385 | } | ||
386 | |||
387 | // fetch the SHA256SUMS file provided, and verify its signature. | ||
388 | func getPluginSHA256SUMs(sumsURL string) ([]byte, error) { | ||
389 | sigURL := sumsURL + ".sig" | ||
390 | |||
391 | sums, err := getFile(sumsURL) | ||
392 | if err != nil { | ||
393 | return nil, fmt.Errorf("error fetching checksums: %s", err) | ||
394 | } | ||
395 | |||
396 | sig, err := getFile(sigURL) | ||
397 | if err != nil { | ||
398 | return nil, fmt.Errorf("error fetching checksums signature: %s", err) | ||
399 | } | ||
400 | |||
401 | if err := verifySig(sums, sig); err != nil { | ||
402 | return nil, err | ||
403 | } | ||
404 | |||
405 | return sums, nil | ||
406 | } | ||
407 | |||
408 | func getFile(url string) ([]byte, error) { | ||
409 | resp, err := httpClient.Get(url) | ||
410 | if err != nil { | ||
411 | return nil, err | ||
412 | } | ||
413 | defer resp.Body.Close() | ||
414 | |||
415 | if resp.StatusCode != http.StatusOK { | ||
416 | return nil, fmt.Errorf("%s", resp.Status) | ||
417 | } | ||
418 | |||
419 | data, err := ioutil.ReadAll(resp.Body) | ||
420 | if err != nil { | ||
421 | return data, err | ||
422 | } | ||
423 | return data, nil | ||
424 | } | ||
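A hedged sketch of driving the installer end-to-end. Get reaches out to releases.hashicorp.com, and the target directory, protocol version, and version constraint below are assumptions rather than values taken from this diff.

```go
package main

import (
	"log"

	"github.com/hashicorp/terraform/plugin/discovery"
)

func main() {
	installer := &discovery.ProviderInstaller{
		Dir:                   ".terraform/plugins/linux_amd64", // assumed target directory
		PluginProtocolVersion: 4,                                // assumed protocol version
	}

	constraints := discovery.ConstraintStr("~> 1.0").MustParse()
	meta, err := installer.Get("statuscake", constraints)
	if err != nil {
		// ErrorNoSuchProvider, ErrorNoSuitableVersion and ErrorNoVersionCompatible
		// are the user-actionable cases documented on Get above.
		log.Fatalf("install failed: %s", err)
	}
	log.Printf("installed %s %s at %s", meta.Name, meta.Version, meta.Path)
}
```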
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/meta.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/meta.go new file mode 100644 index 0000000..bdcebcb --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/meta.go | |||
@@ -0,0 +1,41 @@ | |||
1 | package discovery | ||
2 | |||
3 | import ( | ||
4 | "crypto/sha256" | ||
5 | "io" | ||
6 | "os" | ||
7 | ) | ||
8 | |||
9 | // PluginMeta is metadata about a plugin, useful for launching the plugin | ||
10 | // and for understanding which plugins are available. | ||
11 | type PluginMeta struct { | ||
12 | // Name is the name of the plugin, e.g. as inferred from the plugin | ||
13 | // binary's filename, or by explicit configuration. | ||
14 | Name string | ||
15 | |||
16 | // Version is the semver version of the plugin, expressed as a string | ||
17 | // that might not be semver-valid. | ||
18 | Version VersionStr | ||
19 | |||
20 | // Path is the absolute path of the executable that can be launched | ||
21 | // to provide the RPC server for this plugin. | ||
22 | Path string | ||
23 | } | ||
24 | |||
25 | // SHA256 returns a SHA256 hash of the content of the referenced executable | ||
26 | // file, or an error if the file's contents cannot be read. | ||
27 | func (m PluginMeta) SHA256() ([]byte, error) { | ||
28 | f, err := os.Open(m.Path) | ||
29 | if err != nil { | ||
30 | return nil, err | ||
31 | } | ||
32 | defer f.Close() | ||
33 | |||
34 | h := sha256.New() | ||
35 | _, err = io.Copy(h, f) | ||
36 | if err != nil { | ||
37 | return nil, err | ||
38 | } | ||
39 | |||
40 | return h.Sum(nil), nil | ||
41 | } | ||
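A hedged sketch of hashing a discovered plugin binary, for example to build the digest map that PluginRequirements.LockExecutables (later in this diff) consumes; the path is a placeholder.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"log"

	"github.com/hashicorp/terraform/plugin/discovery"
)

func main() {
	meta := discovery.PluginMeta{
		Name:    "statuscake",
		Version: "0.1.0",
		Path:    "/plugins/terraform-provider-statuscake_v0.1.0_x4", // placeholder path
	}

	digest, err := meta.SHA256()
	if err != nil {
		log.Fatalf("hashing %s: %s", meta.Path, err)
	}
	fmt.Printf("%s sha256=%s\n", meta.Name, hex.EncodeToString(digest))
}
```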
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/meta_set.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/meta_set.go new file mode 100644 index 0000000..181ea1f --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/meta_set.go | |||
@@ -0,0 +1,195 @@ | |||
1 | package discovery | ||
2 | |||
3 | // A PluginMetaSet is a set of PluginMeta objects meeting a certain criteria. | ||
4 | // | ||
5 | // Methods on this type allow filtering of the set to produce subsets that | ||
6 | // meet more restrictive criteria. | ||
7 | type PluginMetaSet map[PluginMeta]struct{} | ||
8 | |||
9 | // Add inserts the given PluginMeta into the receiving set. This is a no-op | ||
10 | // if the given meta is already present. | ||
11 | func (s PluginMetaSet) Add(p PluginMeta) { | ||
12 | s[p] = struct{}{} | ||
13 | } | ||
14 | |||
15 | // Remove removes the given PluginMeta from the receiving set. This is a no-op | ||
16 | // if the given meta is not already present. | ||
17 | func (s PluginMetaSet) Remove(p PluginMeta) { | ||
18 | delete(s, p) | ||
19 | } | ||
20 | |||
21 | // Has returns true if the given meta is in the receiving set, or false | ||
22 | // otherwise. | ||
23 | func (s PluginMetaSet) Has(p PluginMeta) bool { | ||
24 | _, ok := s[p] | ||
25 | return ok | ||
26 | } | ||
27 | |||
28 | // Count returns the number of metas in the set | ||
29 | func (s PluginMetaSet) Count() int { | ||
30 | return len(s) | ||
31 | } | ||
32 | |||
33 | // ValidateVersions returns two new PluginMetaSets, separating the metas whose | ||
34 | // versions are syntactically valid semver from those whose versions are not. | ||
35 | // | ||
36 | // Eliminating invalid versions from consideration (and possibly warning about | ||
37 | // them) is usually the first step of working with a meta set after discovery | ||
38 | // has completed. | ||
39 | func (s PluginMetaSet) ValidateVersions() (valid, invalid PluginMetaSet) { | ||
40 | valid = make(PluginMetaSet) | ||
41 | invalid = make(PluginMetaSet) | ||
42 | for p := range s { | ||
43 | if _, err := p.Version.Parse(); err == nil { | ||
44 | valid.Add(p) | ||
45 | } else { | ||
46 | invalid.Add(p) | ||
47 | } | ||
48 | } | ||
49 | return | ||
50 | } | ||
51 | |||
52 | // WithName returns the subset of metas that have the given name. | ||
53 | func (s PluginMetaSet) WithName(name string) PluginMetaSet { | ||
54 | ns := make(PluginMetaSet) | ||
55 | for p := range s { | ||
56 | if p.Name == name { | ||
57 | ns.Add(p) | ||
58 | } | ||
59 | } | ||
60 | return ns | ||
61 | } | ||
62 | |||
63 | // WithVersion returns the subset of metas that have the given version. | ||
64 | // | ||
65 | // This should be used only with the "valid" result from ValidateVersions; | ||
66 | // it will ignore any plugin metas that have invalid version strings. | ||
67 | func (s PluginMetaSet) WithVersion(version Version) PluginMetaSet { | ||
68 | ns := make(PluginMetaSet) | ||
69 | for p := range s { | ||
70 | gotVersion, err := p.Version.Parse() | ||
71 | if err != nil { | ||
72 | continue | ||
73 | } | ||
74 | if gotVersion.Equal(version) { | ||
75 | ns.Add(p) | ||
76 | } | ||
77 | } | ||
78 | return ns | ||
79 | } | ||
80 | |||
81 | // ByName groups the metas in the set by their Names, returning a map. | ||
82 | func (s PluginMetaSet) ByName() map[string]PluginMetaSet { | ||
83 | ret := make(map[string]PluginMetaSet) | ||
84 | for p := range s { | ||
85 | if _, ok := ret[p.Name]; !ok { | ||
86 | ret[p.Name] = make(PluginMetaSet) | ||
87 | } | ||
88 | ret[p.Name].Add(p) | ||
89 | } | ||
90 | return ret | ||
91 | } | ||
92 | |||
93 | // Newest returns the one item from the set that has the newest Version value. | ||
94 | // | ||
95 | // The result is meaningful only if the set is already filtered such that | ||
96 | // all of the metas have the same Name. | ||
97 | // | ||
98 | // If there isn't at least one meta in the set then this function will panic. | ||
99 | // Use Count() to ensure that there is at least one value before calling. | ||
100 | // | ||
101 | // If any of the metas have invalid version strings then this function will | ||
102 | // panic. Use ValidateVersions() first to filter out metas with invalid | ||
103 | // versions. | ||
104 | // | ||
105 | // If two metas have the same Version then one is arbitrarily chosen. This | ||
106 | // situation should be avoided by pre-filtering the set. | ||
107 | func (s PluginMetaSet) Newest() PluginMeta { | ||
108 | if len(s) == 0 { | ||
109 | panic("can't call Newest on empty PluginMetaSet") | ||
110 | } | ||
111 | |||
112 | var first = true | ||
113 | var winner PluginMeta | ||
114 | var winnerVersion Version | ||
115 | for p := range s { | ||
116 | version, err := p.Version.Parse() | ||
117 | if err != nil { | ||
118 | panic(err) | ||
119 | } | ||
120 | |||
121 | if first || version.NewerThan(winnerVersion) { | ||
122 | winner = p | ||
123 | winnerVersion = version | ||
124 | first = false | ||
125 | } | ||
126 | } | ||
127 | |||
128 | return winner | ||
129 | } | ||
130 | |||
131 | // ConstrainVersions takes a set of requirements and attempts to | ||
132 | // return a map from name to a set of metas that have the matching | ||
133 | // name and an appropriate version. | ||
134 | // | ||
135 | // If a given requirement matches *no* plugins then its PluginMetaSet | ||
136 | // in the returned map will be empty. | ||
137 | // | ||
138 | // All viable metas are returned, so the caller can apply any desired filtering | ||
139 | // to reduce down to a single option. For example, calling Newest() to obtain | ||
140 | // the highest available version. | ||
141 | // | ||
142 | // If any of the metas in the set have invalid version strings then this | ||
143 | // function will panic. Use ValidateVersions() first to filter out metas with | ||
144 | // invalid versions. | ||
145 | func (s PluginMetaSet) ConstrainVersions(reqd PluginRequirements) map[string]PluginMetaSet { | ||
146 | ret := make(map[string]PluginMetaSet) | ||
147 | for p := range s { | ||
148 | name := p.Name | ||
149 | allowedVersions, ok := reqd[name] | ||
150 | if !ok { | ||
151 | continue | ||
152 | } | ||
153 | if _, ok := ret[p.Name]; !ok { | ||
154 | ret[p.Name] = make(PluginMetaSet) | ||
155 | } | ||
156 | version, err := p.Version.Parse() | ||
157 | if err != nil { | ||
158 | panic(err) | ||
159 | } | ||
160 | if allowedVersions.Allows(version) { | ||
161 | ret[p.Name].Add(p) | ||
162 | } | ||
163 | } | ||
164 | return ret | ||
165 | } | ||
166 | |||
167 | // OverridePaths returns a new set where any existing plugins with the given | ||
168 | // names are removed and replaced with the single path given in the map. | ||
169 | // | ||
170 | // This is here only to continue to support the legacy way of overriding | ||
171 | // plugin binaries in the .terraformrc file. It treats all given plugins | ||
172 | // as pre-versioning (version 0.0.0). This mechanism will eventually be | ||
173 | // phased out, with vendor directories being the intended replacement. | ||
174 | func (s PluginMetaSet) OverridePaths(paths map[string]string) PluginMetaSet { | ||
175 | ret := make(PluginMetaSet) | ||
176 | for p := range s { | ||
177 | if _, ok := paths[p.Name]; ok { | ||
178 | // Skip plugins that we're overriding | ||
179 | continue | ||
180 | } | ||
181 | |||
182 | ret.Add(p) | ||
183 | } | ||
184 | |||
185 | // Now add the metadata for overriding plugins | ||
186 | for name, path := range paths { | ||
187 | ret.Add(PluginMeta{ | ||
188 | Name: name, | ||
189 | Version: VersionZero, | ||
190 | Path: path, | ||
191 | }) | ||
192 | } | ||
193 | |||
194 | return ret | ||
195 | } | ||
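A hedged sketch of ConstrainVersions with hand-built inputs; the plugin paths are placeholders, and the versions must be valid semver here because ConstrainVersions (like Newest) panics on unparseable versions.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/plugin/discovery"
)

func main() {
	metas := make(discovery.PluginMetaSet)
	metas.Add(discovery.PluginMeta{Name: "statuscake", Version: "0.1.0", Path: "/plugins/a"}) // placeholder path
	metas.Add(discovery.PluginMeta{Name: "statuscake", Version: "0.2.0", Path: "/plugins/b"}) // placeholder path

	reqd := discovery.PluginRequirements{
		"statuscake": &discovery.PluginConstraints{
			Versions: discovery.ConstraintStr(">= 0.2.0").MustParse(),
		},
	}

	// Only metas whose name is required and whose version is allowed survive.
	byName := metas.ConstrainVersions(reqd)
	if viable := byName["statuscake"]; viable.Count() > 0 {
		chosen := viable.Newest()
		fmt.Printf("chosen: %s %s\n", chosen.Name, chosen.Version)
	}
}
```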
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/requirements.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/requirements.go new file mode 100644 index 0000000..75430fd --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/requirements.go | |||
@@ -0,0 +1,105 @@ | |||
1 | package discovery | ||
2 | |||
3 | import ( | ||
4 | "bytes" | ||
5 | ) | ||
6 | |||
7 | // PluginRequirements describes a set of plugins (assumed to be of a consistent | ||
8 | // kind) that are required to exist and have versions within the given | ||
9 | // corresponding sets. | ||
10 | type PluginRequirements map[string]*PluginConstraints | ||
11 | |||
12 | // PluginConstraints represents an element of PluginRequirements describing | ||
13 | // the constraints for a single plugin. | ||
14 | type PluginConstraints struct { | ||
15 | // Specifies that the plugin's version must be within the given | ||
16 | // constraints. | ||
17 | Versions Constraints | ||
18 | |||
19 | // If non-nil, the hash of the on-disk plugin executable must exactly | ||
20 | // match the SHA256 hash given here. | ||
21 | SHA256 []byte | ||
22 | } | ||
23 | |||
24 | // Allows returns true if the given version is within the receiver's version | ||
25 | // constraints. | ||
26 | func (s *PluginConstraints) Allows(v Version) bool { | ||
27 | return s.Versions.Allows(v) | ||
28 | } | ||
29 | |||
30 | // AcceptsSHA256 returns true if the given executable SHA256 hash is acceptable, | ||
31 | // either because it matches the constraint or because there is no such | ||
32 | // constraint. | ||
33 | func (s *PluginConstraints) AcceptsSHA256(digest []byte) bool { | ||
34 | if s.SHA256 == nil { | ||
35 | return true | ||
36 | } | ||
37 | return bytes.Equal(s.SHA256, digest) | ||
38 | } | ||
39 | |||
40 | // Merge takes the contents of the receiver and the other given requirements | ||
41 | // object and merges them together into a single requirements structure | ||
42 | // that satisfies both sets of requirements. | ||
43 | // | ||
44 | // Note that it doesn't make sense to merge two PluginRequirements with | ||
45 | // differing required plugin SHA256 hashes, since the result will never | ||
46 | // match any plugin. | ||
47 | func (r PluginRequirements) Merge(other PluginRequirements) PluginRequirements { | ||
48 | ret := make(PluginRequirements) | ||
49 | for n, c := range r { | ||
50 | ret[n] = &PluginConstraints{ | ||
51 | Versions: Constraints{}.Append(c.Versions), | ||
52 | SHA256: c.SHA256, | ||
53 | } | ||
54 | } | ||
55 | for n, c := range other { | ||
56 | if existing, exists := ret[n]; exists { | ||
57 | ret[n].Versions = ret[n].Versions.Append(c.Versions) | ||
58 | |||
59 | if existing.SHA256 != nil { | ||
60 | if c.SHA256 != nil && !bytes.Equal(c.SHA256, existing.SHA256) { | ||
61 | // If we've been asked to merge two constraints with | ||
62 | // different SHA256 hashes then we'll produce a dummy value | ||
63 | // that can never match anything. This is a silly edge case | ||
64 | // that no reasonable caller should hit. | ||
65 | ret[n].SHA256 = []byte(invalidProviderHash) | ||
66 | } | ||
67 | } else { | ||
68 | ret[n].SHA256 = c.SHA256 // might still be nil | ||
69 | } | ||
70 | } else { | ||
71 | ret[n] = &PluginConstraints{ | ||
72 | Versions: Constraints{}.Append(c.Versions), | ||
73 | SHA256: c.SHA256, | ||
74 | } | ||
75 | } | ||
76 | } | ||
77 | return ret | ||
78 | } | ||
79 | |||
80 | // LockExecutables applies additional constraints to the receiver that | ||
81 | // require plugin executables with specific SHA256 digests. This modifies | ||
82 | // the receiver in-place, since it's intended to be applied after | ||
83 | // version constraints have been resolved. | ||
84 | // | ||
85 | // The given map must include a key for every plugin that is already | ||
86 | // required. If not, any missing keys will cause the corresponding plugin | ||
87 | // to never match, though the direct caller doesn't necessarily need to | ||
88 | // guarantee this as long as the downstream code _applying_ these constraints | ||
89 | // is able to deal with the non-match in some way. | ||
90 | func (r PluginRequirements) LockExecutables(sha256s map[string][]byte) { | ||
91 | for name, cons := range r { | ||
92 | digest := sha256s[name] | ||
93 | |||
94 | if digest == nil { | ||
95 | // Prevent any match, which will then presumably cause the | ||
96 | // downstream consumer of this requirements to report an error. | ||
97 | cons.SHA256 = []byte(invalidProviderHash) | ||
98 | continue | ||
99 | } | ||
100 | |||
101 | cons.SHA256 = digest | ||
102 | } | ||
103 | } | ||
104 | |||
105 | const invalidProviderHash = "<invalid>" | ||
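A hedged sketch of how Merge intersects constraints from two modules and how LockExecutables then pins the chosen executables; the digest below is a stand-in for a real value from PluginMeta.SHA256().

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/plugin/discovery"
)

func main() {
	rootReqs := discovery.PluginRequirements{
		"statuscake": &discovery.PluginConstraints{
			Versions: discovery.ConstraintStr("~> 0.1").MustParse(),
		},
	}
	childReqs := discovery.PluginRequirements{
		"statuscake": &discovery.PluginConstraints{
			Versions: discovery.ConstraintStr(">= 0.1.1").MustParse(),
		},
	}

	// Merge intersects the version constraints for plugins named in both sets.
	merged := rootReqs.Merge(childReqs)
	fmt.Println("constraints:", merged["statuscake"].Versions.String())

	// Once specific executables have been chosen, pin them by SHA256 digest.
	digests := map[string][]byte{
		"statuscake": {0x01, 0x02, 0x03}, // stand-in digest, normally from PluginMeta.SHA256()
	}
	merged.LockExecutables(digests)
	fmt.Println("accepts digest:", merged["statuscake"].AcceptsSHA256([]byte{0x01, 0x02, 0x03}))
}
```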
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/signature.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/signature.go new file mode 100644 index 0000000..b6686a5 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/signature.go | |||
@@ -0,0 +1,53 @@ | |||
1 | package discovery | ||
2 | |||
3 | import ( | ||
4 | "bytes" | ||
5 | "log" | ||
6 | "strings" | ||
7 | |||
8 | "golang.org/x/crypto/openpgp" | ||
9 | ) | ||
10 | |||
11 | // Verify the data using the provided openpgp detached signature and the | ||
12 | // embedded hashicorp public key. | ||
13 | func verifySig(data, sig []byte) error { | ||
14 | el, err := openpgp.ReadArmoredKeyRing(strings.NewReader(hashiPublicKey)) | ||
15 | if err != nil { | ||
16 | log.Fatal(err) | ||
17 | } | ||
18 | |||
19 | _, err = openpgp.CheckDetachedSignature(el, bytes.NewReader(data), bytes.NewReader(sig)) | ||
20 | return err | ||
21 | } | ||
22 | |||
23 | // this is the public key that signs the checksums file for releases. | ||
24 | const hashiPublicKey = `-----BEGIN PGP PUBLIC KEY BLOCK----- | ||
25 | Version: GnuPG v1 | ||
26 | |||
27 | mQENBFMORM0BCADBRyKO1MhCirazOSVwcfTr1xUxjPvfxD3hjUwHtjsOy/bT6p9f | ||
28 | W2mRPfwnq2JB5As+paL3UGDsSRDnK9KAxQb0NNF4+eVhr/EJ18s3wwXXDMjpIifq | ||
29 | fIm2WyH3G+aRLTLPIpscUNKDyxFOUbsmgXAmJ46Re1fn8uKxKRHbfa39aeuEYWFA | ||
30 | 3drdL1WoUngvED7f+RnKBK2G6ZEpO+LDovQk19xGjiMTtPJrjMjZJ3QXqPvx5wca | ||
31 | KSZLr4lMTuoTI/ZXyZy5bD4tShiZz6KcyX27cD70q2iRcEZ0poLKHyEIDAi3TM5k | ||
32 | SwbbWBFd5RNPOR0qzrb/0p9ksKK48IIfH2FvABEBAAG0K0hhc2hpQ29ycCBTZWN1 | ||
33 | cml0eSA8c2VjdXJpdHlAaGFzaGljb3JwLmNvbT6JATgEEwECACIFAlMORM0CGwMG | ||
34 | CwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEFGFLYc0j/xMyWIIAIPhcVqiQ59n | ||
35 | Jc07gjUX0SWBJAxEG1lKxfzS4Xp+57h2xxTpdotGQ1fZwsihaIqow337YHQI3q0i | ||
36 | SqV534Ms+j/tU7X8sq11xFJIeEVG8PASRCwmryUwghFKPlHETQ8jJ+Y8+1asRydi | ||
37 | psP3B/5Mjhqv/uOK+Vy3zAyIpyDOMtIpOVfjSpCplVRdtSTFWBu9Em7j5I2HMn1w | ||
38 | sJZnJgXKpybpibGiiTtmnFLOwibmprSu04rsnP4ncdC2XRD4wIjoyA+4PKgX3sCO | ||
39 | klEzKryWYBmLkJOMDdo52LttP3279s7XrkLEE7ia0fXa2c12EQ0f0DQ1tGUvyVEW | ||
40 | WmJVccm5bq25AQ0EUw5EzQEIANaPUY04/g7AmYkOMjaCZ6iTp9hB5Rsj/4ee/ln9 | ||
41 | wArzRO9+3eejLWh53FoN1rO+su7tiXJA5YAzVy6tuolrqjM8DBztPxdLBbEi4V+j | ||
42 | 2tK0dATdBQBHEh3OJApO2UBtcjaZBT31zrG9K55D+CrcgIVEHAKY8Cb4kLBkb5wM | ||
43 | skn+DrASKU0BNIV1qRsxfiUdQHZfSqtp004nrql1lbFMLFEuiY8FZrkkQ9qduixo | ||
44 | mTT6f34/oiY+Jam3zCK7RDN/OjuWheIPGj/Qbx9JuNiwgX6yRj7OE1tjUx6d8g9y | ||
45 | 0H1fmLJbb3WZZbuuGFnK6qrE3bGeY8+AWaJAZ37wpWh1p0cAEQEAAYkBHwQYAQIA | ||
46 | CQUCUw5EzQIbDAAKCRBRhS2HNI/8TJntCAClU7TOO/X053eKF1jqNW4A1qpxctVc | ||
47 | z8eTcY8Om5O4f6a/rfxfNFKn9Qyja/OG1xWNobETy7MiMXYjaa8uUx5iFy6kMVaP | ||
48 | 0BXJ59NLZjMARGw6lVTYDTIvzqqqwLxgliSDfSnqUhubGwvykANPO+93BBx89MRG | ||
49 | unNoYGXtPlhNFrAsB1VR8+EyKLv2HQtGCPSFBhrjuzH3gxGibNDDdFQLxxuJWepJ | ||
50 | EK1UbTS4ms0NgZ2Uknqn1WRU1Ki7rE4sTy68iZtWpKQXZEJa0IGnuI2sSINGcXCJ | ||
51 | oEIgXTMyCILo34Fa/C6VCm2WBgz9zZO8/rHIiQm1J5zqz0DrDwKBUM9C | ||
52 | =LYpS | ||
53 | -----END PGP PUBLIC KEY BLOCK-----` | ||
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/version.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/version.go new file mode 100644 index 0000000..8fad58d --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/version.go | |||
@@ -0,0 +1,72 @@ | |||
1 | package discovery | ||
2 | |||
3 | import ( | ||
4 | "fmt" | ||
5 | "sort" | ||
6 | |||
7 | version "github.com/hashicorp/go-version" | ||
8 | ) | ||
9 | |||
10 | const VersionZero = "0.0.0" | ||
11 | |||
12 | // A VersionStr is a string containing a possibly-invalid representation | ||
13 | // of a semver version number. Call Parse on it to obtain a real Version | ||
14 | // object, or discover that it is invalid. | ||
15 | type VersionStr string | ||
16 | |||
17 | // Parse transforms a VersionStr into a Version if it is | ||
18 | // syntactically valid. If it isn't then an error is returned instead. | ||
19 | func (s VersionStr) Parse() (Version, error) { | ||
20 | raw, err := version.NewVersion(string(s)) | ||
21 | if err != nil { | ||
22 | return Version{}, err | ||
23 | } | ||
24 | return Version{raw}, nil | ||
25 | } | ||
26 | |||
27 | // MustParse transforms a VersionStr into a Version if it is | ||
28 | // syntactically valid. If it isn't then it panics. | ||
29 | func (s VersionStr) MustParse() Version { | ||
30 | ret, err := s.Parse() | ||
31 | if err != nil { | ||
32 | panic(err) | ||
33 | } | ||
34 | return ret | ||
35 | } | ||
36 | |||
37 | // Version represents a version number that has been parsed from | ||
38 | // a semver string and known to be valid. | ||
39 | type Version struct { | ||
40 | // We wrap this here just because it avoids a proliferation of | ||
41 | // direct go-version imports all over the place, and keeps the | ||
42 | // version-processing details within this package. | ||
43 | raw *version.Version | ||
44 | } | ||
45 | |||
46 | func (v Version) String() string { | ||
47 | return v.raw.String() | ||
48 | } | ||
49 | |||
50 | func (v Version) NewerThan(other Version) bool { | ||
51 | return v.raw.GreaterThan(other.raw) | ||
52 | } | ||
53 | |||
54 | func (v Version) Equal(other Version) bool { | ||
55 | return v.raw.Equal(other.raw) | ||
56 | } | ||
57 | |||
58 | // MinorUpgradeConstraintStr returns a ConstraintStr that would permit | ||
59 | // minor upgrades relative to the receiving version. | ||
60 | func (v Version) MinorUpgradeConstraintStr() ConstraintStr { | ||
61 | segments := v.raw.Segments() | ||
62 | return ConstraintStr(fmt.Sprintf("~> %d.%d", segments[0], segments[1])) | ||
63 | } | ||
64 | |||
65 | type Versions []Version | ||
66 | |||
67 | // Sort sorts version from newest to oldest. | ||
68 | func (v Versions) Sort() { | ||
69 | sort.Slice(v, func(i, j int) bool { | ||
70 | return v[i].NewerThan(v[j]) | ||
71 | }) | ||
72 | } | ||
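A hedged sketch of the Version helpers: parse a few version strings, sort them newest-first, and derive the "~>" constraint that would allow minor upgrades from each.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/plugin/discovery"
)

func main() {
	vs := discovery.Versions{
		discovery.VersionStr("0.1.0").MustParse(),
		discovery.VersionStr("1.2.0").MustParse(),
		discovery.VersionStr("0.9.3").MustParse(),
	}

	vs.Sort() // newest first
	for _, v := range vs {
		fmt.Printf("%s  minor-upgrade constraint: %s\n", v, v.MinorUpgradeConstraintStr())
	}
}
```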
diff --git a/vendor/github.com/hashicorp/terraform/plugin/discovery/version_set.go b/vendor/github.com/hashicorp/terraform/plugin/discovery/version_set.go new file mode 100644 index 0000000..0aefd75 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/plugin/discovery/version_set.go | |||
@@ -0,0 +1,84 @@ | |||
1 | package discovery | ||
2 | |||
3 | import ( | ||
4 | "sort" | ||
5 | |||
6 | version "github.com/hashicorp/go-version" | ||
7 | ) | ||
8 | |||
9 | // A ConstraintStr is a string containing a possibly-invalid representation | ||
10 | // of a version constraint provided in configuration. Call Parse on it to | ||
11 | // obtain a real Constraint object, or discover that it is invalid. | ||
12 | type ConstraintStr string | ||
13 | |||
14 | // Parse transforms a ConstraintStr into a Constraints if it is | ||
15 | // syntactically valid. If it isn't then an error is returned instead. | ||
16 | func (s ConstraintStr) Parse() (Constraints, error) { | ||
17 | raw, err := version.NewConstraint(string(s)) | ||
18 | if err != nil { | ||
19 | return Constraints{}, err | ||
20 | } | ||
21 | return Constraints{raw}, nil | ||
22 | } | ||
23 | |||
24 | // MustParse is like Parse but it panics if the constraint string is invalid. | ||
25 | func (s ConstraintStr) MustParse() Constraints { | ||
26 | ret, err := s.Parse() | ||
27 | if err != nil { | ||
28 | panic(err) | ||
29 | } | ||
30 | return ret | ||
31 | } | ||
32 | |||
33 | // Constraints represents a set of versions which any given Version is either | ||
34 | // a member of or not. | ||
35 | type Constraints struct { | ||
36 | raw version.Constraints | ||
37 | } | ||
38 | |||
39 | // AllVersions is a Constraints containing all versions | ||
40 | var AllVersions Constraints | ||
41 | |||
42 | func init() { | ||
43 | AllVersions = Constraints{ | ||
44 | raw: make(version.Constraints, 0), | ||
45 | } | ||
46 | } | ||
47 | |||
48 | // Allows returns true if the given version is permitted by the receiving | ||
49 | // constraints set. | ||
50 | func (s Constraints) Allows(v Version) bool { | ||
51 | return s.raw.Check(v.raw) | ||
52 | } | ||
53 | |||
54 | // Append combines the receiving set with the given other set to produce | ||
55 | // a set that is the intersection of both sets, which is to say that the | ||
56 | // resulting constraints contain only the versions that are members of both. | ||
57 | func (s Constraints) Append(other Constraints) Constraints { | ||
58 | raw := make(version.Constraints, 0, len(s.raw)+len(other.raw)) | ||
59 | |||
60 | // Since "raw" is a list of constraints that remove versions from the set, | ||
61 | // "Intersection" is implemented by concatenating together those lists, | ||
62 | // thus leaving behind only the versions not removed by either list. | ||
63 | raw = append(raw, s.raw...) | ||
64 | raw = append(raw, other.raw...) | ||
65 | |||
66 | // while the set is unordered, we sort these lexically for consistent output | ||
67 | sort.Slice(raw, func(i, j int) bool { | ||
68 | return raw[i].String() < raw[j].String() | ||
69 | }) | ||
70 | |||
71 | return Constraints{raw} | ||
72 | } | ||
73 | |||
74 | // String returns a string representation of the set members as a set | ||
75 | // of range constraints. | ||
76 | func (s Constraints) String() string { | ||
77 | return s.raw.String() | ||
78 | } | ||
79 | |||
80 | // Unconstrained returns true if and only if the receiver is an empty | ||
81 | // constraint set. | ||
82 | func (s Constraints) Unconstrained() bool { | ||
83 | return len(s.raw) == 0 | ||
84 | } | ||
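A hedged sketch of combining constraint sets: Append yields the intersection, Allows checks membership, and AllVersions is the empty (unconstrained) set.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/plugin/discovery"
)

func main() {
	a := discovery.ConstraintStr(">= 0.1.0").MustParse()
	b := discovery.ConstraintStr("< 1.0.0").MustParse()

	// Append concatenates the underlying constraint lists, which intersects the sets.
	both := a.Append(b)

	v := discovery.VersionStr("0.2.5").MustParse()
	fmt.Printf("%q allows %s: %v\n", both.String(), v, both.Allows(v))
	fmt.Println("AllVersions unconstrained:", discovery.AllVersions.Unconstrained())
}
```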
diff --git a/vendor/github.com/hashicorp/terraform/terraform/context.go b/vendor/github.com/hashicorp/terraform/terraform/context.go index 306128e..a814a85 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/context.go +++ b/vendor/github.com/hashicorp/terraform/terraform/context.go | |||
@@ -57,12 +57,17 @@ type ContextOpts struct { | |||
57 | Parallelism int | 57 | Parallelism int |
58 | State *State | 58 | State *State |
59 | StateFutureAllowed bool | 59 | StateFutureAllowed bool |
60 | Providers map[string]ResourceProviderFactory | 60 | ProviderResolver ResourceProviderResolver |
61 | Provisioners map[string]ResourceProvisionerFactory | 61 | Provisioners map[string]ResourceProvisionerFactory |
62 | Shadow bool | 62 | Shadow bool |
63 | Targets []string | 63 | Targets []string |
64 | Variables map[string]interface{} | 64 | Variables map[string]interface{} |
65 | 65 | ||
66 | // If non-nil, will apply as additional constraints on the provider | ||
67 | // plugins that will be requested from the provider resolver. | ||
68 | ProviderSHA256s map[string][]byte | ||
69 | SkipProviderVerify bool | ||
70 | |||
66 | UIInput UIInput | 71 | UIInput UIInput |
67 | } | 72 | } |
68 | 73 | ||
@@ -102,6 +107,7 @@ type Context struct { | |||
102 | l sync.Mutex // Lock acquired during any task | 107 | l sync.Mutex // Lock acquired during any task |
103 | parallelSem Semaphore | 108 | parallelSem Semaphore |
104 | providerInputConfig map[string]map[string]interface{} | 109 | providerInputConfig map[string]map[string]interface{} |
110 | providerSHA256s map[string][]byte | ||
105 | runLock sync.Mutex | 111 | runLock sync.Mutex |
106 | runCond *sync.Cond | 112 | runCond *sync.Cond |
107 | runContext context.Context | 113 | runContext context.Context |
@@ -166,7 +172,6 @@ func NewContext(opts *ContextOpts) (*Context, error) { | |||
166 | // set by environment variables if necessary. This includes | 172 | // set by environment variables if necessary. This includes |
167 | // values taken from -var-file in addition. | 173 | // values taken from -var-file in addition. |
168 | variables := make(map[string]interface{}) | 174 | variables := make(map[string]interface{}) |
169 | |||
170 | if opts.Module != nil { | 175 | if opts.Module != nil { |
171 | var err error | 176 | var err error |
172 | variables, err = Variables(opts.Module, opts.Variables) | 177 | variables, err = Variables(opts.Module, opts.Variables) |
@@ -175,6 +180,23 @@ func NewContext(opts *ContextOpts) (*Context, error) { | |||
175 | } | 180 | } |
176 | } | 181 | } |
177 | 182 | ||
183 | // Bind available provider plugins to the constraints in config | ||
184 | var providers map[string]ResourceProviderFactory | ||
185 | if opts.ProviderResolver != nil { | ||
186 | var err error | ||
187 | deps := ModuleTreeDependencies(opts.Module, state) | ||
188 | reqd := deps.AllPluginRequirements() | ||
189 | if opts.ProviderSHA256s != nil && !opts.SkipProviderVerify { | ||
190 | reqd.LockExecutables(opts.ProviderSHA256s) | ||
191 | } | ||
192 | providers, err = resourceProviderFactories(opts.ProviderResolver, reqd) | ||
193 | if err != nil { | ||
194 | return nil, err | ||
195 | } | ||
196 | } else { | ||
197 | providers = make(map[string]ResourceProviderFactory) | ||
198 | } | ||
199 | |||
178 | diff := opts.Diff | 200 | diff := opts.Diff |
179 | if diff == nil { | 201 | if diff == nil { |
180 | diff = &Diff{} | 202 | diff = &Diff{} |
@@ -182,7 +204,7 @@ func NewContext(opts *ContextOpts) (*Context, error) { | |||
182 | 204 | ||
183 | return &Context{ | 205 | return &Context{ |
184 | components: &basicComponentFactory{ | 206 | components: &basicComponentFactory{ |
185 | providers: opts.Providers, | 207 | providers: providers, |
186 | provisioners: opts.Provisioners, | 208 | provisioners: opts.Provisioners, |
187 | }, | 209 | }, |
188 | destroy: opts.Destroy, | 210 | destroy: opts.Destroy, |
@@ -198,6 +220,7 @@ func NewContext(opts *ContextOpts) (*Context, error) { | |||
198 | 220 | ||
199 | parallelSem: NewSemaphore(par), | 221 | parallelSem: NewSemaphore(par), |
200 | providerInputConfig: make(map[string]map[string]interface{}), | 222 | providerInputConfig: make(map[string]map[string]interface{}), |
223 | providerSHA256s: opts.ProviderSHA256s, | ||
201 | sh: sh, | 224 | sh: sh, |
202 | }, nil | 225 | }, nil |
203 | } | 226 | } |
@@ -509,6 +532,9 @@ func (c *Context) Plan() (*Plan, error) { | |||
509 | Vars: c.variables, | 532 | Vars: c.variables, |
510 | State: c.state, | 533 | State: c.state, |
511 | Targets: c.targets, | 534 | Targets: c.targets, |
535 | |||
536 | TerraformVersion: VersionString(), | ||
537 | ProviderSHA256s: c.providerSHA256s, | ||
512 | } | 538 | } |
513 | 539 | ||
514 | var operation walkOperation | 540 | var operation walkOperation |
diff --git a/vendor/github.com/hashicorp/terraform/terraform/diff.go b/vendor/github.com/hashicorp/terraform/terraform/diff.go index a9fae6c..fd1687e 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/diff.go +++ b/vendor/github.com/hashicorp/terraform/terraform/diff.go | |||
@@ -28,7 +28,7 @@ const ( | |||
28 | // multiVal matches the index key to a flatmapped set, list or map | 28 | // multiVal matches the index key to a flatmapped set, list or map |
29 | var multiVal = regexp.MustCompile(`\.(#|%)$`) | 29 | var multiVal = regexp.MustCompile(`\.(#|%)$`) |
30 | 30 | ||
31 | // Diff trackes the changes that are necessary to apply a configuration | 31 | // Diff tracks the changes that are necessary to apply a configuration |
32 | // to an existing infrastructure. | 32 | // to an existing infrastructure. |
33 | type Diff struct { | 33 | type Diff struct { |
34 | // Modules contains all the modules that have a diff | 34 | // Modules contains all the modules that have a diff |
@@ -370,7 +370,7 @@ type InstanceDiff struct { | |||
370 | 370 | ||
371 | // Meta is a simple K/V map that is stored in a diff and persisted to | 371 | // Meta is a simple K/V map that is stored in a diff and persisted to |
372 | // plans but otherwise is completely ignored by Terraform core. It is | 372 | // plans but otherwise is completely ignored by Terraform core. It is |
373 | // mean to be used for additional data a resource may want to pass through. | 373 | // meant to be used for additional data a resource may want to pass through. |
374 | // The value here must only contain Go primitives and collections. | 374 | // The value here must only contain Go primitives and collections. |
375 | Meta map[string]interface{} | 375 | Meta map[string]interface{} |
376 | } | 376 | } |
@@ -551,7 +551,7 @@ func (d *InstanceDiff) SetDestroyDeposed(b bool) { | |||
551 | } | 551 | } |
552 | 552 | ||
553 | // These methods are properly locked, for use outside other InstanceDiff | 553 | // These methods are properly locked, for use outside other InstanceDiff |
554 | // methods but everywhere else within in the terraform package. | 554 | // methods but everywhere else within the terraform package. |
555 | // TODO refactor the locking scheme | 555 | // TODO refactor the locking scheme |
556 | func (d *InstanceDiff) SetTainted(b bool) { | 556 | func (d *InstanceDiff) SetTainted(b bool) { |
557 | d.mu.Lock() | 557 | d.mu.Lock() |
diff --git a/vendor/github.com/hashicorp/terraform/terraform/eval_diff.go b/vendor/github.com/hashicorp/terraform/terraform/eval_diff.go index 6f09526..c35f908 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/eval_diff.go +++ b/vendor/github.com/hashicorp/terraform/terraform/eval_diff.go | |||
@@ -81,6 +81,12 @@ type EvalDiff struct { | |||
81 | // Resource is needed to fetch the ignore_changes list so we can | 81 | // Resource is needed to fetch the ignore_changes list so we can |
82 | // filter user-requested ignored attributes from the diff. | 82 | // filter user-requested ignored attributes from the diff. |
83 | Resource *config.Resource | 83 | Resource *config.Resource |
84 | |||
85 | // Stub is used to flag the generated InstanceDiff as a stub. This is used to | ||
86 | // ensure that the node exists to perform interpolations and generate | ||
87 | // computed paths off of, but not as an actual diff where resources should be | ||
88 | // counted, and not as a diff that should be acted on. | ||
89 | Stub bool | ||
84 | } | 90 | } |
85 | 91 | ||
86 | // TODO: test | 92 | // TODO: test |
@@ -90,11 +96,13 @@ func (n *EvalDiff) Eval(ctx EvalContext) (interface{}, error) { | |||
90 | provider := *n.Provider | 96 | provider := *n.Provider |
91 | 97 | ||
92 | // Call pre-diff hook | 98 | // Call pre-diff hook |
93 | err := ctx.Hook(func(h Hook) (HookAction, error) { | 99 | if !n.Stub { |
94 | return h.PreDiff(n.Info, state) | 100 | err := ctx.Hook(func(h Hook) (HookAction, error) { |
95 | }) | 101 | return h.PreDiff(n.Info, state) |
96 | if err != nil { | 102 | }) |
97 | return nil, err | 103 | if err != nil { |
104 | return nil, err | ||
105 | } | ||
98 | } | 106 | } |
99 | 107 | ||
100 | // The state for the diff must never be nil | 108 | // The state for the diff must never be nil |
@@ -158,15 +166,19 @@ func (n *EvalDiff) Eval(ctx EvalContext) (interface{}, error) { | |||
158 | } | 166 | } |
159 | 167 | ||
160 | // Call post-refresh hook | 168 | // Call post-refresh hook |
161 | err = ctx.Hook(func(h Hook) (HookAction, error) { | 169 | if !n.Stub { |
162 | return h.PostDiff(n.Info, diff) | 170 | err = ctx.Hook(func(h Hook) (HookAction, error) { |
163 | }) | 171 | return h.PostDiff(n.Info, diff) |
164 | if err != nil { | 172 | }) |
165 | return nil, err | 173 | if err != nil { |
174 | return nil, err | ||
175 | } | ||
166 | } | 176 | } |
167 | 177 | ||
168 | // Update our output | 178 | // Update our output if we care |
169 | *n.OutputDiff = diff | 179 | if n.OutputDiff != nil { |
180 | *n.OutputDiff = diff | ||
181 | } | ||
170 | 182 | ||
171 | // Update the state if we care | 183 | // Update the state if we care |
172 | if n.OutputState != nil { | 184 | if n.OutputState != nil { |
diff --git a/vendor/github.com/hashicorp/terraform/terraform/graph_builder_plan.go b/vendor/github.com/hashicorp/terraform/terraform/graph_builder_plan.go index a6a3a90..4b29bbb 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/graph_builder_plan.go +++ b/vendor/github.com/hashicorp/terraform/terraform/graph_builder_plan.go | |||
@@ -117,7 +117,15 @@ func (b *PlanGraphBuilder) Steps() []GraphTransformer { | |||
117 | &CountBoundaryTransformer{}, | 117 | &CountBoundaryTransformer{}, |
118 | 118 | ||
119 | // Target | 119 | // Target |
120 | &TargetsTransformer{Targets: b.Targets}, | 120 | &TargetsTransformer{ |
121 | Targets: b.Targets, | ||
122 | |||
123 | // Resource nodes from config have not yet been expanded for | ||
124 | // "count", so we must apply targeting without indices. Exact | ||
125 | // targeting will be dealt with later when these resources | ||
126 | // DynamicExpand. | ||
127 | IgnoreIndices: true, | ||
128 | }, | ||
121 | 129 | ||
122 | // Close opened plugin connections | 130 | // Close opened plugin connections |
123 | &CloseProviderTransformer{}, | 131 | &CloseProviderTransformer{}, |
diff --git a/vendor/github.com/hashicorp/terraform/terraform/graph_builder_refresh.go b/vendor/github.com/hashicorp/terraform/terraform/graph_builder_refresh.go index 0634f96..3d3e968 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/graph_builder_refresh.go +++ b/vendor/github.com/hashicorp/terraform/terraform/graph_builder_refresh.go | |||
@@ -144,7 +144,15 @@ func (b *RefreshGraphBuilder) Steps() []GraphTransformer { | |||
144 | &ReferenceTransformer{}, | 144 | &ReferenceTransformer{}, |
145 | 145 | ||
146 | // Target | 146 | // Target |
147 | &TargetsTransformer{Targets: b.Targets}, | 147 | &TargetsTransformer{ |
148 | Targets: b.Targets, | ||
149 | |||
150 | // Resource nodes from config have not yet been expanded for | ||
151 | // "count", so we must apply targeting without indices. Exact | ||
152 | // targeting will be dealt with later when these resources | ||
153 | // DynamicExpand. | ||
154 | IgnoreIndices: true, | ||
155 | }, | ||
148 | 156 | ||
149 | // Close opened plugin connections | 157 | // Close opened plugin connections |
150 | &CloseProviderTransformer{}, | 158 | &CloseProviderTransformer{}, |
diff --git a/vendor/github.com/hashicorp/terraform/terraform/interpolate.go b/vendor/github.com/hashicorp/terraform/terraform/interpolate.go index 0def295..22ddce6 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/interpolate.go +++ b/vendor/github.com/hashicorp/terraform/terraform/interpolate.go | |||
@@ -317,9 +317,13 @@ func (i *Interpolater) valueTerraformVar( | |||
317 | n string, | 317 | n string, |
318 | v *config.TerraformVariable, | 318 | v *config.TerraformVariable, |
319 | result map[string]ast.Variable) error { | 319 | result map[string]ast.Variable) error { |
320 | if v.Field != "env" { | 320 | |
321 | // "env" is supported for backward compatibility, but it's deprecated and | ||
322 | // so we won't advertise it as being allowed in the error message. It will | ||
323 | // be removed in a future version of Terraform. | ||
324 | if v.Field != "workspace" && v.Field != "env" { | ||
321 | return fmt.Errorf( | 325 | return fmt.Errorf( |
322 | "%s: only supported key for 'terraform.X' interpolations is 'env'", n) | 326 | "%s: only supported key for 'terraform.X' interpolations is 'workspace'", n) |
323 | } | 327 | } |
324 | 328 | ||
325 | if i.Meta == nil { | 329 | if i.Meta == nil { |
diff --git a/vendor/github.com/hashicorp/terraform/terraform/module_dependencies.go b/vendor/github.com/hashicorp/terraform/terraform/module_dependencies.go new file mode 100644 index 0000000..b9f44a0 --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/terraform/module_dependencies.go | |||
@@ -0,0 +1,156 @@ | |||
1 | package terraform | ||
2 | |||
3 | import ( | ||
4 | "github.com/hashicorp/terraform/config" | ||
5 | "github.com/hashicorp/terraform/config/module" | ||
6 | "github.com/hashicorp/terraform/moduledeps" | ||
7 | "github.com/hashicorp/terraform/plugin/discovery" | ||
8 | ) | ||
9 | |||
10 | // ModuleTreeDependencies returns the dependencies of the tree of modules | ||
11 | // described by the given configuration tree and state. | ||
12 | // | ||
13 | // Both configuration and state are required because there can be resources | ||
14 | // implied by instances in the state that no longer exist in config. | ||
15 | // | ||
16 | // This function will panic if any invalid version constraint strings are | ||
17 | // present in the configuration. This is guaranteed not to happen for any | ||
18 | // configuration that has passed a call to Config.Validate(). | ||
19 | func ModuleTreeDependencies(root *module.Tree, state *State) *moduledeps.Module { | ||
20 | |||
21 | // First we walk the configuration tree to build the overall structure | ||
22 | // and capture the explicit/implicit/inherited provider dependencies. | ||
23 | deps := moduleTreeConfigDependencies(root, nil) | ||
24 | |||
25 | // Next we walk over the resources in the state to catch any additional | ||
26 | // dependencies created by existing resources that are no longer in config. | ||
27 | // Most things we find in state will already be present in 'deps', but | ||
28 | // we're interested in the rare thing that isn't. | ||
29 | moduleTreeMergeStateDependencies(deps, state) | ||
30 | |||
31 | return deps | ||
32 | } | ||
33 | |||
34 | func moduleTreeConfigDependencies(root *module.Tree, inheritProviders map[string]*config.ProviderConfig) *moduledeps.Module { | ||
35 | if root == nil { | ||
36 | // If no config is provided, we'll make a synthetic root. | ||
37 | // This isn't necessarily correct if we're called with a nil that | ||
38 | // *isn't* at the root, but in practice that can never happen. | ||
39 | return &moduledeps.Module{ | ||
40 | Name: "root", | ||
41 | } | ||
42 | } | ||
43 | |||
44 | ret := &moduledeps.Module{ | ||
45 | Name: root.Name(), | ||
46 | } | ||
47 | |||
48 | cfg := root.Config() | ||
49 | providerConfigs := cfg.ProviderConfigsByFullName() | ||
50 | |||
51 | // Provider dependencies | ||
52 | { | ||
53 | providers := make(moduledeps.Providers, len(providerConfigs)) | ||
54 | |||
55 | // Any providerConfigs elements are *explicit* provider dependencies, | ||
56 | // which is the only situation where the user might provide an actual | ||
57 | // version constraint. We'll take care of these first. | ||
58 | for fullName, pCfg := range providerConfigs { | ||
59 | inst := moduledeps.ProviderInstance(fullName) | ||
60 | versionSet := discovery.AllVersions | ||
61 | if pCfg.Version != "" { | ||
62 | versionSet = discovery.ConstraintStr(pCfg.Version).MustParse() | ||
63 | } | ||
64 | providers[inst] = moduledeps.ProviderDependency{ | ||
65 | Constraints: versionSet, | ||
66 | Reason: moduledeps.ProviderDependencyExplicit, | ||
67 | } | ||
68 | } | ||
69 | |||
70 | // Each resource in the configuration creates an *implicit* provider | ||
71 | // dependency, though we'll only record it if there isn't already | ||
72 | // an explicit dependency on the same provider. | ||
73 | for _, rc := range cfg.Resources { | ||
74 | fullName := rc.ProviderFullName() | ||
75 | inst := moduledeps.ProviderInstance(fullName) | ||
76 | if _, exists := providers[inst]; exists { | ||
77 | // Explicit dependency already present | ||
78 | continue | ||
79 | } | ||
80 | |||
81 | reason := moduledeps.ProviderDependencyImplicit | ||
82 | if _, inherited := inheritProviders[fullName]; inherited { | ||
83 | reason = moduledeps.ProviderDependencyInherited | ||
84 | } | ||
85 | |||
86 | providers[inst] = moduledeps.ProviderDependency{ | ||
87 | Constraints: discovery.AllVersions, | ||
88 | Reason: reason, | ||
89 | } | ||
90 | } | ||
91 | |||
92 | ret.Providers = providers | ||
93 | } | ||
94 | |||
95 | childInherit := make(map[string]*config.ProviderConfig) | ||
96 | for k, v := range inheritProviders { | ||
97 | childInherit[k] = v | ||
98 | } | ||
99 | for k, v := range providerConfigs { | ||
100 | childInherit[k] = v | ||
101 | } | ||
102 | for _, c := range root.Children() { | ||
103 | ret.Children = append(ret.Children, moduleTreeConfigDependencies(c, childInherit)) | ||
104 | } | ||
105 | |||
106 | return ret | ||
107 | } | ||
108 | |||
109 | func moduleTreeMergeStateDependencies(root *moduledeps.Module, state *State) { | ||
110 | if state == nil { | ||
111 | return | ||
112 | } | ||
113 | |||
114 | findModule := func(path []string) *moduledeps.Module { | ||
115 | module := root | ||
116 | for _, name := range path[1:] { // skip initial "root" | ||
117 | var next *moduledeps.Module | ||
118 | for _, cm := range module.Children { | ||
119 | if cm.Name == name { | ||
120 | next = cm | ||
121 | break | ||
122 | } | ||
123 | } | ||
124 | |||
125 | if next == nil { | ||
126 | // If we didn't find a next node, we'll need to make one | ||
127 | next = &moduledeps.Module{ | ||
128 | Name: name, | ||
129 | } | ||
130 | module.Children = append(module.Children, next) | ||
131 | } | ||
132 | |||
133 | module = next | ||
134 | } | ||
135 | return module | ||
136 | } | ||
137 | |||
138 | for _, ms := range state.Modules { | ||
139 | module := findModule(ms.Path) | ||
140 | |||
141 | for _, is := range ms.Resources { | ||
142 | fullName := config.ResourceProviderFullName(is.Type, is.Provider) | ||
143 | inst := moduledeps.ProviderInstance(fullName) | ||
144 | if _, exists := module.Providers[inst]; !exists { | ||
145 | if module.Providers == nil { | ||
146 | module.Providers = make(moduledeps.Providers) | ||
147 | } | ||
148 | module.Providers[inst] = moduledeps.ProviderDependency{ | ||
149 | Constraints: discovery.AllVersions, | ||
150 | Reason: moduledeps.ProviderDependencyFromState, | ||
151 | } | ||
152 | } | ||
153 | } | ||
154 | } | ||
155 | |||
156 | } | ||
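A small sketch of the call sequence that consumes this file, mirroring the NewContext hunk above: configuration and state are merged into a dependency tree, flattened into plugin requirements, and optionally pinned to known executables. AllPluginRequirements and LockExecutables are taken from that hunk rather than shown here.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/config/module"
	"github.com/hashicorp/terraform/terraform"
)

// pluginRequirements mirrors how NewContext uses ModuleTreeDependencies:
// config and state are merged into one tree, then flattened into a single
// set of provider plugin requirements.
func pluginRequirements(mod *module.Tree, state *terraform.State, sha256s map[string][]byte) {
	deps := terraform.ModuleTreeDependencies(mod, state)
	reqd := deps.AllPluginRequirements()

	if sha256s != nil {
		// Pin each required plugin to a known executable hash, as done when
		// a plan carries ProviderSHA256s.
		reqd.LockExecutables(sha256s)
	}

	for name := range reqd {
		fmt.Printf("provider plugin required: %q\n", name)
	}
}

func main() {
	// nil config and state produce the synthetic empty root module.
	pluginRequirements(nil, nil, nil)
}
```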
diff --git a/vendor/github.com/hashicorp/terraform/terraform/node_resource_refresh.go b/vendor/github.com/hashicorp/terraform/terraform/node_resource_refresh.go index 6ab9df7..cd4fe92 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/node_resource_refresh.go +++ b/vendor/github.com/hashicorp/terraform/terraform/node_resource_refresh.go | |||
@@ -45,13 +45,6 @@ func (n *NodeRefreshableManagedResource) DynamicExpand(ctx EvalContext) (*Graph, | |||
45 | Addr: n.ResourceAddr(), | 45 | Addr: n.ResourceAddr(), |
46 | }, | 46 | }, |
47 | 47 | ||
48 | // Switch up any node missing state to a plannable resource. This helps | ||
49 | // catch cases where data sources depend on the counts from this resource | ||
50 | // during a scale out. | ||
51 | &ResourceRefreshPlannableTransformer{ | ||
52 | State: state, | ||
53 | }, | ||
54 | |||
55 | // Add the count orphans to make sure these resources are accounted for | 48 | // Add the count orphans to make sure these resources are accounted for |
56 | // during a scale in. | 49 | // during a scale in. |
57 | &OrphanResourceCountTransformer{ | 50 | &OrphanResourceCountTransformer{ |
@@ -100,6 +93,9 @@ func (n *NodeRefreshableManagedResourceInstance) EvalTree() EvalNode { | |||
100 | // Eval info is different depending on what kind of resource this is | 93 | // Eval info is different depending on what kind of resource this is |
101 | switch mode := n.Addr.Mode; mode { | 94 | switch mode := n.Addr.Mode; mode { |
102 | case config.ManagedResourceMode: | 95 | case config.ManagedResourceMode: |
96 | if n.ResourceState == nil { | ||
97 | return n.evalTreeManagedResourceNoState() | ||
98 | } | ||
103 | return n.evalTreeManagedResource() | 99 | return n.evalTreeManagedResource() |
104 | 100 | ||
105 | case config.DataResourceMode: | 101 | case config.DataResourceMode: |
@@ -176,3 +172,88 @@ func (n *NodeRefreshableManagedResourceInstance) evalTreeManagedResource() EvalN | |||
176 | }, | 172 | }, |
177 | } | 173 | } |
178 | } | 174 | } |
175 | |||
176 | // evalTreeManagedResourceNoState produces an EvalSequence for refresh resource | ||
177 | // nodes that don't have state attached. An example of where this functionality | ||
178 | // is useful is when a resource that already exists in state is being scaled | ||
179 | // out, i.e. has its resource count increased. In this case, the scaled-out node | ||
180 | // needs to be available to other nodes (namely data sources) that may depend | ||
181 | // on it for proper interpolation, or confusing "index out of range" errors can | ||
182 | // occur. | ||
183 | // | ||
184 | // The steps in this sequence are very similar to the steps carried out in | ||
185 | // plan, but nothing is done with the diff after it is created - it is dropped, | ||
186 | // and its changes are not counted in the UI. | ||
187 | func (n *NodeRefreshableManagedResourceInstance) evalTreeManagedResourceNoState() EvalNode { | ||
188 | // Declare a bunch of variables that are used for state during | ||
189 | // evaluation. Most of these are written to by address below. | ||
190 | var provider ResourceProvider | ||
191 | var state *InstanceState | ||
192 | var resourceConfig *ResourceConfig | ||
193 | |||
194 | addr := n.NodeAbstractResource.Addr | ||
195 | stateID := addr.stateId() | ||
196 | info := &InstanceInfo{ | ||
197 | Id: stateID, | ||
198 | Type: addr.Type, | ||
199 | ModulePath: normalizeModulePath(addr.Path), | ||
200 | } | ||
201 | |||
202 | // Build the resource for eval | ||
203 | resource := &Resource{ | ||
204 | Name: addr.Name, | ||
205 | Type: addr.Type, | ||
206 | CountIndex: addr.Index, | ||
207 | } | ||
208 | if resource.CountIndex < 0 { | ||
209 | resource.CountIndex = 0 | ||
210 | } | ||
211 | |||
212 | // Determine the dependencies for the state. | ||
213 | stateDeps := n.StateReferences() | ||
214 | |||
215 | return &EvalSequence{ | ||
216 | Nodes: []EvalNode{ | ||
217 | &EvalInterpolate{ | ||
218 | Config: n.Config.RawConfig.Copy(), | ||
219 | Resource: resource, | ||
220 | Output: &resourceConfig, | ||
221 | }, | ||
222 | &EvalGetProvider{ | ||
223 | Name: n.ProvidedBy()[0], | ||
224 | Output: &provider, | ||
225 | }, | ||
226 | // Re-run validation to catch any errors we missed, e.g. type | ||
227 | // mismatches on computed values. | ||
228 | &EvalValidateResource{ | ||
229 | Provider: &provider, | ||
230 | Config: &resourceConfig, | ||
231 | ResourceName: n.Config.Name, | ||
232 | ResourceType: n.Config.Type, | ||
233 | ResourceMode: n.Config.Mode, | ||
234 | IgnoreWarnings: true, | ||
235 | }, | ||
236 | &EvalReadState{ | ||
237 | Name: stateID, | ||
238 | Output: &state, | ||
239 | }, | ||
240 | &EvalDiff{ | ||
241 | Name: stateID, | ||
242 | Info: info, | ||
243 | Config: &resourceConfig, | ||
244 | Resource: n.Config, | ||
245 | Provider: &provider, | ||
246 | State: &state, | ||
247 | OutputState: &state, | ||
248 | Stub: true, | ||
249 | }, | ||
250 | &EvalWriteState{ | ||
251 | Name: stateID, | ||
252 | ResourceType: n.Config.Type, | ||
253 | Provider: n.Config.Provider, | ||
254 | Dependencies: stateDeps, | ||
255 | State: &state, | ||
256 | }, | ||
257 | }, | ||
258 | } | ||
259 | } | ||
diff --git a/vendor/github.com/hashicorp/terraform/terraform/plan.go b/vendor/github.com/hashicorp/terraform/terraform/plan.go index ea08845..51d6652 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/plan.go +++ b/vendor/github.com/hashicorp/terraform/terraform/plan.go | |||
@@ -6,6 +6,7 @@ import ( | |||
6 | "errors" | 6 | "errors" |
7 | "fmt" | 7 | "fmt" |
8 | "io" | 8 | "io" |
9 | "log" | ||
9 | "sync" | 10 | "sync" |
10 | 11 | ||
11 | "github.com/hashicorp/terraform/config/module" | 12 | "github.com/hashicorp/terraform/config/module" |
@@ -31,6 +32,9 @@ type Plan struct { | |||
31 | Vars map[string]interface{} | 32 | Vars map[string]interface{} |
32 | Targets []string | 33 | Targets []string |
33 | 34 | ||
35 | TerraformVersion string | ||
36 | ProviderSHA256s map[string][]byte | ||
37 | |||
34 | // Backend is the backend that this plan should use and store data with. | 38 | // Backend is the backend that this plan should use and store data with. |
35 | Backend *BackendState | 39 | Backend *BackendState |
36 | 40 | ||
@@ -40,19 +44,58 @@ type Plan struct { | |||
40 | // Context returns a Context with the data encapsulated in this plan. | 44 | // Context returns a Context with the data encapsulated in this plan. |
41 | // | 45 | // |
42 | // The following fields in opts are overridden by the plan: Config, | 46 | // The following fields in opts are overridden by the plan: Config, |
43 | // Diff, State, Variables. | 47 | // Diff, Variables. |
48 | // | ||
49 | // If State is not provided, it is set from the plan. If it _is_ provided, | ||
50 | // it must be Equal to the state stored in plan, but may have a newer | ||
51 | // serial. | ||
44 | func (p *Plan) Context(opts *ContextOpts) (*Context, error) { | 52 | func (p *Plan) Context(opts *ContextOpts) (*Context, error) { |
53 | var err error | ||
54 | opts, err = p.contextOpts(opts) | ||
55 | if err != nil { | ||
56 | return nil, err | ||
57 | } | ||
58 | return NewContext(opts) | ||
59 | } | ||
60 | |||
61 | // contextOpts mutates the given base ContextOpts in place to use input | ||
62 | // objects obtained from the receiving plan. | ||
63 | func (p *Plan) contextOpts(base *ContextOpts) (*ContextOpts, error) { | ||
64 | opts := base | ||
65 | |||
45 | opts.Diff = p.Diff | 66 | opts.Diff = p.Diff |
46 | opts.Module = p.Module | 67 | opts.Module = p.Module |
47 | opts.State = p.State | ||
48 | opts.Targets = p.Targets | 68 | opts.Targets = p.Targets |
69 | opts.ProviderSHA256s = p.ProviderSHA256s | ||
70 | |||
71 | if opts.State == nil { | ||
72 | opts.State = p.State | ||
73 | } else if !opts.State.Equal(p.State) { | ||
74 | // Even if we're overriding the state, it should be logically equal | ||
75 | // to what's in plan. The only valid change to have made by the time | ||
76 | // we get here is to have incremented the serial. | ||
77 | // | ||
78 | // Due to the fact that serialization may change the representation of | ||
79 | // the state, there is little chance that these aren't actually equal. | ||
80 | // Log the error condition for reference, but continue with the state | ||
81 | // we have. | ||
82 | log.Println("[WARNING] Plan state and ContextOpts state are not equal") | ||
83 | } | ||
84 | |||
85 | thisVersion := VersionString() | ||
86 | if p.TerraformVersion != "" && p.TerraformVersion != thisVersion { | ||
87 | return nil, fmt.Errorf( | ||
88 | "plan was created with a different version of Terraform (created with %s, but running %s)", | ||
89 | p.TerraformVersion, thisVersion, | ||
90 | ) | ||
91 | } | ||
49 | 92 | ||
50 | opts.Variables = make(map[string]interface{}) | 93 | opts.Variables = make(map[string]interface{}) |
51 | for k, v := range p.Vars { | 94 | for k, v := range p.Vars { |
52 | opts.Variables[k] = v | 95 | opts.Variables[k] = v |
53 | } | 96 | } |
54 | 97 | ||
55 | return NewContext(opts) | 98 | return opts, nil |
56 | } | 99 | } |
57 | 100 | ||
58 | func (p *Plan) String() string { | 101 | func (p *Plan) String() string { |
@@ -86,7 +129,7 @@ func (p *Plan) init() { | |||
86 | // the ability in the future to change the file format if we want for any | 129 | // the ability in the future to change the file format if we want for any |
87 | // reason. | 130 | // reason. |
88 | const planFormatMagic = "tfplan" | 131 | const planFormatMagic = "tfplan" |
89 | const planFormatVersion byte = 1 | 132 | const planFormatVersion byte = 2 |
90 | 133 | ||
91 | // ReadPlan reads a plan structure out of a reader in the format that | 134 | // ReadPlan reads a plan structure out of a reader in the format that |
92 | // was written by WritePlan. | 135 | // was written by WritePlan. |
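A hedged sketch of the read-then-resume flow described above, assuming a hypothetical plan file name. ReadPlan and Context are from this file; the version check and state reconciliation happen inside contextOpts.

```go
package main

import (
	"log"
	"os"

	"github.com/hashicorp/terraform/terraform"
)

func main() {
	// Hypothetical path; the file must have been produced by WritePlan.
	f, err := os.Open("saved.tfplan")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	p, err := terraform.ReadPlan(f)
	if err != nil {
		log.Fatal(err)
	}

	// Context fails if the plan was created by a different Terraform version,
	// and logs a warning if a caller-supplied State is not Equal to the one
	// embedded in the plan.
	ctx, err := p.Context(&terraform.ContextOpts{})
	if err != nil {
		log.Fatal(err)
	}
	_ = ctx
}
```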
diff --git a/vendor/github.com/hashicorp/terraform/terraform/resource_address.go b/vendor/github.com/hashicorp/terraform/terraform/resource_address.go index a8a0c95..8badca8 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/resource_address.go +++ b/vendor/github.com/hashicorp/terraform/terraform/resource_address.go | |||
@@ -8,6 +8,7 @@ import ( | |||
8 | "strings" | 8 | "strings" |
9 | 9 | ||
10 | "github.com/hashicorp/terraform/config" | 10 | "github.com/hashicorp/terraform/config" |
11 | "github.com/hashicorp/terraform/config/module" | ||
11 | ) | 12 | ) |
12 | 13 | ||
13 | // ResourceAddress is a way of identifying an individual resource (or, | 14 | // ResourceAddress is a way of identifying an individual resource (or, |
@@ -89,6 +90,51 @@ func (r *ResourceAddress) String() string { | |||
89 | return strings.Join(result, ".") | 90 | return strings.Join(result, ".") |
90 | } | 91 | } |
91 | 92 | ||
93 | // HasResourceSpec returns true if the address has a resource spec, as | ||
94 | // defined in the documentation: | ||
95 | // https://www.terraform.io/docs/internals/resource-addressing.html | ||
96 | // In particular, this returns false if the address contains only | ||
97 | // a module path, thus addressing the entire module. | ||
98 | func (r *ResourceAddress) HasResourceSpec() bool { | ||
99 | return r.Type != "" && r.Name != "" | ||
100 | } | ||
101 | |||
102 | // WholeModuleAddress returns the resource address that refers to all | ||
103 | // resources in the same module as the receiver address. | ||
104 | func (r *ResourceAddress) WholeModuleAddress() *ResourceAddress { | ||
105 | return &ResourceAddress{ | ||
106 | Path: r.Path, | ||
107 | Index: -1, | ||
108 | InstanceTypeSet: false, | ||
109 | } | ||
110 | } | ||
111 | |||
112 | // MatchesConfig returns true if the receiver matches the given | ||
113 | // configuration resource within the given configuration module. | ||
114 | // | ||
115 | // Since resource configuration blocks represent all of the instances of | ||
116 | // a multi-instance resource, the index of the address (if any) is not | ||
117 | // considered. | ||
118 | func (r *ResourceAddress) MatchesConfig(mod *module.Tree, rc *config.Resource) bool { | ||
119 | if r.HasResourceSpec() { | ||
120 | if r.Mode != rc.Mode || r.Type != rc.Type || r.Name != rc.Name { | ||
121 | return false | ||
122 | } | ||
123 | } | ||
124 | |||
125 | addrPath := r.Path | ||
126 | cfgPath := mod.Path() | ||
127 | |||
128 | // normalize | ||
129 | if len(addrPath) == 0 { | ||
130 | addrPath = nil | ||
131 | } | ||
132 | if len(cfgPath) == 0 { | ||
133 | cfgPath = nil | ||
134 | } | ||
135 | return reflect.DeepEqual(addrPath, cfgPath) | ||
136 | } | ||
137 | |||
92 | // stateId returns the ID that this resource should be entered with | 138 | // stateId returns the ID that this resource should be entered with |
93 | // in the state. This is also used for diffs. In the future, we'd like to | 139 | // in the state. This is also used for diffs. In the future, we'd like to |
94 | // move away from this string field so I don't export this. | 140 | // move away from this string field so I don't export this. |
@@ -185,7 +231,10 @@ func ParseResourceAddress(s string) (*ResourceAddress, error) { | |||
185 | 231 | ||
186 | // not allowed to say "data." without a type following | 232 | // not allowed to say "data." without a type following |
187 | if mode == config.DataResourceMode && matches["type"] == "" { | 233 | if mode == config.DataResourceMode && matches["type"] == "" { |
188 | return nil, fmt.Errorf("must target specific data instance") | 234 | return nil, fmt.Errorf( |
235 | "invalid resource address %q: must target specific data instance", | ||
236 | s, | ||
237 | ) | ||
189 | } | 238 | } |
190 | 239 | ||
191 | return &ResourceAddress{ | 240 | return &ResourceAddress{ |
@@ -199,6 +248,75 @@ func ParseResourceAddress(s string) (*ResourceAddress, error) { | |||
199 | }, nil | 248 | }, nil |
200 | } | 249 | } |
201 | 250 | ||
251 | // ParseResourceAddressForInstanceDiff creates a ResourceAddress for a | ||
252 | // resource name as described in a module diff. | ||
253 | // | ||
254 | // For historical reasons a different addressing format is used in this | ||
255 | // context. The internal format should not be shown in the UI and instead | ||
256 | // this function should be used to translate to a ResourceAddress and | ||
257 | // then, where appropriate, use the String method to produce a canonical | ||
258 | // resource address string for display in the UI. | ||
259 | // | ||
260 | // The given path slice must be empty (or nil) for the root module, and | ||
261 | // otherwise consist of a sequence of module names traversing down into | ||
262 | // the module tree. If a non-nil path is provided, the caller must not | ||
263 | // modify its underlying array after passing it to this function. | ||
264 | func ParseResourceAddressForInstanceDiff(path []string, key string) (*ResourceAddress, error) { | ||
265 | addr, err := parseResourceAddressInternal(key) | ||
266 | if err != nil { | ||
267 | return nil, err | ||
268 | } | ||
269 | addr.Path = path | ||
270 | return addr, nil | ||
271 | } | ||
272 | |||
273 | // Contains returns true if and only if the given node is contained within | ||
274 | // the receiver. | ||
275 | // | ||
276 | // Containment is defined in terms of the module and resource hierarchy: | ||
277 | // a resource is contained within its module and any ancestor modules, | ||
278 | // an indexed resource instance is contained within the unindexed resource, etc. | ||
279 | func (addr *ResourceAddress) Contains(other *ResourceAddress) bool { | ||
280 | ourPath := addr.Path | ||
281 | givenPath := other.Path | ||
282 | if len(givenPath) < len(ourPath) { | ||
283 | return false | ||
284 | } | ||
285 | for i := range ourPath { | ||
286 | if ourPath[i] != givenPath[i] { | ||
287 | return false | ||
288 | } | ||
289 | } | ||
290 | |||
291 | // If the receiver is a whole-module address then the path prefix | ||
292 | // matching is all we need. | ||
293 | if !addr.HasResourceSpec() { | ||
294 | return true | ||
295 | } | ||
296 | |||
297 | if addr.Type != other.Type || addr.Name != other.Name || addr.Mode != other.Mode { | ||
298 | return false | ||
299 | } | ||
300 | |||
301 | if addr.Index != -1 && addr.Index != other.Index { | ||
302 | return false | ||
303 | } | ||
304 | |||
305 | if addr.InstanceTypeSet && (addr.InstanceTypeSet != other.InstanceTypeSet || addr.InstanceType != other.InstanceType) { | ||
306 | return false | ||
307 | } | ||
308 | |||
309 | return true | ||
310 | } | ||
311 | |||
312 | // Equals returns true if the receiver matches the given address. | ||
313 | // | ||
314 | // The name of this method is a misnomer, since it doesn't test for exact | ||
315 | // equality. Instead, it tests that the _specified_ parts of each | ||
316 | // address match, treating any unspecified parts as wildcards. | ||
317 | // | ||
318 | // See also Contains, which takes a more hierarchical approach to comparing | ||
319 | // addresses. | ||
202 | func (addr *ResourceAddress) Equals(raw interface{}) bool { | 320 | func (addr *ResourceAddress) Equals(raw interface{}) bool { |
203 | other, ok := raw.(*ResourceAddress) | 321 | other, ok := raw.(*ResourceAddress) |
204 | if !ok { | 322 | if !ok { |
@@ -233,6 +351,58 @@ func (addr *ResourceAddress) Equals(raw interface{}) bool { | |||
233 | modeMatch | 351 | modeMatch |
234 | } | 352 | } |
235 | 353 | ||
354 | // Less returns true if and only if the receiver should be sorted before | ||
355 | // the given address when presenting a list of resource addresses to | ||
356 | // an end-user. | ||
357 | // | ||
358 | // This sort uses lexicographic sorting for most components, but uses | ||
359 | // numeric sort for indices, thus causing index 10 to sort after | ||
360 | // index 9, rather than after index 1. | ||
361 | func (addr *ResourceAddress) Less(other *ResourceAddress) bool { | ||
362 | |||
363 | switch { | ||
364 | |||
365 | case len(addr.Path) < len(other.Path): | ||
366 | return true | ||
367 | |||
368 | case !reflect.DeepEqual(addr.Path, other.Path): | ||
369 | // If the two paths are the same length but don't match, we'll just | ||
370 | // cheat and compare the string forms since it's easier than | ||
371 | // comparing all of the path segments in turn. | ||
372 | addrStr := addr.String() | ||
373 | otherStr := other.String() | ||
374 | return addrStr < otherStr | ||
375 | |||
376 | case addr.Mode == config.DataResourceMode && other.Mode != config.DataResourceMode: | ||
377 | return true | ||
378 | |||
379 | case addr.Type < other.Type: | ||
380 | return true | ||
381 | |||
382 | case addr.Name < other.Name: | ||
383 | return true | ||
384 | |||
385 | case addr.Index < other.Index: | ||
386 | // Since "Index" is -1 for an un-indexed address, this also conveniently | ||
387 | // sorts unindexed addresses before indexed ones, should they both | ||
388 | // appear for some reason. | ||
389 | return true | ||
390 | |||
391 | case other.InstanceTypeSet && !addr.InstanceTypeSet: | ||
392 | return true | ||
393 | |||
394 | case addr.InstanceType < other.InstanceType: | ||
395 | // InstanceType is actually an enum, so this is just an arbitrary | ||
396 | // sort based on the enum numeric values, and thus not particularly | ||
397 | // meaningful. | ||
398 | return true | ||
399 | |||
400 | default: | ||
401 | return false | ||
402 | |||
403 | } | ||
404 | } | ||
405 | |||
236 | func ParseResourceIndex(s string) (int, error) { | 406 | func ParseResourceIndex(s string) (int, error) { |
237 | if s == "" { | 407 | if s == "" { |
238 | return -1, nil | 408 | return -1, nil |
@@ -275,7 +445,7 @@ func tokenizeResourceAddress(s string) (map[string]string, error) { | |||
275 | // string "aws_instance.web.tainted[1]" | 445 | // string "aws_instance.web.tainted[1]" |
276 | re := regexp.MustCompile(`\A` + | 446 | re := regexp.MustCompile(`\A` + |
277 | // "module.foo.module.bar" (optional) | 447 | // "module.foo.module.bar" (optional) |
278 | `(?P<path>(?:module\.[^.]+\.?)*)` + | 448 | `(?P<path>(?:module\.(?P<module_name>[^.]+)\.?)*)` + |
279 | // possibly "data.", if targeting is a data resource | 449 | // possibly "data.", if targeting is a data resource |
280 | `(?P<data_prefix>(?:data\.)?)` + | 450 | `(?P<data_prefix>(?:data\.)?)` + |
281 | // "aws_instance.web" (optional when module path specified) | 451 | // "aws_instance.web" (optional when module path specified) |
@@ -289,7 +459,7 @@ func tokenizeResourceAddress(s string) (map[string]string, error) { | |||
289 | groupNames := re.SubexpNames() | 459 | groupNames := re.SubexpNames() |
290 | rawMatches := re.FindAllStringSubmatch(s, -1) | 460 | rawMatches := re.FindAllStringSubmatch(s, -1) |
291 | if len(rawMatches) != 1 { | 461 | if len(rawMatches) != 1 { |
292 | return nil, fmt.Errorf("Problem parsing address: %q", s) | 462 | return nil, fmt.Errorf("invalid resource address %q", s) |
293 | } | 463 | } |
294 | 464 | ||
295 | matches := make(map[string]string) | 465 | matches := make(map[string]string) |
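A short sketch of the new address helpers, using only functions added or shown in this file: a module-only address has no resource spec, and Contains treats a missing index as a wildcard over instance indices.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/terraform/terraform"
)

func main() {
	// A whole-module address and one specific indexed instance inside it.
	modAddr, err := terraform.ParseResourceAddress("module.app")
	if err != nil {
		log.Fatal(err)
	}
	instAddr, err := terraform.ParseResourceAddress("module.app.aws_instance.web[3]")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(modAddr.HasResourceSpec())  // false: only a module path
	fmt.Println(modAddr.Contains(instAddr)) // true: containment by module path

	// An unindexed resource address contains each of its indexed instances,
	// but not the other way around.
	resAddr, _ := terraform.ParseResourceAddress("module.app.aws_instance.web")
	fmt.Println(resAddr.Contains(instAddr)) // true
	fmt.Println(instAddr.Contains(resAddr)) // false
}
```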
diff --git a/vendor/github.com/hashicorp/terraform/terraform/resource_provider.go b/vendor/github.com/hashicorp/terraform/terraform/resource_provider.go index 1a68c86..7d78f67 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/resource_provider.go +++ b/vendor/github.com/hashicorp/terraform/terraform/resource_provider.go | |||
@@ -1,5 +1,12 @@ | |||
1 | package terraform | 1 | package terraform |
2 | 2 | ||
3 | import ( | ||
4 | "fmt" | ||
5 | |||
6 | multierror "github.com/hashicorp/go-multierror" | ||
7 | "github.com/hashicorp/terraform/plugin/discovery" | ||
8 | ) | ||
9 | |||
3 | // ResourceProvider is an interface that must be implemented by any | 10 | // ResourceProvider is an interface that must be implemented by any |
4 | // resource provider: the thing that creates and manages the resources in | 11 | // resource provider: the thing that creates and manages the resources in |
5 | // a Terraform configuration. | 12 | // a Terraform configuration. |
@@ -154,6 +161,18 @@ type ResourceProvider interface { | |||
154 | ReadDataApply(*InstanceInfo, *InstanceDiff) (*InstanceState, error) | 161 | ReadDataApply(*InstanceInfo, *InstanceDiff) (*InstanceState, error) |
155 | } | 162 | } |
156 | 163 | ||
164 | // ResourceProviderError may be returned when creating a Context if the | ||
165 | // required providers cannot be satisfied. This error can then be used to | ||
166 | // format a more useful message for the user. | ||
167 | type ResourceProviderError struct { | ||
168 | Errors []error | ||
169 | } | ||
170 | |||
171 | func (e *ResourceProviderError) Error() string { | ||
172 | // use multierror to format the default output | ||
173 | return multierror.Append(nil, e.Errors...).Error() | ||
174 | } | ||
175 | |||
157 | // ResourceProviderCloser is an interface that providers that can close | 176 | // ResourceProviderCloser is an interface that providers that can close |
158 | // connections that aren't needed anymore must implement. | 177 | // connections that aren't needed anymore must implement. |
159 | type ResourceProviderCloser interface { | 178 | type ResourceProviderCloser interface { |
@@ -171,6 +190,50 @@ type DataSource struct { | |||
171 | Name string | 190 | Name string |
172 | } | 191 | } |
173 | 192 | ||
193 | // ResourceProviderResolver is an interface implemented by objects that are | ||
194 | // able to resolve a given set of resource provider version constraints | ||
195 | // into ResourceProviderFactory callbacks. | ||
196 | type ResourceProviderResolver interface { | ||
197 | // Given a constraint map, return a ResourceProviderFactory for each | ||
198 | // requested provider. If some or all of the constraints cannot be | ||
199 | // satisfied, return a non-nil slice of errors describing the problems. | ||
200 | ResolveProviders(reqd discovery.PluginRequirements) (map[string]ResourceProviderFactory, []error) | ||
201 | } | ||
202 | |||
203 | // ResourceProviderResolverFunc wraps a callback function and turns it into | ||
204 | // a ResourceProviderResolver implementation, for convenience in situations | ||
205 | // where a function and its associated closure are sufficient as a resolver | ||
206 | // implementation. | ||
207 | type ResourceProviderResolverFunc func(reqd discovery.PluginRequirements) (map[string]ResourceProviderFactory, []error) | ||
208 | |||
209 | // ResolveProviders implements ResourceProviderResolver by calling the | ||
210 | // wrapped function. | ||
211 | func (f ResourceProviderResolverFunc) ResolveProviders(reqd discovery.PluginRequirements) (map[string]ResourceProviderFactory, []error) { | ||
212 | return f(reqd) | ||
213 | } | ||
214 | |||
215 | // ResourceProviderResolverFixed returns a ResourceProviderResolver that | ||
216 | // has a fixed set of provider factories provided by the caller. The returned | ||
217 | // resolver ignores version constraints entirely and just returns the given | ||
218 | // factory for each requested provider name. | ||
219 | // | ||
220 | // This function is primarily used in tests, to provide mock providers or | ||
221 | // in-process providers under test. | ||
222 | func ResourceProviderResolverFixed(factories map[string]ResourceProviderFactory) ResourceProviderResolver { | ||
223 | return ResourceProviderResolverFunc(func(reqd discovery.PluginRequirements) (map[string]ResourceProviderFactory, []error) { | ||
224 | ret := make(map[string]ResourceProviderFactory, len(reqd)) | ||
225 | var errs []error | ||
226 | for name := range reqd { | ||
227 | if factory, exists := factories[name]; exists { | ||
228 | ret[name] = factory | ||
229 | } else { | ||
230 | errs = append(errs, fmt.Errorf("provider %q is not available", name)) | ||
231 | } | ||
232 | } | ||
233 | return ret, errs | ||
234 | }) | ||
235 | } | ||
236 | |||
174 | // ResourceProviderFactory is a function type that creates a new instance | 237 | // ResourceProviderFactory is a function type that creates a new instance |
175 | // of a resource provider. | 238 | // of a resource provider. |
176 | type ResourceProviderFactory func() (ResourceProvider, error) | 239 | type ResourceProviderFactory func() (ResourceProvider, error) |
@@ -202,3 +265,21 @@ func ProviderHasDataSource(p ResourceProvider, n string) bool { | |||
202 | 265 | ||
203 | return false | 266 | return false |
204 | } | 267 | } |
268 | |||
269 | // resourceProviderFactories matches available plugins to the given version | ||
270 | // requirements to produce a map of compatible provider plugins if possible, | ||
271 | // or an error if the currently-available plugins are insufficient. | ||
272 | // | ||
273 | // This should be called only with configurations that have passed calls | ||
274 | // to config.Validate(), which ensures that all of the given version | ||
275 | // constraints are valid. It will panic if any invalid constraints are present. | ||
276 | func resourceProviderFactories(resolver ResourceProviderResolver, reqd discovery.PluginRequirements) (map[string]ResourceProviderFactory, error) { | ||
277 | ret, errs := resolver.ResolveProviders(reqd) | ||
278 | if errs != nil { | ||
279 | return nil, &ResourceProviderError{ | ||
280 | Errors: errs, | ||
281 | } | ||
282 | } | ||
283 | |||
284 | return ret, nil | ||
285 | } | ||
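Beyond the fixed resolver, ResourceProviderResolverFunc lets a plain closure act as a resolver. A toy sketch that assumes nothing about real plugin discovery: it hands back a stub factory for every requested name, where a real implementation would match installed plugin binaries against the constraints in reqd.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/plugin/discovery"
	"github.com/hashicorp/terraform/terraform"
)

// toyResolver satisfies every request with a stub factory; it exists only to
// show the shape of the ResourceProviderResolver interface.
var toyResolver terraform.ResourceProviderResolver = terraform.ResourceProviderResolverFunc(
	func(reqd discovery.PluginRequirements) (map[string]terraform.ResourceProviderFactory, []error) {
		ret := make(map[string]terraform.ResourceProviderFactory, len(reqd))
		for name := range reqd {
			name := name // capture per iteration (pre-Go 1.22 loop semantics)
			ret[name] = func() (terraform.ResourceProvider, error) {
				return nil, fmt.Errorf("toy resolver: no real provider for %q", name)
			}
		}
		return ret, nil
	},
)

func main() {
	factories, errs := toyResolver.ResolveProviders(nil)
	fmt.Println(len(factories), errs) // 0 and no errors for an empty requirements map
}
```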
diff --git a/vendor/github.com/hashicorp/terraform/terraform/state.go b/vendor/github.com/hashicorp/terraform/terraform/state.go index 074b682..0c46194 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/state.go +++ b/vendor/github.com/hashicorp/terraform/terraform/state.go | |||
@@ -533,6 +533,43 @@ func (s *State) equal(other *State) bool { | |||
533 | return true | 533 | return true |
534 | } | 534 | } |
535 | 535 | ||
536 | // MarshalEqual is similar to Equal but provides a stronger definition of | ||
537 | // "equal", where two states are equal if and only if their serialized form | ||
538 | // is byte-for-byte identical. | ||
539 | // | ||
540 | // This is primarily useful for callers that are trying to save snapshots | ||
541 | // of state to persistent storage, allowing them to detect when a new | ||
542 | // snapshot must be taken. | ||
543 | // | ||
544 | // Note that the serial number and lineage are included in the serialized form, | ||
545 | // so it's the caller's responsibility to properly manage these attributes | ||
546 | // so that this method is only called on two states that have the same | ||
547 | // serial and lineage, unless detecting such differences is desired. | ||
548 | func (s *State) MarshalEqual(other *State) bool { | ||
549 | if s == nil && other == nil { | ||
550 | return true | ||
551 | } else if s == nil || other == nil { | ||
552 | return false | ||
553 | } | ||
554 | |||
555 | recvBuf := &bytes.Buffer{} | ||
556 | otherBuf := &bytes.Buffer{} | ||
557 | |||
558 | err := WriteState(s, recvBuf) | ||
559 | if err != nil { | ||
560 | // should never happen, since we're writing to a buffer | ||
561 | panic(err) | ||
562 | } | ||
563 | |||
564 | err = WriteState(other, otherBuf) | ||
565 | if err != nil { | ||
566 | // should never happen, since we're writing to a buffer | ||
567 | panic(err) | ||
568 | } | ||
569 | |||
570 | return bytes.Equal(recvBuf.Bytes(), otherBuf.Bytes()) | ||
571 | } | ||
572 | |||
536 | type StateAgeComparison int | 573 | type StateAgeComparison int |
537 | 574 | ||
538 | const ( | 575 | const ( |
@@ -603,6 +640,10 @@ func (s *State) SameLineage(other *State) bool { | |||
603 | // DeepCopy performs a deep copy of the state structure and returns | 640 | // DeepCopy performs a deep copy of the state structure and returns |
604 | // a new structure. | 641 | // a new structure. |
605 | func (s *State) DeepCopy() *State { | 642 | func (s *State) DeepCopy() *State { |
643 | if s == nil { | ||
644 | return nil | ||
645 | } | ||
646 | |||
606 | copy, err := copystructure.Config{Lock: true}.Copy(s) | 647 | copy, err := copystructure.Config{Lock: true}.Copy(s) |
607 | if err != nil { | 648 | if err != nil { |
608 | panic(err) | 649 | panic(err) |
@@ -611,30 +652,6 @@ func (s *State) DeepCopy() *State { | |||
611 | return copy.(*State) | 652 | return copy.(*State) |
612 | } | 653 | } |
613 | 654 | ||
614 | // IncrementSerialMaybe increments the serial number of this state | ||
615 | // if it different from the other state. | ||
616 | func (s *State) IncrementSerialMaybe(other *State) { | ||
617 | if s == nil { | ||
618 | return | ||
619 | } | ||
620 | if other == nil { | ||
621 | return | ||
622 | } | ||
623 | s.Lock() | ||
624 | defer s.Unlock() | ||
625 | |||
626 | if s.Serial > other.Serial { | ||
627 | return | ||
628 | } | ||
629 | if other.TFVersion != s.TFVersion || !s.equal(other) { | ||
630 | if other.Serial > s.Serial { | ||
631 | s.Serial = other.Serial | ||
632 | } | ||
633 | |||
634 | s.Serial++ | ||
635 | } | ||
636 | } | ||
637 | |||
638 | // FromFutureTerraform checks if this state was written by a Terraform | 655 | // FromFutureTerraform checks if this state was written by a Terraform |
639 | // version from the future. | 656 | // version from the future. |
640 | func (s *State) FromFutureTerraform() bool { | 657 | func (s *State) FromFutureTerraform() bool { |
@@ -660,6 +677,7 @@ func (s *State) init() { | |||
660 | if s.Version == 0 { | 677 | if s.Version == 0 { |
661 | s.Version = StateVersion | 678 | s.Version = StateVersion |
662 | } | 679 | } |
680 | |||
663 | if s.moduleByPath(rootModulePath) == nil { | 681 | if s.moduleByPath(rootModulePath) == nil { |
664 | s.addModule(rootModulePath) | 682 | s.addModule(rootModulePath) |
665 | } | 683 | } |
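A minimal sketch of the stronger equality check added above, assuming NewState and the exported Serial and Lineage fields behave as in the rest of this package: a deep copy serializes identically, and bumping the serial alone is enough to make the snapshots differ.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/terraform"
)

func main() {
	orig := terraform.NewState()
	orig.Lineage = "example-lineage" // lineage and serial are part of the serialized form

	// DeepCopy now tolerates nil receivers and returns an identical snapshot.
	copied := orig.DeepCopy()
	fmt.Println(orig.MarshalEqual(copied)) // expected true: byte-for-byte identical

	// Changing only the serial changes the serialized form.
	copied.Serial++
	fmt.Println(orig.MarshalEqual(copied)) // expected false
}
```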
diff --git a/vendor/github.com/hashicorp/terraform/terraform/test_failure b/vendor/github.com/hashicorp/terraform/terraform/test_failure new file mode 100644 index 0000000..5d3ad1a --- /dev/null +++ b/vendor/github.com/hashicorp/terraform/terraform/test_failure | |||
@@ -0,0 +1,9 @@ | |||
1 | --- FAIL: TestContext2Plan_moduleProviderInherit (0.01s) | ||
2 | context_plan_test.go:552: bad: []string{"child"} | ||
3 | map[string]dag.Vertex{} | ||
4 | "module.middle.null" | ||
5 | map[string]dag.Vertex{} | ||
6 | "module.middle.module.inner.null" | ||
7 | map[string]dag.Vertex{} | ||
8 | "aws" | ||
9 | FAIL | ||
diff --git a/vendor/github.com/hashicorp/terraform/terraform/transform_resource_refresh_plannable.go b/vendor/github.com/hashicorp/terraform/terraform/transform_resource_refresh_plannable.go deleted file mode 100644 index 35358a3..0000000 --- a/vendor/github.com/hashicorp/terraform/terraform/transform_resource_refresh_plannable.go +++ /dev/null | |||
@@ -1,55 +0,0 @@ | |||
1 | package terraform | ||
2 | |||
3 | import ( | ||
4 | "fmt" | ||
5 | "log" | ||
6 | ) | ||
7 | |||
8 | // ResourceRefreshPlannableTransformer is a GraphTransformer that replaces any | ||
9 | // nodes that exist in config but don't yet have state with | ||
10 | // NodePlannableResourceInstance. | ||
11 | // | ||
12 | // This transformer is used when expanding count on managed resource nodes | ||
13 | // during the refresh phase to ensure that data sources that have | ||
14 | // interpolations that depend on resources existing in the graph can be walked | ||
15 | // properly. | ||
16 | type ResourceRefreshPlannableTransformer struct { | ||
17 | // The full global state. | ||
18 | State *State | ||
19 | } | ||
20 | |||
21 | // Transform implements GraphTransformer for | ||
22 | // ResourceRefreshPlannableTransformer. | ||
23 | func (t *ResourceRefreshPlannableTransformer) Transform(g *Graph) error { | ||
24 | nextVertex: | ||
25 | for _, v := range g.Vertices() { | ||
26 | addr := v.(*NodeRefreshableManagedResourceInstance).Addr | ||
27 | |||
28 | // Find the state for this address, if there is one | ||
29 | filter := &StateFilter{State: t.State} | ||
30 | results, err := filter.Filter(addr.String()) | ||
31 | if err != nil { | ||
32 | return err | ||
33 | } | ||
34 | |||
35 | // Check to see if we have a state for this resource. If we do, skip this | ||
36 | // node. | ||
37 | for _, result := range results { | ||
38 | if _, ok := result.Value.(*ResourceState); ok { | ||
39 | continue nextVertex | ||
40 | } | ||
41 | } | ||
42 | // If we don't, convert this resource to a NodePlannableResourceInstance node | ||
43 | // with all of the data we need to make it happen. | ||
44 | log.Printf("[TRACE] No state for %s, converting to NodePlannableResourceInstance", addr.String()) | ||
45 | new := &NodePlannableResourceInstance{ | ||
46 | NodeAbstractResource: v.(*NodeRefreshableManagedResourceInstance).NodeAbstractResource, | ||
47 | } | ||
48 | // Replace the node in the graph | ||
49 | if !g.Replace(v, new) { | ||
50 | return fmt.Errorf("ResourceRefreshPlannableTransformer: Could not replace node %#v with %#v", v, new) | ||
51 | } | ||
52 | } | ||
53 | |||
54 | return nil | ||
55 | } | ||
diff --git a/vendor/github.com/hashicorp/terraform/terraform/transform_targets.go b/vendor/github.com/hashicorp/terraform/terraform/transform_targets.go index 125f9e3..4f117b4 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/transform_targets.go +++ b/vendor/github.com/hashicorp/terraform/terraform/transform_targets.go | |||
@@ -41,6 +41,12 @@ type TargetsTransformer struct { | |||
41 | // that already have the targets parsed | 41 | // that already have the targets parsed |
42 | ParsedTargets []ResourceAddress | 42 | ParsedTargets []ResourceAddress |
43 | 43 | ||
44 | // If set, the index portions of resource addresses will be ignored | ||
45 | // for comparison. This is used when transforming a graph where | ||
46 | // counted resources have not yet been expanded, since otherwise | ||
47 | // the unexpanded nodes (which never have indices) would not match. | ||
48 | IgnoreIndices bool | ||
49 | |||
44 | // Set to true when we're in a `terraform destroy` or a | 50 | // Set to true when we're in a `terraform destroy` or a |
45 | // `terraform plan -destroy` | 51 | // `terraform plan -destroy` |
46 | Destroy bool | 52 | Destroy bool |
@@ -199,7 +205,12 @@ func (t *TargetsTransformer) nodeIsTarget( | |||
199 | 205 | ||
200 | addr := r.ResourceAddr() | 206 | addr := r.ResourceAddr() |
201 | for _, targetAddr := range addrs { | 207 | for _, targetAddr := range addrs { |
202 | if targetAddr.Equals(addr) { | 208 | if t.IgnoreIndices { |
209 | // targetAddr is not a pointer, so we can safely mutate it without | ||
210 | // interfering with references elsewhere. | ||
211 | targetAddr.Index = -1 | ||
212 | } | ||
213 | if targetAddr.Contains(addr) { | ||
203 | return true | 214 | return true |
204 | } | 215 | } |
205 | } | 216 | } |
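The effect of IgnoreIndices can be seen with the address helpers added earlier in this changeset: before count expansion the graph node has no index, so an indexed target only matches once its index is cleared.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/terraform"
)

func main() {
	// The user targeted one specific instance...
	target, _ := terraform.ParseResourceAddress("aws_instance.web[1]")

	// ...but before count expansion the graph only holds the unexpanded node.
	node, _ := terraform.ParseResourceAddress("aws_instance.web")

	fmt.Println(target.Contains(node)) // false: the index does not match

	// With IgnoreIndices the transformer clears the index before comparing,
	// which is what lets the unexpanded node survive targeting.
	target.Index = -1
	fmt.Println(target.Contains(node)) // true
}
```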
diff --git a/vendor/github.com/hashicorp/terraform/terraform/util.go b/vendor/github.com/hashicorp/terraform/terraform/util.go index f41f0d7..752241a 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/util.go +++ b/vendor/github.com/hashicorp/terraform/terraform/util.go | |||
@@ -2,7 +2,8 @@ package terraform | |||
2 | 2 | ||
3 | import ( | 3 | import ( |
4 | "sort" | 4 | "sort" |
5 | "strings" | 5 | |
6 | "github.com/hashicorp/terraform/config" | ||
6 | ) | 7 | ) |
7 | 8 | ||
8 | // Semaphore is a wrapper around a channel to provide | 9 | // Semaphore is a wrapper around a channel to provide |
@@ -47,21 +48,8 @@ func (s Semaphore) Release() { | |||
47 | } | 48 | } |
48 | } | 49 | } |
49 | 50 | ||
50 | // resourceProvider returns the provider name for the given type. | 51 | func resourceProvider(resourceType, explicitProvider string) string { |
51 | func resourceProvider(t, alias string) string { | 52 | return config.ResourceProviderFullName(resourceType, explicitProvider) |
52 | if alias != "" { | ||
53 | return alias | ||
54 | } | ||
55 | |||
56 | idx := strings.IndexRune(t, '_') | ||
57 | if idx == -1 { | ||
58 | // If no underscores, the resource name is assumed to be | ||
59 | // also the provider name, e.g. if the provider exposes | ||
60 | // only a single resource of each type. | ||
61 | return t | ||
62 | } | ||
63 | |||
64 | return t[:idx] | ||
65 | } | 53 | } |
66 | 54 | ||
67 | // strSliceContains checks if a given string is contained in a slice | 55 | // strSliceContains checks if a given string is contained in a slice |
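For reference, a hedged sketch of the helper this now delegates to. The first result follows directly from the removed inline code above; the pass-through of an explicit provider name is assumed from that old behavior and may differ in detail.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/config"
)

func main() {
	// With no explicit provider, the name is derived from the resource type's
	// prefix, exactly as the removed inline helper did.
	fmt.Println(config.ResourceProviderFullName("aws_instance", "")) // "aws"

	// An explicit (possibly aliased) provider is expected to win over the
	// derived name; this mirrors the old behavior and is an assumption here.
	fmt.Println(config.ResourceProviderFullName("aws_instance", "aws.secondary"))
}
```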
diff --git a/vendor/github.com/hashicorp/terraform/terraform/version.go b/vendor/github.com/hashicorp/terraform/terraform/version.go index cdfb8fb..d61b11e 100644 --- a/vendor/github.com/hashicorp/terraform/terraform/version.go +++ b/vendor/github.com/hashicorp/terraform/terraform/version.go | |||
@@ -7,12 +7,12 @@ import ( | |||
7 | ) | 7 | ) |
8 | 8 | ||
9 | // The main version number that is being run at the moment. | 9 | // The main version number that is being run at the moment. |
10 | const Version = "0.9.8" | 10 | const Version = "0.10.0" |
11 | 11 | ||
12 | // A pre-release marker for the version. If this is "" (empty string) | 12 | // A pre-release marker for the version. If this is "" (empty string) |
13 | // then it means that it is a final release. Otherwise, this is a pre-release | 13 | // then it means that it is a final release. Otherwise, this is a pre-release |
14 | // such as "dev" (in development), "beta", "rc1", etc. | 14 | // such as "dev" (in development), "beta", "rc1", etc. |
15 | var VersionPrerelease = "" | 15 | var VersionPrerelease = "dev" |
16 | 16 | ||
17 | // SemVersion is an instance of version.Version. This has the secondary | 17 | // SemVersion is an instance of version.Version. This has the secondary |
18 | // benefit of verifying during tests and init time that our version is a | 18 | // benefit of verifying during tests and init time that our version is a |
diff --git a/vendor/golang.org/x/crypto/cast5/cast5.go b/vendor/golang.org/x/crypto/cast5/cast5.go new file mode 100644 index 0000000..0b4af37 --- /dev/null +++ b/vendor/golang.org/x/crypto/cast5/cast5.go | |||
@@ -0,0 +1,526 @@ | |||
1 | // Copyright 2010 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | // Package cast5 implements CAST5, as defined in RFC 2144. CAST5 is a common | ||
6 | // OpenPGP cipher. | ||
7 | package cast5 // import "golang.org/x/crypto/cast5" | ||
8 | |||
9 | import "errors" | ||
10 | |||
11 | const BlockSize = 8 | ||
12 | const KeySize = 16 | ||
13 | |||
14 | type Cipher struct { | ||
15 | masking [16]uint32 | ||
16 | rotate [16]uint8 | ||
17 | } | ||
18 | |||
19 | func NewCipher(key []byte) (c *Cipher, err error) { | ||
20 | if len(key) != KeySize { | ||
21 | return nil, errors.New("CAST5: keys must be 16 bytes") | ||
22 | } | ||
23 | |||
24 | c = new(Cipher) | ||
25 | c.keySchedule(key) | ||
26 | return | ||
27 | } | ||
28 | |||
29 | func (c *Cipher) BlockSize() int { | ||
30 | return BlockSize | ||
31 | } | ||
32 | |||
33 | func (c *Cipher) Encrypt(dst, src []byte) { | ||
34 | l := uint32(src[0])<<24 | uint32(src[1])<<16 | uint32(src[2])<<8 | uint32(src[3]) | ||
35 | r := uint32(src[4])<<24 | uint32(src[5])<<16 | uint32(src[6])<<8 | uint32(src[7]) | ||
36 | |||
37 | l, r = r, l^f1(r, c.masking[0], c.rotate[0]) | ||
38 | l, r = r, l^f2(r, c.masking[1], c.rotate[1]) | ||
39 | l, r = r, l^f3(r, c.masking[2], c.rotate[2]) | ||
40 | l, r = r, l^f1(r, c.masking[3], c.rotate[3]) | ||
41 | |||
42 | l, r = r, l^f2(r, c.masking[4], c.rotate[4]) | ||
43 | l, r = r, l^f3(r, c.masking[5], c.rotate[5]) | ||
44 | l, r = r, l^f1(r, c.masking[6], c.rotate[6]) | ||
45 | l, r = r, l^f2(r, c.masking[7], c.rotate[7]) | ||
46 | |||
47 | l, r = r, l^f3(r, c.masking[8], c.rotate[8]) | ||
48 | l, r = r, l^f1(r, c.masking[9], c.rotate[9]) | ||
49 | l, r = r, l^f2(r, c.masking[10], c.rotate[10]) | ||
50 | l, r = r, l^f3(r, c.masking[11], c.rotate[11]) | ||
51 | |||
52 | l, r = r, l^f1(r, c.masking[12], c.rotate[12]) | ||
53 | l, r = r, l^f2(r, c.masking[13], c.rotate[13]) | ||
54 | l, r = r, l^f3(r, c.masking[14], c.rotate[14]) | ||
55 | l, r = r, l^f1(r, c.masking[15], c.rotate[15]) | ||
56 | |||
57 | dst[0] = uint8(r >> 24) | ||
58 | dst[1] = uint8(r >> 16) | ||
59 | dst[2] = uint8(r >> 8) | ||
60 | dst[3] = uint8(r) | ||
61 | dst[4] = uint8(l >> 24) | ||
62 | dst[5] = uint8(l >> 16) | ||
63 | dst[6] = uint8(l >> 8) | ||
64 | dst[7] = uint8(l) | ||
65 | } | ||
66 | |||
67 | func (c *Cipher) Decrypt(dst, src []byte) { | ||
68 | l := uint32(src[0])<<24 | uint32(src[1])<<16 | uint32(src[2])<<8 | uint32(src[3]) | ||
69 | r := uint32(src[4])<<24 | uint32(src[5])<<16 | uint32(src[6])<<8 | uint32(src[7]) | ||
70 | |||
71 | l, r = r, l^f1(r, c.masking[15], c.rotate[15]) | ||
72 | l, r = r, l^f3(r, c.masking[14], c.rotate[14]) | ||
73 | l, r = r, l^f2(r, c.masking[13], c.rotate[13]) | ||
74 | l, r = r, l^f1(r, c.masking[12], c.rotate[12]) | ||
75 | |||
76 | l, r = r, l^f3(r, c.masking[11], c.rotate[11]) | ||
77 | l, r = r, l^f2(r, c.masking[10], c.rotate[10]) | ||
78 | l, r = r, l^f1(r, c.masking[9], c.rotate[9]) | ||
79 | l, r = r, l^f3(r, c.masking[8], c.rotate[8]) | ||
80 | |||
81 | l, r = r, l^f2(r, c.masking[7], c.rotate[7]) | ||
82 | l, r = r, l^f1(r, c.masking[6], c.rotate[6]) | ||
83 | l, r = r, l^f3(r, c.masking[5], c.rotate[5]) | ||
84 | l, r = r, l^f2(r, c.masking[4], c.rotate[4]) | ||
85 | |||
86 | l, r = r, l^f1(r, c.masking[3], c.rotate[3]) | ||
87 | l, r = r, l^f3(r, c.masking[2], c.rotate[2]) | ||
88 | l, r = r, l^f2(r, c.masking[1], c.rotate[1]) | ||
89 | l, r = r, l^f1(r, c.masking[0], c.rotate[0]) | ||
90 | |||
91 | dst[0] = uint8(r >> 24) | ||
92 | dst[1] = uint8(r >> 16) | ||
93 | dst[2] = uint8(r >> 8) | ||
94 | dst[3] = uint8(r) | ||
95 | dst[4] = uint8(l >> 24) | ||
96 | dst[5] = uint8(l >> 16) | ||
97 | dst[6] = uint8(l >> 8) | ||
98 | dst[7] = uint8(l) | ||
99 | } | ||
100 | |||
101 | type keyScheduleA [4][7]uint8 | ||
102 | type keyScheduleB [4][5]uint8 | ||
103 | |||
104 | // keyScheduleRound contains the magic values for a round of the key schedule. | ||
105 | // The keyScheduleA deals with the lines like: | ||
106 | // z0z1z2z3 = x0x1x2x3 ^ S5[xD] ^ S6[xF] ^ S7[xC] ^ S8[xE] ^ S7[x8] | ||
107 | // Conceptually, both x and z are in the same array, x first. The first | ||
108 | // element describes which word of this array gets written to and the | ||
109 | // second, which word gets read. So, for the line above, it's "4, 0", because | ||
110 | // it's writing to the first word of z, which, being after x, is word 4, and | ||
111 | // reading from the first word of x: word 0. | ||
112 | // | ||
113 | // Next are the indexes into the S-boxes. Now the array is treated as bytes. So | ||
114 | // "xD" is 0xd. The first byte of z is written as "16 + 0", just to be clear | ||
115 | // that it's z that we're indexing. | ||
116 | // | ||
117 | // keyScheduleB deals with lines like: | ||
118 | // K1 = S5[z8] ^ S6[z9] ^ S7[z7] ^ S8[z6] ^ S5[z2] | ||
119 | // "K1" is ignored because key words are always written in order. So the five | ||
120 | // elements are the S-box indexes. They use the same form as in keyScheduleA, | ||
121 | // above. | ||
122 | |||
123 | type keyScheduleRound struct{} | ||
124 | type keySchedule []keyScheduleRound | ||
125 | |||
126 | var schedule = []struct { | ||
127 | a keyScheduleA | ||
128 | b keyScheduleB | ||
129 | }{ | ||
130 | { | ||
131 | keyScheduleA{ | ||
132 | {4, 0, 0xd, 0xf, 0xc, 0xe, 0x8}, | ||
133 | {5, 2, 16 + 0, 16 + 2, 16 + 1, 16 + 3, 0xa}, | ||
134 | {6, 3, 16 + 7, 16 + 6, 16 + 5, 16 + 4, 9}, | ||
135 | {7, 1, 16 + 0xa, 16 + 9, 16 + 0xb, 16 + 8, 0xb}, | ||
136 | }, | ||
137 | keyScheduleB{ | ||
138 | {16 + 8, 16 + 9, 16 + 7, 16 + 6, 16 + 2}, | ||
139 | {16 + 0xa, 16 + 0xb, 16 + 5, 16 + 4, 16 + 6}, | ||
140 | {16 + 0xc, 16 + 0xd, 16 + 3, 16 + 2, 16 + 9}, | ||
141 | {16 + 0xe, 16 + 0xf, 16 + 1, 16 + 0, 16 + 0xc}, | ||
142 | }, | ||
143 | }, | ||
144 | { | ||
145 | keyScheduleA{ | ||
146 | {0, 6, 16 + 5, 16 + 7, 16 + 4, 16 + 6, 16 + 0}, | ||
147 | {1, 4, 0, 2, 1, 3, 16 + 2}, | ||
148 | {2, 5, 7, 6, 5, 4, 16 + 1}, | ||
149 | {3, 7, 0xa, 9, 0xb, 8, 16 + 3}, | ||
150 | }, | ||
151 | keyScheduleB{ | ||
152 | {3, 2, 0xc, 0xd, 8}, | ||
153 | {1, 0, 0xe, 0xf, 0xd}, | ||
154 | {7, 6, 8, 9, 3}, | ||
155 | {5, 4, 0xa, 0xb, 7}, | ||
156 | }, | ||
157 | }, | ||
158 | { | ||
159 | keyScheduleA{ | ||
160 | {4, 0, 0xd, 0xf, 0xc, 0xe, 8}, | ||
161 | {5, 2, 16 + 0, 16 + 2, 16 + 1, 16 + 3, 0xa}, | ||
162 | {6, 3, 16 + 7, 16 + 6, 16 + 5, 16 + 4, 9}, | ||
163 | {7, 1, 16 + 0xa, 16 + 9, 16 + 0xb, 16 + 8, 0xb}, | ||
164 | }, | ||
165 | keyScheduleB{ | ||
166 | {16 + 3, 16 + 2, 16 + 0xc, 16 + 0xd, 16 + 9}, | ||
167 | {16 + 1, 16 + 0, 16 + 0xe, 16 + 0xf, 16 + 0xc}, | ||
168 | {16 + 7, 16 + 6, 16 + 8, 16 + 9, 16 + 2}, | ||
169 | {16 + 5, 16 + 4, 16 + 0xa, 16 + 0xb, 16 + 6}, | ||
170 | }, | ||
171 | }, | ||
172 | { | ||
173 | keyScheduleA{ | ||
174 | {0, 6, 16 + 5, 16 + 7, 16 + 4, 16 + 6, 16 + 0}, | ||
175 | {1, 4, 0, 2, 1, 3, 16 + 2}, | ||
176 | {2, 5, 7, 6, 5, 4, 16 + 1}, | ||
177 | {3, 7, 0xa, 9, 0xb, 8, 16 + 3}, | ||
178 | }, | ||
179 | keyScheduleB{ | ||
180 | {8, 9, 7, 6, 3}, | ||
181 | {0xa, 0xb, 5, 4, 7}, | ||
182 | {0xc, 0xd, 3, 2, 8}, | ||
183 | {0xe, 0xf, 1, 0, 0xd}, | ||
184 | }, | ||
185 | }, | ||
186 | } | ||
187 | |||
188 | func (c *Cipher) keySchedule(in []byte) { | ||
189 | var t [8]uint32 | ||
190 | var k [32]uint32 | ||
191 | |||
192 | for i := 0; i < 4; i++ { | ||
193 | j := i * 4 | ||
194 | t[i] = uint32(in[j])<<24 | uint32(in[j+1])<<16 | uint32(in[j+2])<<8 | uint32(in[j+3]) | ||
195 | } | ||
196 | |||
197 | x := []byte{6, 7, 4, 5} | ||
198 | ki := 0 | ||
199 | |||
200 | for half := 0; half < 2; half++ { | ||
201 | for _, round := range schedule { | ||
202 | for j := 0; j < 4; j++ { | ||
203 | var a [7]uint8 | ||
204 | copy(a[:], round.a[j][:]) | ||
205 | w := t[a[1]] | ||
206 | w ^= sBox[4][(t[a[2]>>2]>>(24-8*(a[2]&3)))&0xff] | ||
207 | w ^= sBox[5][(t[a[3]>>2]>>(24-8*(a[3]&3)))&0xff] | ||
208 | w ^= sBox[6][(t[a[4]>>2]>>(24-8*(a[4]&3)))&0xff] | ||
209 | w ^= sBox[7][(t[a[5]>>2]>>(24-8*(a[5]&3)))&0xff] | ||
210 | w ^= sBox[x[j]][(t[a[6]>>2]>>(24-8*(a[6]&3)))&0xff] | ||
211 | t[a[0]] = w | ||
212 | } | ||
213 | |||
214 | for j := 0; j < 4; j++ { | ||
215 | var b [5]uint8 | ||
216 | copy(b[:], round.b[j][:]) | ||
217 | w := sBox[4][(t[b[0]>>2]>>(24-8*(b[0]&3)))&0xff] | ||
218 | w ^= sBox[5][(t[b[1]>>2]>>(24-8*(b[1]&3)))&0xff] | ||
219 | w ^= sBox[6][(t[b[2]>>2]>>(24-8*(b[2]&3)))&0xff] | ||
220 | w ^= sBox[7][(t[b[3]>>2]>>(24-8*(b[3]&3)))&0xff] | ||
221 | w ^= sBox[4+j][(t[b[4]>>2]>>(24-8*(b[4]&3)))&0xff] | ||
222 | k[ki] = w | ||
223 | ki++ | ||
224 | } | ||
225 | } | ||
226 | } | ||
227 | |||
228 | for i := 0; i < 16; i++ { | ||
229 | c.masking[i] = k[i] | ||
230 | c.rotate[i] = uint8(k[16+i] & 0x1f) | ||
231 | } | ||
232 | } | ||
233 | |||
234 | // These are the three 'f' functions. See RFC 2144, section 2.2. | ||
235 | func f1(d, m uint32, r uint8) uint32 { | ||
236 | t := m + d | ||
237 | I := (t << r) | (t >> (32 - r)) | ||
238 | return ((sBox[0][I>>24] ^ sBox[1][(I>>16)&0xff]) - sBox[2][(I>>8)&0xff]) + sBox[3][I&0xff] | ||
239 | } | ||
240 | |||
241 | func f2(d, m uint32, r uint8) uint32 { | ||
242 | t := m ^ d | ||
243 | I := (t << r) | (t >> (32 - r)) | ||
244 | return ((sBox[0][I>>24] - sBox[1][(I>>16)&0xff]) + sBox[2][(I>>8)&0xff]) ^ sBox[3][I&0xff] | ||
245 | } | ||
246 | |||
247 | func f3(d, m uint32, r uint8) uint32 { | ||
248 | t := m - d | ||
249 | I := (t << r) | (t >> (32 - r)) | ||
250 | return ((sBox[0][I>>24] + sBox[1][(I>>16)&0xff]) ^ sBox[2][(I>>8)&0xff]) - sBox[3][I&0xff] | ||
251 | } | ||
252 | |||
253 | var sBox = [8][256]uint32{ | ||
254 | { | ||
255 | 0x30fb40d4, 0x9fa0ff0b, 0x6beccd2f, 0x3f258c7a, 0x1e213f2f, 0x9c004dd3, 0x6003e540, 0xcf9fc949, | ||
256 | 0xbfd4af27, 0x88bbbdb5, 0xe2034090, 0x98d09675, 0x6e63a0e0, 0x15c361d2, 0xc2e7661d, 0x22d4ff8e, | ||
257 | 0x28683b6f, 0xc07fd059, 0xff2379c8, 0x775f50e2, 0x43c340d3, 0xdf2f8656, 0x887ca41a, 0xa2d2bd2d, | ||
258 | 0xa1c9e0d6, 0x346c4819, 0x61b76d87, 0x22540f2f, 0x2abe32e1, 0xaa54166b, 0x22568e3a, 0xa2d341d0, | ||
259 | 0x66db40c8, 0xa784392f, 0x004dff2f, 0x2db9d2de, 0x97943fac, 0x4a97c1d8, 0x527644b7, 0xb5f437a7, | ||
260 | 0xb82cbaef, 0xd751d159, 0x6ff7f0ed, 0x5a097a1f, 0x827b68d0, 0x90ecf52e, 0x22b0c054, 0xbc8e5935, | ||
261 | 0x4b6d2f7f, 0x50bb64a2, 0xd2664910, 0xbee5812d, 0xb7332290, 0xe93b159f, 0xb48ee411, 0x4bff345d, | ||
262 | 0xfd45c240, 0xad31973f, 0xc4f6d02e, 0x55fc8165, 0xd5b1caad, 0xa1ac2dae, 0xa2d4b76d, 0xc19b0c50, | ||
263 | 0x882240f2, 0x0c6e4f38, 0xa4e4bfd7, 0x4f5ba272, 0x564c1d2f, 0xc59c5319, 0xb949e354, 0xb04669fe, | ||
264 | 0xb1b6ab8a, 0xc71358dd, 0x6385c545, 0x110f935d, 0x57538ad5, 0x6a390493, 0xe63d37e0, 0x2a54f6b3, | ||
265 | 0x3a787d5f, 0x6276a0b5, 0x19a6fcdf, 0x7a42206a, 0x29f9d4d5, 0xf61b1891, 0xbb72275e, 0xaa508167, | ||
266 | 0x38901091, 0xc6b505eb, 0x84c7cb8c, 0x2ad75a0f, 0x874a1427, 0xa2d1936b, 0x2ad286af, 0xaa56d291, | ||
267 | 0xd7894360, 0x425c750d, 0x93b39e26, 0x187184c9, 0x6c00b32d, 0x73e2bb14, 0xa0bebc3c, 0x54623779, | ||
268 | 0x64459eab, 0x3f328b82, 0x7718cf82, 0x59a2cea6, 0x04ee002e, 0x89fe78e6, 0x3fab0950, 0x325ff6c2, | ||
269 | 0x81383f05, 0x6963c5c8, 0x76cb5ad6, 0xd49974c9, 0xca180dcf, 0x380782d5, 0xc7fa5cf6, 0x8ac31511, | ||
270 | 0x35e79e13, 0x47da91d0, 0xf40f9086, 0xa7e2419e, 0x31366241, 0x051ef495, 0xaa573b04, 0x4a805d8d, | ||
271 | 0x548300d0, 0x00322a3c, 0xbf64cddf, 0xba57a68e, 0x75c6372b, 0x50afd341, 0xa7c13275, 0x915a0bf5, | ||
272 | 0x6b54bfab, 0x2b0b1426, 0xab4cc9d7, 0x449ccd82, 0xf7fbf265, 0xab85c5f3, 0x1b55db94, 0xaad4e324, | ||
273 | 0xcfa4bd3f, 0x2deaa3e2, 0x9e204d02, 0xc8bd25ac, 0xeadf55b3, 0xd5bd9e98, 0xe31231b2, 0x2ad5ad6c, | ||
274 | 0x954329de, 0xadbe4528, 0xd8710f69, 0xaa51c90f, 0xaa786bf6, 0x22513f1e, 0xaa51a79b, 0x2ad344cc, | ||
275 | 0x7b5a41f0, 0xd37cfbad, 0x1b069505, 0x41ece491, 0xb4c332e6, 0x032268d4, 0xc9600acc, 0xce387e6d, | ||
276 | 0xbf6bb16c, 0x6a70fb78, 0x0d03d9c9, 0xd4df39de, 0xe01063da, 0x4736f464, 0x5ad328d8, 0xb347cc96, | ||
277 | 0x75bb0fc3, 0x98511bfb, 0x4ffbcc35, 0xb58bcf6a, 0xe11f0abc, 0xbfc5fe4a, 0xa70aec10, 0xac39570a, | ||
278 | 0x3f04442f, 0x6188b153, 0xe0397a2e, 0x5727cb79, 0x9ceb418f, 0x1cacd68d, 0x2ad37c96, 0x0175cb9d, | ||
279 | 0xc69dff09, 0xc75b65f0, 0xd9db40d8, 0xec0e7779, 0x4744ead4, 0xb11c3274, 0xdd24cb9e, 0x7e1c54bd, | ||
280 | 0xf01144f9, 0xd2240eb1, 0x9675b3fd, 0xa3ac3755, 0xd47c27af, 0x51c85f4d, 0x56907596, 0xa5bb15e6, | ||
281 | 0x580304f0, 0xca042cf1, 0x011a37ea, 0x8dbfaadb, 0x35ba3e4a, 0x3526ffa0, 0xc37b4d09, 0xbc306ed9, | ||
282 | 0x98a52666, 0x5648f725, 0xff5e569d, 0x0ced63d0, 0x7c63b2cf, 0x700b45e1, 0xd5ea50f1, 0x85a92872, | ||
283 | 0xaf1fbda7, 0xd4234870, 0xa7870bf3, 0x2d3b4d79, 0x42e04198, 0x0cd0ede7, 0x26470db8, 0xf881814c, | ||
284 | 0x474d6ad7, 0x7c0c5e5c, 0xd1231959, 0x381b7298, 0xf5d2f4db, 0xab838653, 0x6e2f1e23, 0x83719c9e, | ||
285 | 0xbd91e046, 0x9a56456e, 0xdc39200c, 0x20c8c571, 0x962bda1c, 0xe1e696ff, 0xb141ab08, 0x7cca89b9, | ||
286 | 0x1a69e783, 0x02cc4843, 0xa2f7c579, 0x429ef47d, 0x427b169c, 0x5ac9f049, 0xdd8f0f00, 0x5c8165bf, | ||
287 | }, | ||
288 | { | ||
289 | 0x1f201094, 0xef0ba75b, 0x69e3cf7e, 0x393f4380, 0xfe61cf7a, 0xeec5207a, 0x55889c94, 0x72fc0651, | ||
290 | 0xada7ef79, 0x4e1d7235, 0xd55a63ce, 0xde0436ba, 0x99c430ef, 0x5f0c0794, 0x18dcdb7d, 0xa1d6eff3, | ||
291 | 0xa0b52f7b, 0x59e83605, 0xee15b094, 0xe9ffd909, 0xdc440086, 0xef944459, 0xba83ccb3, 0xe0c3cdfb, | ||
292 | 0xd1da4181, 0x3b092ab1, 0xf997f1c1, 0xa5e6cf7b, 0x01420ddb, 0xe4e7ef5b, 0x25a1ff41, 0xe180f806, | ||
293 | 0x1fc41080, 0x179bee7a, 0xd37ac6a9, 0xfe5830a4, 0x98de8b7f, 0x77e83f4e, 0x79929269, 0x24fa9f7b, | ||
294 | 0xe113c85b, 0xacc40083, 0xd7503525, 0xf7ea615f, 0x62143154, 0x0d554b63, 0x5d681121, 0xc866c359, | ||
295 | 0x3d63cf73, 0xcee234c0, 0xd4d87e87, 0x5c672b21, 0x071f6181, 0x39f7627f, 0x361e3084, 0xe4eb573b, | ||
296 | 0x602f64a4, 0xd63acd9c, 0x1bbc4635, 0x9e81032d, 0x2701f50c, 0x99847ab4, 0xa0e3df79, 0xba6cf38c, | ||
297 | 0x10843094, 0x2537a95e, 0xf46f6ffe, 0xa1ff3b1f, 0x208cfb6a, 0x8f458c74, 0xd9e0a227, 0x4ec73a34, | ||
298 | 0xfc884f69, 0x3e4de8df, 0xef0e0088, 0x3559648d, 0x8a45388c, 0x1d804366, 0x721d9bfd, 0xa58684bb, | ||
299 | 0xe8256333, 0x844e8212, 0x128d8098, 0xfed33fb4, 0xce280ae1, 0x27e19ba5, 0xd5a6c252, 0xe49754bd, | ||
300 | 0xc5d655dd, 0xeb667064, 0x77840b4d, 0xa1b6a801, 0x84db26a9, 0xe0b56714, 0x21f043b7, 0xe5d05860, | ||
301 | 0x54f03084, 0x066ff472, 0xa31aa153, 0xdadc4755, 0xb5625dbf, 0x68561be6, 0x83ca6b94, 0x2d6ed23b, | ||
302 | 0xeccf01db, 0xa6d3d0ba, 0xb6803d5c, 0xaf77a709, 0x33b4a34c, 0x397bc8d6, 0x5ee22b95, 0x5f0e5304, | ||
303 | 0x81ed6f61, 0x20e74364, 0xb45e1378, 0xde18639b, 0x881ca122, 0xb96726d1, 0x8049a7e8, 0x22b7da7b, | ||
304 | 0x5e552d25, 0x5272d237, 0x79d2951c, 0xc60d894c, 0x488cb402, 0x1ba4fe5b, 0xa4b09f6b, 0x1ca815cf, | ||
305 | 0xa20c3005, 0x8871df63, 0xb9de2fcb, 0x0cc6c9e9, 0x0beeff53, 0xe3214517, 0xb4542835, 0x9f63293c, | ||
306 | 0xee41e729, 0x6e1d2d7c, 0x50045286, 0x1e6685f3, 0xf33401c6, 0x30a22c95, 0x31a70850, 0x60930f13, | ||
307 | 0x73f98417, 0xa1269859, 0xec645c44, 0x52c877a9, 0xcdff33a6, 0xa02b1741, 0x7cbad9a2, 0x2180036f, | ||
308 | 0x50d99c08, 0xcb3f4861, 0xc26bd765, 0x64a3f6ab, 0x80342676, 0x25a75e7b, 0xe4e6d1fc, 0x20c710e6, | ||
309 | 0xcdf0b680, 0x17844d3b, 0x31eef84d, 0x7e0824e4, 0x2ccb49eb, 0x846a3bae, 0x8ff77888, 0xee5d60f6, | ||
310 | 0x7af75673, 0x2fdd5cdb, 0xa11631c1, 0x30f66f43, 0xb3faec54, 0x157fd7fa, 0xef8579cc, 0xd152de58, | ||
311 | 0xdb2ffd5e, 0x8f32ce19, 0x306af97a, 0x02f03ef8, 0x99319ad5, 0xc242fa0f, 0xa7e3ebb0, 0xc68e4906, | ||
312 | 0xb8da230c, 0x80823028, 0xdcdef3c8, 0xd35fb171, 0x088a1bc8, 0xbec0c560, 0x61a3c9e8, 0xbca8f54d, | ||
313 | 0xc72feffa, 0x22822e99, 0x82c570b4, 0xd8d94e89, 0x8b1c34bc, 0x301e16e6, 0x273be979, 0xb0ffeaa6, | ||
314 | 0x61d9b8c6, 0x00b24869, 0xb7ffce3f, 0x08dc283b, 0x43daf65a, 0xf7e19798, 0x7619b72f, 0x8f1c9ba4, | ||
315 | 0xdc8637a0, 0x16a7d3b1, 0x9fc393b7, 0xa7136eeb, 0xc6bcc63e, 0x1a513742, 0xef6828bc, 0x520365d6, | ||
316 | 0x2d6a77ab, 0x3527ed4b, 0x821fd216, 0x095c6e2e, 0xdb92f2fb, 0x5eea29cb, 0x145892f5, 0x91584f7f, | ||
317 | 0x5483697b, 0x2667a8cc, 0x85196048, 0x8c4bacea, 0x833860d4, 0x0d23e0f9, 0x6c387e8a, 0x0ae6d249, | ||
318 | 0xb284600c, 0xd835731d, 0xdcb1c647, 0xac4c56ea, 0x3ebd81b3, 0x230eabb0, 0x6438bc87, 0xf0b5b1fa, | ||
319 | 0x8f5ea2b3, 0xfc184642, 0x0a036b7a, 0x4fb089bd, 0x649da589, 0xa345415e, 0x5c038323, 0x3e5d3bb9, | ||
320 | 0x43d79572, 0x7e6dd07c, 0x06dfdf1e, 0x6c6cc4ef, 0x7160a539, 0x73bfbe70, 0x83877605, 0x4523ecf1, | ||
321 | }, | ||
322 | { | ||
323 | 0x8defc240, 0x25fa5d9f, 0xeb903dbf, 0xe810c907, 0x47607fff, 0x369fe44b, 0x8c1fc644, 0xaececa90, | ||
324 | 0xbeb1f9bf, 0xeefbcaea, 0xe8cf1950, 0x51df07ae, 0x920e8806, 0xf0ad0548, 0xe13c8d83, 0x927010d5, | ||
325 | 0x11107d9f, 0x07647db9, 0xb2e3e4d4, 0x3d4f285e, 0xb9afa820, 0xfade82e0, 0xa067268b, 0x8272792e, | ||
326 | 0x553fb2c0, 0x489ae22b, 0xd4ef9794, 0x125e3fbc, 0x21fffcee, 0x825b1bfd, 0x9255c5ed, 0x1257a240, | ||
327 | 0x4e1a8302, 0xbae07fff, 0x528246e7, 0x8e57140e, 0x3373f7bf, 0x8c9f8188, 0xa6fc4ee8, 0xc982b5a5, | ||
328 | 0xa8c01db7, 0x579fc264, 0x67094f31, 0xf2bd3f5f, 0x40fff7c1, 0x1fb78dfc, 0x8e6bd2c1, 0x437be59b, | ||
329 | 0x99b03dbf, 0xb5dbc64b, 0x638dc0e6, 0x55819d99, 0xa197c81c, 0x4a012d6e, 0xc5884a28, 0xccc36f71, | ||
330 | 0xb843c213, 0x6c0743f1, 0x8309893c, 0x0feddd5f, 0x2f7fe850, 0xd7c07f7e, 0x02507fbf, 0x5afb9a04, | ||
331 | 0xa747d2d0, 0x1651192e, 0xaf70bf3e, 0x58c31380, 0x5f98302e, 0x727cc3c4, 0x0a0fb402, 0x0f7fef82, | ||
332 | 0x8c96fdad, 0x5d2c2aae, 0x8ee99a49, 0x50da88b8, 0x8427f4a0, 0x1eac5790, 0x796fb449, 0x8252dc15, | ||
333 | 0xefbd7d9b, 0xa672597d, 0xada840d8, 0x45f54504, 0xfa5d7403, 0xe83ec305, 0x4f91751a, 0x925669c2, | ||
334 | 0x23efe941, 0xa903f12e, 0x60270df2, 0x0276e4b6, 0x94fd6574, 0x927985b2, 0x8276dbcb, 0x02778176, | ||
335 | 0xf8af918d, 0x4e48f79e, 0x8f616ddf, 0xe29d840e, 0x842f7d83, 0x340ce5c8, 0x96bbb682, 0x93b4b148, | ||
336 | 0xef303cab, 0x984faf28, 0x779faf9b, 0x92dc560d, 0x224d1e20, 0x8437aa88, 0x7d29dc96, 0x2756d3dc, | ||
337 | 0x8b907cee, 0xb51fd240, 0xe7c07ce3, 0xe566b4a1, 0xc3e9615e, 0x3cf8209d, 0x6094d1e3, 0xcd9ca341, | ||
338 | 0x5c76460e, 0x00ea983b, 0xd4d67881, 0xfd47572c, 0xf76cedd9, 0xbda8229c, 0x127dadaa, 0x438a074e, | ||
339 | 0x1f97c090, 0x081bdb8a, 0x93a07ebe, 0xb938ca15, 0x97b03cff, 0x3dc2c0f8, 0x8d1ab2ec, 0x64380e51, | ||
340 | 0x68cc7bfb, 0xd90f2788, 0x12490181, 0x5de5ffd4, 0xdd7ef86a, 0x76a2e214, 0xb9a40368, 0x925d958f, | ||
341 | 0x4b39fffa, 0xba39aee9, 0xa4ffd30b, 0xfaf7933b, 0x6d498623, 0x193cbcfa, 0x27627545, 0x825cf47a, | ||
342 | 0x61bd8ba0, 0xd11e42d1, 0xcead04f4, 0x127ea392, 0x10428db7, 0x8272a972, 0x9270c4a8, 0x127de50b, | ||
343 | 0x285ba1c8, 0x3c62f44f, 0x35c0eaa5, 0xe805d231, 0x428929fb, 0xb4fcdf82, 0x4fb66a53, 0x0e7dc15b, | ||
344 | 0x1f081fab, 0x108618ae, 0xfcfd086d, 0xf9ff2889, 0x694bcc11, 0x236a5cae, 0x12deca4d, 0x2c3f8cc5, | ||
345 | 0xd2d02dfe, 0xf8ef5896, 0xe4cf52da, 0x95155b67, 0x494a488c, 0xb9b6a80c, 0x5c8f82bc, 0x89d36b45, | ||
346 | 0x3a609437, 0xec00c9a9, 0x44715253, 0x0a874b49, 0xd773bc40, 0x7c34671c, 0x02717ef6, 0x4feb5536, | ||
347 | 0xa2d02fff, 0xd2bf60c4, 0xd43f03c0, 0x50b4ef6d, 0x07478cd1, 0x006e1888, 0xa2e53f55, 0xb9e6d4bc, | ||
348 | 0xa2048016, 0x97573833, 0xd7207d67, 0xde0f8f3d, 0x72f87b33, 0xabcc4f33, 0x7688c55d, 0x7b00a6b0, | ||
349 | 0x947b0001, 0x570075d2, 0xf9bb88f8, 0x8942019e, 0x4264a5ff, 0x856302e0, 0x72dbd92b, 0xee971b69, | ||
350 | 0x6ea22fde, 0x5f08ae2b, 0xaf7a616d, 0xe5c98767, 0xcf1febd2, 0x61efc8c2, 0xf1ac2571, 0xcc8239c2, | ||
351 | 0x67214cb8, 0xb1e583d1, 0xb7dc3e62, 0x7f10bdce, 0xf90a5c38, 0x0ff0443d, 0x606e6dc6, 0x60543a49, | ||
352 | 0x5727c148, 0x2be98a1d, 0x8ab41738, 0x20e1be24, 0xaf96da0f, 0x68458425, 0x99833be5, 0x600d457d, | ||
353 | 0x282f9350, 0x8334b362, 0xd91d1120, 0x2b6d8da0, 0x642b1e31, 0x9c305a00, 0x52bce688, 0x1b03588a, | ||
354 | 0xf7baefd5, 0x4142ed9c, 0xa4315c11, 0x83323ec5, 0xdfef4636, 0xa133c501, 0xe9d3531c, 0xee353783, | ||
355 | }, | ||
356 | { | ||
357 | 0x9db30420, 0x1fb6e9de, 0xa7be7bef, 0xd273a298, 0x4a4f7bdb, 0x64ad8c57, 0x85510443, 0xfa020ed1, | ||
358 | 0x7e287aff, 0xe60fb663, 0x095f35a1, 0x79ebf120, 0xfd059d43, 0x6497b7b1, 0xf3641f63, 0x241e4adf, | ||
359 | 0x28147f5f, 0x4fa2b8cd, 0xc9430040, 0x0cc32220, 0xfdd30b30, 0xc0a5374f, 0x1d2d00d9, 0x24147b15, | ||
360 | 0xee4d111a, 0x0fca5167, 0x71ff904c, 0x2d195ffe, 0x1a05645f, 0x0c13fefe, 0x081b08ca, 0x05170121, | ||
361 | 0x80530100, 0xe83e5efe, 0xac9af4f8, 0x7fe72701, 0xd2b8ee5f, 0x06df4261, 0xbb9e9b8a, 0x7293ea25, | ||
362 | 0xce84ffdf, 0xf5718801, 0x3dd64b04, 0xa26f263b, 0x7ed48400, 0x547eebe6, 0x446d4ca0, 0x6cf3d6f5, | ||
363 | 0x2649abdf, 0xaea0c7f5, 0x36338cc1, 0x503f7e93, 0xd3772061, 0x11b638e1, 0x72500e03, 0xf80eb2bb, | ||
364 | 0xabe0502e, 0xec8d77de, 0x57971e81, 0xe14f6746, 0xc9335400, 0x6920318f, 0x081dbb99, 0xffc304a5, | ||
365 | 0x4d351805, 0x7f3d5ce3, 0xa6c866c6, 0x5d5bcca9, 0xdaec6fea, 0x9f926f91, 0x9f46222f, 0x3991467d, | ||
366 | 0xa5bf6d8e, 0x1143c44f, 0x43958302, 0xd0214eeb, 0x022083b8, 0x3fb6180c, 0x18f8931e, 0x281658e6, | ||
367 | 0x26486e3e, 0x8bd78a70, 0x7477e4c1, 0xb506e07c, 0xf32d0a25, 0x79098b02, 0xe4eabb81, 0x28123b23, | ||
368 | 0x69dead38, 0x1574ca16, 0xdf871b62, 0x211c40b7, 0xa51a9ef9, 0x0014377b, 0x041e8ac8, 0x09114003, | ||
369 | 0xbd59e4d2, 0xe3d156d5, 0x4fe876d5, 0x2f91a340, 0x557be8de, 0x00eae4a7, 0x0ce5c2ec, 0x4db4bba6, | ||
370 | 0xe756bdff, 0xdd3369ac, 0xec17b035, 0x06572327, 0x99afc8b0, 0x56c8c391, 0x6b65811c, 0x5e146119, | ||
371 | 0x6e85cb75, 0xbe07c002, 0xc2325577, 0x893ff4ec, 0x5bbfc92d, 0xd0ec3b25, 0xb7801ab7, 0x8d6d3b24, | ||
372 | 0x20c763ef, 0xc366a5fc, 0x9c382880, 0x0ace3205, 0xaac9548a, 0xeca1d7c7, 0x041afa32, 0x1d16625a, | ||
373 | 0x6701902c, 0x9b757a54, 0x31d477f7, 0x9126b031, 0x36cc6fdb, 0xc70b8b46, 0xd9e66a48, 0x56e55a79, | ||
374 | 0x026a4ceb, 0x52437eff, 0x2f8f76b4, 0x0df980a5, 0x8674cde3, 0xedda04eb, 0x17a9be04, 0x2c18f4df, | ||
375 | 0xb7747f9d, 0xab2af7b4, 0xefc34d20, 0x2e096b7c, 0x1741a254, 0xe5b6a035, 0x213d42f6, 0x2c1c7c26, | ||
376 | 0x61c2f50f, 0x6552daf9, 0xd2c231f8, 0x25130f69, 0xd8167fa2, 0x0418f2c8, 0x001a96a6, 0x0d1526ab, | ||
377 | 0x63315c21, 0x5e0a72ec, 0x49bafefd, 0x187908d9, 0x8d0dbd86, 0x311170a7, 0x3e9b640c, 0xcc3e10d7, | ||
378 | 0xd5cad3b6, 0x0caec388, 0xf73001e1, 0x6c728aff, 0x71eae2a1, 0x1f9af36e, 0xcfcbd12f, 0xc1de8417, | ||
379 | 0xac07be6b, 0xcb44a1d8, 0x8b9b0f56, 0x013988c3, 0xb1c52fca, 0xb4be31cd, 0xd8782806, 0x12a3a4e2, | ||
380 | 0x6f7de532, 0x58fd7eb6, 0xd01ee900, 0x24adffc2, 0xf4990fc5, 0x9711aac5, 0x001d7b95, 0x82e5e7d2, | ||
381 | 0x109873f6, 0x00613096, 0xc32d9521, 0xada121ff, 0x29908415, 0x7fbb977f, 0xaf9eb3db, 0x29c9ed2a, | ||
382 | 0x5ce2a465, 0xa730f32c, 0xd0aa3fe8, 0x8a5cc091, 0xd49e2ce7, 0x0ce454a9, 0xd60acd86, 0x015f1919, | ||
383 | 0x77079103, 0xdea03af6, 0x78a8565e, 0xdee356df, 0x21f05cbe, 0x8b75e387, 0xb3c50651, 0xb8a5c3ef, | ||
384 | 0xd8eeb6d2, 0xe523be77, 0xc2154529, 0x2f69efdf, 0xafe67afb, 0xf470c4b2, 0xf3e0eb5b, 0xd6cc9876, | ||
385 | 0x39e4460c, 0x1fda8538, 0x1987832f, 0xca007367, 0xa99144f8, 0x296b299e, 0x492fc295, 0x9266beab, | ||
386 | 0xb5676e69, 0x9bd3ddda, 0xdf7e052f, 0xdb25701c, 0x1b5e51ee, 0xf65324e6, 0x6afce36c, 0x0316cc04, | ||
387 | 0x8644213e, 0xb7dc59d0, 0x7965291f, 0xccd6fd43, 0x41823979, 0x932bcdf6, 0xb657c34d, 0x4edfd282, | ||
388 | 0x7ae5290c, 0x3cb9536b, 0x851e20fe, 0x9833557e, 0x13ecf0b0, 0xd3ffb372, 0x3f85c5c1, 0x0aef7ed2, | ||
389 | }, | ||
390 | { | ||
391 | 0x7ec90c04, 0x2c6e74b9, 0x9b0e66df, 0xa6337911, 0xb86a7fff, 0x1dd358f5, 0x44dd9d44, 0x1731167f, | ||
392 | 0x08fbf1fa, 0xe7f511cc, 0xd2051b00, 0x735aba00, 0x2ab722d8, 0x386381cb, 0xacf6243a, 0x69befd7a, | ||
393 | 0xe6a2e77f, 0xf0c720cd, 0xc4494816, 0xccf5c180, 0x38851640, 0x15b0a848, 0xe68b18cb, 0x4caadeff, | ||
394 | 0x5f480a01, 0x0412b2aa, 0x259814fc, 0x41d0efe2, 0x4e40b48d, 0x248eb6fb, 0x8dba1cfe, 0x41a99b02, | ||
395 | 0x1a550a04, 0xba8f65cb, 0x7251f4e7, 0x95a51725, 0xc106ecd7, 0x97a5980a, 0xc539b9aa, 0x4d79fe6a, | ||
396 | 0xf2f3f763, 0x68af8040, 0xed0c9e56, 0x11b4958b, 0xe1eb5a88, 0x8709e6b0, 0xd7e07156, 0x4e29fea7, | ||
397 | 0x6366e52d, 0x02d1c000, 0xc4ac8e05, 0x9377f571, 0x0c05372a, 0x578535f2, 0x2261be02, 0xd642a0c9, | ||
398 | 0xdf13a280, 0x74b55bd2, 0x682199c0, 0xd421e5ec, 0x53fb3ce8, 0xc8adedb3, 0x28a87fc9, 0x3d959981, | ||
399 | 0x5c1ff900, 0xfe38d399, 0x0c4eff0b, 0x062407ea, 0xaa2f4fb1, 0x4fb96976, 0x90c79505, 0xb0a8a774, | ||
400 | 0xef55a1ff, 0xe59ca2c2, 0xa6b62d27, 0xe66a4263, 0xdf65001f, 0x0ec50966, 0xdfdd55bc, 0x29de0655, | ||
401 | 0x911e739a, 0x17af8975, 0x32c7911c, 0x89f89468, 0x0d01e980, 0x524755f4, 0x03b63cc9, 0x0cc844b2, | ||
402 | 0xbcf3f0aa, 0x87ac36e9, 0xe53a7426, 0x01b3d82b, 0x1a9e7449, 0x64ee2d7e, 0xcddbb1da, 0x01c94910, | ||
403 | 0xb868bf80, 0x0d26f3fd, 0x9342ede7, 0x04a5c284, 0x636737b6, 0x50f5b616, 0xf24766e3, 0x8eca36c1, | ||
404 | 0x136e05db, 0xfef18391, 0xfb887a37, 0xd6e7f7d4, 0xc7fb7dc9, 0x3063fcdf, 0xb6f589de, 0xec2941da, | ||
405 | 0x26e46695, 0xb7566419, 0xf654efc5, 0xd08d58b7, 0x48925401, 0xc1bacb7f, 0xe5ff550f, 0xb6083049, | ||
406 | 0x5bb5d0e8, 0x87d72e5a, 0xab6a6ee1, 0x223a66ce, 0xc62bf3cd, 0x9e0885f9, 0x68cb3e47, 0x086c010f, | ||
407 | 0xa21de820, 0xd18b69de, 0xf3f65777, 0xfa02c3f6, 0x407edac3, 0xcbb3d550, 0x1793084d, 0xb0d70eba, | ||
408 | 0x0ab378d5, 0xd951fb0c, 0xded7da56, 0x4124bbe4, 0x94ca0b56, 0x0f5755d1, 0xe0e1e56e, 0x6184b5be, | ||
409 | 0x580a249f, 0x94f74bc0, 0xe327888e, 0x9f7b5561, 0xc3dc0280, 0x05687715, 0x646c6bd7, 0x44904db3, | ||
410 | 0x66b4f0a3, 0xc0f1648a, 0x697ed5af, 0x49e92ff6, 0x309e374f, 0x2cb6356a, 0x85808573, 0x4991f840, | ||
411 | 0x76f0ae02, 0x083be84d, 0x28421c9a, 0x44489406, 0x736e4cb8, 0xc1092910, 0x8bc95fc6, 0x7d869cf4, | ||
412 | 0x134f616f, 0x2e77118d, 0xb31b2be1, 0xaa90b472, 0x3ca5d717, 0x7d161bba, 0x9cad9010, 0xaf462ba2, | ||
413 | 0x9fe459d2, 0x45d34559, 0xd9f2da13, 0xdbc65487, 0xf3e4f94e, 0x176d486f, 0x097c13ea, 0x631da5c7, | ||
414 | 0x445f7382, 0x175683f4, 0xcdc66a97, 0x70be0288, 0xb3cdcf72, 0x6e5dd2f3, 0x20936079, 0x459b80a5, | ||
415 | 0xbe60e2db, 0xa9c23101, 0xeba5315c, 0x224e42f2, 0x1c5c1572, 0xf6721b2c, 0x1ad2fff3, 0x8c25404e, | ||
416 | 0x324ed72f, 0x4067b7fd, 0x0523138e, 0x5ca3bc78, 0xdc0fd66e, 0x75922283, 0x784d6b17, 0x58ebb16e, | ||
417 | 0x44094f85, 0x3f481d87, 0xfcfeae7b, 0x77b5ff76, 0x8c2302bf, 0xaaf47556, 0x5f46b02a, 0x2b092801, | ||
418 | 0x3d38f5f7, 0x0ca81f36, 0x52af4a8a, 0x66d5e7c0, 0xdf3b0874, 0x95055110, 0x1b5ad7a8, 0xf61ed5ad, | ||
419 | 0x6cf6e479, 0x20758184, 0xd0cefa65, 0x88f7be58, 0x4a046826, 0x0ff6f8f3, 0xa09c7f70, 0x5346aba0, | ||
420 | 0x5ce96c28, 0xe176eda3, 0x6bac307f, 0x376829d2, 0x85360fa9, 0x17e3fe2a, 0x24b79767, 0xf5a96b20, | ||
421 | 0xd6cd2595, 0x68ff1ebf, 0x7555442c, 0xf19f06be, 0xf9e0659a, 0xeeb9491d, 0x34010718, 0xbb30cab8, | ||
422 | 0xe822fe15, 0x88570983, 0x750e6249, 0xda627e55, 0x5e76ffa8, 0xb1534546, 0x6d47de08, 0xefe9e7d4, | ||
423 | }, | ||
424 | { | ||
425 | 0xf6fa8f9d, 0x2cac6ce1, 0x4ca34867, 0xe2337f7c, 0x95db08e7, 0x016843b4, 0xeced5cbc, 0x325553ac, | ||
426 | 0xbf9f0960, 0xdfa1e2ed, 0x83f0579d, 0x63ed86b9, 0x1ab6a6b8, 0xde5ebe39, 0xf38ff732, 0x8989b138, | ||
427 | 0x33f14961, 0xc01937bd, 0xf506c6da, 0xe4625e7e, 0xa308ea99, 0x4e23e33c, 0x79cbd7cc, 0x48a14367, | ||
428 | 0xa3149619, 0xfec94bd5, 0xa114174a, 0xeaa01866, 0xa084db2d, 0x09a8486f, 0xa888614a, 0x2900af98, | ||
429 | 0x01665991, 0xe1992863, 0xc8f30c60, 0x2e78ef3c, 0xd0d51932, 0xcf0fec14, 0xf7ca07d2, 0xd0a82072, | ||
430 | 0xfd41197e, 0x9305a6b0, 0xe86be3da, 0x74bed3cd, 0x372da53c, 0x4c7f4448, 0xdab5d440, 0x6dba0ec3, | ||
431 | 0x083919a7, 0x9fbaeed9, 0x49dbcfb0, 0x4e670c53, 0x5c3d9c01, 0x64bdb941, 0x2c0e636a, 0xba7dd9cd, | ||
432 | 0xea6f7388, 0xe70bc762, 0x35f29adb, 0x5c4cdd8d, 0xf0d48d8c, 0xb88153e2, 0x08a19866, 0x1ae2eac8, | ||
433 | 0x284caf89, 0xaa928223, 0x9334be53, 0x3b3a21bf, 0x16434be3, 0x9aea3906, 0xefe8c36e, 0xf890cdd9, | ||
434 | 0x80226dae, 0xc340a4a3, 0xdf7e9c09, 0xa694a807, 0x5b7c5ecc, 0x221db3a6, 0x9a69a02f, 0x68818a54, | ||
435 | 0xceb2296f, 0x53c0843a, 0xfe893655, 0x25bfe68a, 0xb4628abc, 0xcf222ebf, 0x25ac6f48, 0xa9a99387, | ||
436 | 0x53bddb65, 0xe76ffbe7, 0xe967fd78, 0x0ba93563, 0x8e342bc1, 0xe8a11be9, 0x4980740d, 0xc8087dfc, | ||
437 | 0x8de4bf99, 0xa11101a0, 0x7fd37975, 0xda5a26c0, 0xe81f994f, 0x9528cd89, 0xfd339fed, 0xb87834bf, | ||
438 | 0x5f04456d, 0x22258698, 0xc9c4c83b, 0x2dc156be, 0x4f628daa, 0x57f55ec5, 0xe2220abe, 0xd2916ebf, | ||
439 | 0x4ec75b95, 0x24f2c3c0, 0x42d15d99, 0xcd0d7fa0, 0x7b6e27ff, 0xa8dc8af0, 0x7345c106, 0xf41e232f, | ||
440 | 0x35162386, 0xe6ea8926, 0x3333b094, 0x157ec6f2, 0x372b74af, 0x692573e4, 0xe9a9d848, 0xf3160289, | ||
441 | 0x3a62ef1d, 0xa787e238, 0xf3a5f676, 0x74364853, 0x20951063, 0x4576698d, 0xb6fad407, 0x592af950, | ||
442 | 0x36f73523, 0x4cfb6e87, 0x7da4cec0, 0x6c152daa, 0xcb0396a8, 0xc50dfe5d, 0xfcd707ab, 0x0921c42f, | ||
443 | 0x89dff0bb, 0x5fe2be78, 0x448f4f33, 0x754613c9, 0x2b05d08d, 0x48b9d585, 0xdc049441, 0xc8098f9b, | ||
444 | 0x7dede786, 0xc39a3373, 0x42410005, 0x6a091751, 0x0ef3c8a6, 0x890072d6, 0x28207682, 0xa9a9f7be, | ||
445 | 0xbf32679d, 0xd45b5b75, 0xb353fd00, 0xcbb0e358, 0x830f220a, 0x1f8fb214, 0xd372cf08, 0xcc3c4a13, | ||
446 | 0x8cf63166, 0x061c87be, 0x88c98f88, 0x6062e397, 0x47cf8e7a, 0xb6c85283, 0x3cc2acfb, 0x3fc06976, | ||
447 | 0x4e8f0252, 0x64d8314d, 0xda3870e3, 0x1e665459, 0xc10908f0, 0x513021a5, 0x6c5b68b7, 0x822f8aa0, | ||
448 | 0x3007cd3e, 0x74719eef, 0xdc872681, 0x073340d4, 0x7e432fd9, 0x0c5ec241, 0x8809286c, 0xf592d891, | ||
449 | 0x08a930f6, 0x957ef305, 0xb7fbffbd, 0xc266e96f, 0x6fe4ac98, 0xb173ecc0, 0xbc60b42a, 0x953498da, | ||
450 | 0xfba1ae12, 0x2d4bd736, 0x0f25faab, 0xa4f3fceb, 0xe2969123, 0x257f0c3d, 0x9348af49, 0x361400bc, | ||
451 | 0xe8816f4a, 0x3814f200, 0xa3f94043, 0x9c7a54c2, 0xbc704f57, 0xda41e7f9, 0xc25ad33a, 0x54f4a084, | ||
452 | 0xb17f5505, 0x59357cbe, 0xedbd15c8, 0x7f97c5ab, 0xba5ac7b5, 0xb6f6deaf, 0x3a479c3a, 0x5302da25, | ||
453 | 0x653d7e6a, 0x54268d49, 0x51a477ea, 0x5017d55b, 0xd7d25d88, 0x44136c76, 0x0404a8c8, 0xb8e5a121, | ||
454 | 0xb81a928a, 0x60ed5869, 0x97c55b96, 0xeaec991b, 0x29935913, 0x01fdb7f1, 0x088e8dfa, 0x9ab6f6f5, | ||
455 | 0x3b4cbf9f, 0x4a5de3ab, 0xe6051d35, 0xa0e1d855, 0xd36b4cf1, 0xf544edeb, 0xb0e93524, 0xbebb8fbd, | ||
456 | 0xa2d762cf, 0x49c92f54, 0x38b5f331, 0x7128a454, 0x48392905, 0xa65b1db8, 0x851c97bd, 0xd675cf2f, | ||
457 | }, | ||
458 | { | ||
459 | 0x85e04019, 0x332bf567, 0x662dbfff, 0xcfc65693, 0x2a8d7f6f, 0xab9bc912, 0xde6008a1, 0x2028da1f, | ||
460 | 0x0227bce7, 0x4d642916, 0x18fac300, 0x50f18b82, 0x2cb2cb11, 0xb232e75c, 0x4b3695f2, 0xb28707de, | ||
461 | 0xa05fbcf6, 0xcd4181e9, 0xe150210c, 0xe24ef1bd, 0xb168c381, 0xfde4e789, 0x5c79b0d8, 0x1e8bfd43, | ||
462 | 0x4d495001, 0x38be4341, 0x913cee1d, 0x92a79c3f, 0x089766be, 0xbaeeadf4, 0x1286becf, 0xb6eacb19, | ||
463 | 0x2660c200, 0x7565bde4, 0x64241f7a, 0x8248dca9, 0xc3b3ad66, 0x28136086, 0x0bd8dfa8, 0x356d1cf2, | ||
464 | 0x107789be, 0xb3b2e9ce, 0x0502aa8f, 0x0bc0351e, 0x166bf52a, 0xeb12ff82, 0xe3486911, 0xd34d7516, | ||
465 | 0x4e7b3aff, 0x5f43671b, 0x9cf6e037, 0x4981ac83, 0x334266ce, 0x8c9341b7, 0xd0d854c0, 0xcb3a6c88, | ||
466 | 0x47bc2829, 0x4725ba37, 0xa66ad22b, 0x7ad61f1e, 0x0c5cbafa, 0x4437f107, 0xb6e79962, 0x42d2d816, | ||
467 | 0x0a961288, 0xe1a5c06e, 0x13749e67, 0x72fc081a, 0xb1d139f7, 0xf9583745, 0xcf19df58, 0xbec3f756, | ||
468 | 0xc06eba30, 0x07211b24, 0x45c28829, 0xc95e317f, 0xbc8ec511, 0x38bc46e9, 0xc6e6fa14, 0xbae8584a, | ||
469 | 0xad4ebc46, 0x468f508b, 0x7829435f, 0xf124183b, 0x821dba9f, 0xaff60ff4, 0xea2c4e6d, 0x16e39264, | ||
470 | 0x92544a8b, 0x009b4fc3, 0xaba68ced, 0x9ac96f78, 0x06a5b79a, 0xb2856e6e, 0x1aec3ca9, 0xbe838688, | ||
471 | 0x0e0804e9, 0x55f1be56, 0xe7e5363b, 0xb3a1f25d, 0xf7debb85, 0x61fe033c, 0x16746233, 0x3c034c28, | ||
472 | 0xda6d0c74, 0x79aac56c, 0x3ce4e1ad, 0x51f0c802, 0x98f8f35a, 0x1626a49f, 0xeed82b29, 0x1d382fe3, | ||
473 | 0x0c4fb99a, 0xbb325778, 0x3ec6d97b, 0x6e77a6a9, 0xcb658b5c, 0xd45230c7, 0x2bd1408b, 0x60c03eb7, | ||
474 | 0xb9068d78, 0xa33754f4, 0xf430c87d, 0xc8a71302, 0xb96d8c32, 0xebd4e7be, 0xbe8b9d2d, 0x7979fb06, | ||
475 | 0xe7225308, 0x8b75cf77, 0x11ef8da4, 0xe083c858, 0x8d6b786f, 0x5a6317a6, 0xfa5cf7a0, 0x5dda0033, | ||
476 | 0xf28ebfb0, 0xf5b9c310, 0xa0eac280, 0x08b9767a, 0xa3d9d2b0, 0x79d34217, 0x021a718d, 0x9ac6336a, | ||
477 | 0x2711fd60, 0x438050e3, 0x069908a8, 0x3d7fedc4, 0x826d2bef, 0x4eeb8476, 0x488dcf25, 0x36c9d566, | ||
478 | 0x28e74e41, 0xc2610aca, 0x3d49a9cf, 0xbae3b9df, 0xb65f8de6, 0x92aeaf64, 0x3ac7d5e6, 0x9ea80509, | ||
479 | 0xf22b017d, 0xa4173f70, 0xdd1e16c3, 0x15e0d7f9, 0x50b1b887, 0x2b9f4fd5, 0x625aba82, 0x6a017962, | ||
480 | 0x2ec01b9c, 0x15488aa9, 0xd716e740, 0x40055a2c, 0x93d29a22, 0xe32dbf9a, 0x058745b9, 0x3453dc1e, | ||
481 | 0xd699296e, 0x496cff6f, 0x1c9f4986, 0xdfe2ed07, 0xb87242d1, 0x19de7eae, 0x053e561a, 0x15ad6f8c, | ||
482 | 0x66626c1c, 0x7154c24c, 0xea082b2a, 0x93eb2939, 0x17dcb0f0, 0x58d4f2ae, 0x9ea294fb, 0x52cf564c, | ||
483 | 0x9883fe66, 0x2ec40581, 0x763953c3, 0x01d6692e, 0xd3a0c108, 0xa1e7160e, 0xe4f2dfa6, 0x693ed285, | ||
484 | 0x74904698, 0x4c2b0edd, 0x4f757656, 0x5d393378, 0xa132234f, 0x3d321c5d, 0xc3f5e194, 0x4b269301, | ||
485 | 0xc79f022f, 0x3c997e7e, 0x5e4f9504, 0x3ffafbbd, 0x76f7ad0e, 0x296693f4, 0x3d1fce6f, 0xc61e45be, | ||
486 | 0xd3b5ab34, 0xf72bf9b7, 0x1b0434c0, 0x4e72b567, 0x5592a33d, 0xb5229301, 0xcfd2a87f, 0x60aeb767, | ||
487 | 0x1814386b, 0x30bcc33d, 0x38a0c07d, 0xfd1606f2, 0xc363519b, 0x589dd390, 0x5479f8e6, 0x1cb8d647, | ||
488 | 0x97fd61a9, 0xea7759f4, 0x2d57539d, 0x569a58cf, 0xe84e63ad, 0x462e1b78, 0x6580f87e, 0xf3817914, | ||
489 | 0x91da55f4, 0x40a230f3, 0xd1988f35, 0xb6e318d2, 0x3ffa50bc, 0x3d40f021, 0xc3c0bdae, 0x4958c24c, | ||
490 | 0x518f36b2, 0x84b1d370, 0x0fedce83, 0x878ddada, 0xf2a279c7, 0x94e01be8, 0x90716f4b, 0x954b8aa3, | ||
491 | }, | ||
492 | { | ||
493 | 0xe216300d, 0xbbddfffc, 0xa7ebdabd, 0x35648095, 0x7789f8b7, 0xe6c1121b, 0x0e241600, 0x052ce8b5, | ||
494 | 0x11a9cfb0, 0xe5952f11, 0xece7990a, 0x9386d174, 0x2a42931c, 0x76e38111, 0xb12def3a, 0x37ddddfc, | ||
495 | 0xde9adeb1, 0x0a0cc32c, 0xbe197029, 0x84a00940, 0xbb243a0f, 0xb4d137cf, 0xb44e79f0, 0x049eedfd, | ||
496 | 0x0b15a15d, 0x480d3168, 0x8bbbde5a, 0x669ded42, 0xc7ece831, 0x3f8f95e7, 0x72df191b, 0x7580330d, | ||
497 | 0x94074251, 0x5c7dcdfa, 0xabbe6d63, 0xaa402164, 0xb301d40a, 0x02e7d1ca, 0x53571dae, 0x7a3182a2, | ||
498 | 0x12a8ddec, 0xfdaa335d, 0x176f43e8, 0x71fb46d4, 0x38129022, 0xce949ad4, 0xb84769ad, 0x965bd862, | ||
499 | 0x82f3d055, 0x66fb9767, 0x15b80b4e, 0x1d5b47a0, 0x4cfde06f, 0xc28ec4b8, 0x57e8726e, 0x647a78fc, | ||
500 | 0x99865d44, 0x608bd593, 0x6c200e03, 0x39dc5ff6, 0x5d0b00a3, 0xae63aff2, 0x7e8bd632, 0x70108c0c, | ||
501 | 0xbbd35049, 0x2998df04, 0x980cf42a, 0x9b6df491, 0x9e7edd53, 0x06918548, 0x58cb7e07, 0x3b74ef2e, | ||
502 | 0x522fffb1, 0xd24708cc, 0x1c7e27cd, 0xa4eb215b, 0x3cf1d2e2, 0x19b47a38, 0x424f7618, 0x35856039, | ||
503 | 0x9d17dee7, 0x27eb35e6, 0xc9aff67b, 0x36baf5b8, 0x09c467cd, 0xc18910b1, 0xe11dbf7b, 0x06cd1af8, | ||
504 | 0x7170c608, 0x2d5e3354, 0xd4de495a, 0x64c6d006, 0xbcc0c62c, 0x3dd00db3, 0x708f8f34, 0x77d51b42, | ||
505 | 0x264f620f, 0x24b8d2bf, 0x15c1b79e, 0x46a52564, 0xf8d7e54e, 0x3e378160, 0x7895cda5, 0x859c15a5, | ||
506 | 0xe6459788, 0xc37bc75f, 0xdb07ba0c, 0x0676a3ab, 0x7f229b1e, 0x31842e7b, 0x24259fd7, 0xf8bef472, | ||
507 | 0x835ffcb8, 0x6df4c1f2, 0x96f5b195, 0xfd0af0fc, 0xb0fe134c, 0xe2506d3d, 0x4f9b12ea, 0xf215f225, | ||
508 | 0xa223736f, 0x9fb4c428, 0x25d04979, 0x34c713f8, 0xc4618187, 0xea7a6e98, 0x7cd16efc, 0x1436876c, | ||
509 | 0xf1544107, 0xbedeee14, 0x56e9af27, 0xa04aa441, 0x3cf7c899, 0x92ecbae6, 0xdd67016d, 0x151682eb, | ||
510 | 0xa842eedf, 0xfdba60b4, 0xf1907b75, 0x20e3030f, 0x24d8c29e, 0xe139673b, 0xefa63fb8, 0x71873054, | ||
511 | 0xb6f2cf3b, 0x9f326442, 0xcb15a4cc, 0xb01a4504, 0xf1e47d8d, 0x844a1be5, 0xbae7dfdc, 0x42cbda70, | ||
512 | 0xcd7dae0a, 0x57e85b7a, 0xd53f5af6, 0x20cf4d8c, 0xcea4d428, 0x79d130a4, 0x3486ebfb, 0x33d3cddc, | ||
513 | 0x77853b53, 0x37effcb5, 0xc5068778, 0xe580b3e6, 0x4e68b8f4, 0xc5c8b37e, 0x0d809ea2, 0x398feb7c, | ||
514 | 0x132a4f94, 0x43b7950e, 0x2fee7d1c, 0x223613bd, 0xdd06caa2, 0x37df932b, 0xc4248289, 0xacf3ebc3, | ||
515 | 0x5715f6b7, 0xef3478dd, 0xf267616f, 0xc148cbe4, 0x9052815e, 0x5e410fab, 0xb48a2465, 0x2eda7fa4, | ||
516 | 0xe87b40e4, 0xe98ea084, 0x5889e9e1, 0xefd390fc, 0xdd07d35b, 0xdb485694, 0x38d7e5b2, 0x57720101, | ||
517 | 0x730edebc, 0x5b643113, 0x94917e4f, 0x503c2fba, 0x646f1282, 0x7523d24a, 0xe0779695, 0xf9c17a8f, | ||
518 | 0x7a5b2121, 0xd187b896, 0x29263a4d, 0xba510cdf, 0x81f47c9f, 0xad1163ed, 0xea7b5965, 0x1a00726e, | ||
519 | 0x11403092, 0x00da6d77, 0x4a0cdd61, 0xad1f4603, 0x605bdfb0, 0x9eedc364, 0x22ebe6a8, 0xcee7d28a, | ||
520 | 0xa0e736a0, 0x5564a6b9, 0x10853209, 0xc7eb8f37, 0x2de705ca, 0x8951570f, 0xdf09822b, 0xbd691a6c, | ||
521 | 0xaa12e4f2, 0x87451c0f, 0xe0f6a27a, 0x3ada4819, 0x4cf1764f, 0x0d771c2b, 0x67cdb156, 0x350d8384, | ||
522 | 0x5938fa0f, 0x42399ef3, 0x36997b07, 0x0e84093d, 0x4aa93e61, 0x8360d87b, 0x1fa98b0c, 0x1149382c, | ||
523 | 0xe97625a5, 0x0614d1b7, 0x0e25244b, 0x0c768347, 0x589e8d82, 0x0d2059d1, 0xa466bb1e, 0xf8da0a82, | ||
524 | 0x04f19130, 0xba6e4ec0, 0x99265164, 0x1ee7230d, 0x50b2ad80, 0xeaee6801, 0x8db2a283, 0xea8bf59e, | ||
525 | }, | ||
526 | } | ||
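A small usage sketch of the vendored cast5 package added above: one 8-byte block round-trip with a 16-byte key, using only the API shown in the file (`KeySize`, `BlockSize`, `NewCipher`, `Encrypt`, `Decrypt`). The all-zero key is illustrative only; in practice CAST5 is used here as a building block for OpenPGP rather than called directly on raw blocks.

```go
// Round-trip a single CAST5 block; a sketch, not a recommended usage pattern.
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/cast5"
)

func main() {
	key := make([]byte, cast5.KeySize) // 16 bytes; all-zero key for illustration only
	c, err := cast5.NewCipher(key)
	if err != nil {
		panic(err)
	}

	src := []byte("8 bytes!") // exactly one BlockSize-sized block
	dst := make([]byte, cast5.BlockSize)
	c.Encrypt(dst, src)

	out := make([]byte, cast5.BlockSize)
	c.Decrypt(out, dst)

	fmt.Println(bytes.Equal(src, out)) // true
}
```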
diff --git a/vendor/golang.org/x/crypto/openpgp/armor/armor.go b/vendor/golang.org/x/crypto/openpgp/armor/armor.go new file mode 100644 index 0000000..592d186 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/armor/armor.go | |||
@@ -0,0 +1,219 @@ | |||
1 | // Copyright 2010 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | // Package armor implements OpenPGP ASCII Armor, see RFC 4880. OpenPGP Armor is | ||
6 | // very similar to PEM except that it has an additional CRC checksum. | ||
7 | package armor // import "golang.org/x/crypto/openpgp/armor" | ||
8 | |||
9 | import ( | ||
10 | "bufio" | ||
11 | "bytes" | ||
12 | "encoding/base64" | ||
13 | "golang.org/x/crypto/openpgp/errors" | ||
14 | "io" | ||
15 | ) | ||
16 | |||
17 | // A Block represents an OpenPGP armored structure. | ||
18 | // | ||
19 | // The encoded form is: | ||
20 | // -----BEGIN Type----- | ||
21 | // Headers | ||
22 | // | ||
23 | // base64-encoded Bytes | ||
24 | // '=' base64 encoded checksum | ||
25 | // -----END Type----- | ||
26 | // where Headers is a possibly empty sequence of Key: Value lines. | ||
27 | // | ||
28 | // Since the armored data can be very large, this package presents a streaming | ||
29 | // interface. | ||
30 | type Block struct { | ||
31 | Type string // The type, taken from the preamble (e.g. "PGP SIGNATURE"). | ||
32 | Header map[string]string // Optional headers. | ||
33 | Body io.Reader // A Reader from which the contents can be read | ||
34 | lReader lineReader | ||
35 | oReader openpgpReader | ||
36 | } | ||
37 | |||
38 | var ArmorCorrupt error = errors.StructuralError("armor invalid") | ||
39 | |||
40 | const crc24Init = 0xb704ce | ||
41 | const crc24Poly = 0x1864cfb | ||
42 | const crc24Mask = 0xffffff | ||
43 | |||
44 | // crc24 calculates the OpenPGP checksum as specified in RFC 4880, section 6.1 | ||
45 | func crc24(crc uint32, d []byte) uint32 { | ||
46 | for _, b := range d { | ||
47 | crc ^= uint32(b) << 16 | ||
48 | for i := 0; i < 8; i++ { | ||
49 | crc <<= 1 | ||
50 | if crc&0x1000000 != 0 { | ||
51 | crc ^= crc24Poly | ||
52 | } | ||
53 | } | ||
54 | } | ||
55 | return crc | ||
56 | } | ||
57 | |||
58 | var armorStart = []byte("-----BEGIN ") | ||
59 | var armorEnd = []byte("-----END ") | ||
60 | var armorEndOfLine = []byte("-----") | ||
61 | |||
62 | // lineReader wraps a line based reader. It watches for the end of an armor | ||
63 | // block and records the expected CRC value. | ||
64 | type lineReader struct { | ||
65 | in *bufio.Reader | ||
66 | buf []byte | ||
67 | eof bool | ||
68 | crc uint32 | ||
69 | } | ||
70 | |||
71 | func (l *lineReader) Read(p []byte) (n int, err error) { | ||
72 | if l.eof { | ||
73 | return 0, io.EOF | ||
74 | } | ||
75 | |||
76 | if len(l.buf) > 0 { | ||
77 | n = copy(p, l.buf) | ||
78 | l.buf = l.buf[n:] | ||
79 | return | ||
80 | } | ||
81 | |||
82 | line, isPrefix, err := l.in.ReadLine() | ||
83 | if err != nil { | ||
84 | return | ||
85 | } | ||
86 | if isPrefix { | ||
87 | return 0, ArmorCorrupt | ||
88 | } | ||
89 | |||
90 | if len(line) == 5 && line[0] == '=' { | ||
91 | // This is the checksum line | ||
92 | var expectedBytes [3]byte | ||
93 | var m int | ||
94 | m, err = base64.StdEncoding.Decode(expectedBytes[0:], line[1:]) | ||
95 | if m != 3 || err != nil { | ||
96 | return | ||
97 | } | ||
98 | l.crc = uint32(expectedBytes[0])<<16 | | ||
99 | uint32(expectedBytes[1])<<8 | | ||
100 | uint32(expectedBytes[2]) | ||
101 | |||
102 | line, _, err = l.in.ReadLine() | ||
103 | if err != nil && err != io.EOF { | ||
104 | return | ||
105 | } | ||
106 | if !bytes.HasPrefix(line, armorEnd) { | ||
107 | return 0, ArmorCorrupt | ||
108 | } | ||
109 | |||
110 | l.eof = true | ||
111 | return 0, io.EOF | ||
112 | } | ||
113 | |||
114 | if len(line) > 96 { | ||
115 | return 0, ArmorCorrupt | ||
116 | } | ||
117 | |||
118 | n = copy(p, line) | ||
119 | bytesToSave := len(line) - n | ||
120 | if bytesToSave > 0 { | ||
121 | if cap(l.buf) < bytesToSave { | ||
122 | l.buf = make([]byte, 0, bytesToSave) | ||
123 | } | ||
124 | l.buf = l.buf[0:bytesToSave] | ||
125 | copy(l.buf, line[n:]) | ||
126 | } | ||
127 | |||
128 | return | ||
129 | } | ||
130 | |||
131 | // openpgpReader passes Read calls to the underlying base64 decoder, but keeps | ||
132 | // a running CRC of the resulting data and checks the CRC against the value | ||
133 | // found by the lineReader at EOF. | ||
134 | type openpgpReader struct { | ||
135 | lReader *lineReader | ||
136 | b64Reader io.Reader | ||
137 | currentCRC uint32 | ||
138 | } | ||
139 | |||
140 | func (r *openpgpReader) Read(p []byte) (n int, err error) { | ||
141 | n, err = r.b64Reader.Read(p) | ||
142 | r.currentCRC = crc24(r.currentCRC, p[:n]) | ||
143 | |||
144 | if err == io.EOF { | ||
145 | if r.lReader.crc != uint32(r.currentCRC&crc24Mask) { | ||
146 | return 0, ArmorCorrupt | ||
147 | } | ||
148 | } | ||
149 | |||
150 | return | ||
151 | } | ||
152 | |||
153 | // Decode reads a PGP armored block from the given Reader. It will ignore | ||
154 | // leading garbage. If it doesn't find a block, it will return nil, io.EOF. The | ||
155 | // given Reader is not usable after calling this function: an arbitrary amount | ||
156 | // of data may have been read past the end of the block. | ||
157 | func Decode(in io.Reader) (p *Block, err error) { | ||
158 | r := bufio.NewReaderSize(in, 100) | ||
159 | var line []byte | ||
160 | ignoreNext := false | ||
161 | |||
162 | TryNextBlock: | ||
163 | p = nil | ||
164 | |||
165 | // Skip leading garbage | ||
166 | for { | ||
167 | ignoreThis := ignoreNext | ||
168 | line, ignoreNext, err = r.ReadLine() | ||
169 | if err != nil { | ||
170 | return | ||
171 | } | ||
172 | if ignoreNext || ignoreThis { | ||
173 | continue | ||
174 | } | ||
175 | line = bytes.TrimSpace(line) | ||
176 | if len(line) > len(armorStart)+len(armorEndOfLine) && bytes.HasPrefix(line, armorStart) { | ||
177 | break | ||
178 | } | ||
179 | } | ||
180 | |||
181 | p = new(Block) | ||
182 | p.Type = string(line[len(armorStart) : len(line)-len(armorEndOfLine)]) | ||
183 | p.Header = make(map[string]string) | ||
184 | nextIsContinuation := false | ||
185 | var lastKey string | ||
186 | |||
187 | // Read headers | ||
188 | for { | ||
189 | isContinuation := nextIsContinuation | ||
190 | line, nextIsContinuation, err = r.ReadLine() | ||
191 | if err != nil { | ||
192 | p = nil | ||
193 | return | ||
194 | } | ||
195 | if isContinuation { | ||
196 | p.Header[lastKey] += string(line) | ||
197 | continue | ||
198 | } | ||
199 | line = bytes.TrimSpace(line) | ||
200 | if len(line) == 0 { | ||
201 | break | ||
202 | } | ||
203 | |||
204 | i := bytes.Index(line, []byte(": ")) | ||
205 | if i == -1 { | ||
206 | goto TryNextBlock | ||
207 | } | ||
208 | lastKey = string(line[:i]) | ||
209 | p.Header[lastKey] = string(line[i+2:]) | ||
210 | } | ||
211 | |||
212 | p.lReader.in = r | ||
213 | p.oReader.currentCRC = crc24Init | ||
214 | p.oReader.lReader = &p.lReader | ||
215 | p.oReader.b64Reader = base64.NewDecoder(base64.StdEncoding, &p.lReader) | ||
216 | p.Body = &p.oReader | ||
217 | |||
218 | return | ||
219 | } | ||
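A minimal sketch of the `Decode` entry point added above: read one armored block from stdin and print its type, headers, and body size. The input source and error handling are illustrative assumptions, not part of the vendored code.

```go
// Decode one ASCII-armored block from stdin; a sketch under the assumptions
// stated in the lead-in.
package main

import (
	"fmt"
	"io/ioutil"
	"os"

	"golang.org/x/crypto/openpgp/armor"
)

func main() {
	block, err := armor.Decode(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, "no armored block found:", err)
		os.Exit(1)
	}

	body, err := ioutil.ReadAll(block.Body) // the CRC24 is verified as the body is drained
	if err != nil {
		fmt.Fprintln(os.Stderr, "corrupt armor:", err)
		os.Exit(1)
	}
	fmt.Printf("type=%q headers=%v body=%d bytes\n", block.Type, block.Header, len(body))
}
```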
diff --git a/vendor/golang.org/x/crypto/openpgp/armor/encode.go b/vendor/golang.org/x/crypto/openpgp/armor/encode.go new file mode 100644 index 0000000..6f07582 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/armor/encode.go | |||
@@ -0,0 +1,160 @@ | |||
1 | // Copyright 2010 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package armor | ||
6 | |||
7 | import ( | ||
8 | "encoding/base64" | ||
9 | "io" | ||
10 | ) | ||
11 | |||
12 | var armorHeaderSep = []byte(": ") | ||
13 | var blockEnd = []byte("\n=") | ||
14 | var newline = []byte("\n") | ||
15 | var armorEndOfLineOut = []byte("-----\n") | ||
16 | |||
17 | // writeSlices writes its arguments to the given Writer. | ||
18 | func writeSlices(out io.Writer, slices ...[]byte) (err error) { | ||
19 | for _, s := range slices { | ||
20 | _, err = out.Write(s) | ||
21 | if err != nil { | ||
22 | return err | ||
23 | } | ||
24 | } | ||
25 | return | ||
26 | } | ||
27 | |||
28 | // lineBreaker breaks data across several lines, all of the same byte length | ||
29 | // (except possibly the last). Lines are broken with a single '\n'. | ||
30 | type lineBreaker struct { | ||
31 | lineLength int | ||
32 | line []byte | ||
33 | used int | ||
34 | out io.Writer | ||
35 | haveWritten bool | ||
36 | } | ||
37 | |||
38 | func newLineBreaker(out io.Writer, lineLength int) *lineBreaker { | ||
39 | return &lineBreaker{ | ||
40 | lineLength: lineLength, | ||
41 | line: make([]byte, lineLength), | ||
42 | used: 0, | ||
43 | out: out, | ||
44 | } | ||
45 | } | ||
46 | |||
47 | func (l *lineBreaker) Write(b []byte) (n int, err error) { | ||
48 | n = len(b) | ||
49 | |||
50 | if n == 0 { | ||
51 | return | ||
52 | } | ||
53 | |||
54 | if l.used == 0 && l.haveWritten { | ||
55 | _, err = l.out.Write([]byte{'\n'}) | ||
56 | if err != nil { | ||
57 | return | ||
58 | } | ||
59 | } | ||
60 | |||
61 | if l.used+len(b) < l.lineLength { | ||
62 | l.used += copy(l.line[l.used:], b) | ||
63 | return | ||
64 | } | ||
65 | |||
66 | l.haveWritten = true | ||
67 | _, err = l.out.Write(l.line[0:l.used]) | ||
68 | if err != nil { | ||
69 | return | ||
70 | } | ||
71 | excess := l.lineLength - l.used | ||
72 | l.used = 0 | ||
73 | |||
74 | _, err = l.out.Write(b[0:excess]) | ||
75 | if err != nil { | ||
76 | return | ||
77 | } | ||
78 | |||
79 | _, err = l.Write(b[excess:]) | ||
80 | return | ||
81 | } | ||
82 | |||
83 | func (l *lineBreaker) Close() (err error) { | ||
84 | if l.used > 0 { | ||
85 | _, err = l.out.Write(l.line[0:l.used]) | ||
86 | if err != nil { | ||
87 | return | ||
88 | } | ||
89 | } | ||
90 | |||
91 | return | ||
92 | } | ||
93 | |||
94 | // encoding keeps track of a running CRC24 over the data which has been written | ||
95 | // to it and outputs an OpenPGP checksum when closed, followed by an armor | ||
96 | // trailer. | ||
97 | // | ||
98 | // It's built into a stack of io.Writers: | ||
99 | // encoding -> base64 encoder -> lineBreaker -> out | ||
100 | type encoding struct { | ||
101 | out io.Writer | ||
102 | breaker *lineBreaker | ||
103 | b64 io.WriteCloser | ||
104 | crc uint32 | ||
105 | blockType []byte | ||
106 | } | ||
107 | |||
108 | func (e *encoding) Write(data []byte) (n int, err error) { | ||
109 | e.crc = crc24(e.crc, data) | ||
110 | return e.b64.Write(data) | ||
111 | } | ||
112 | |||
113 | func (e *encoding) Close() (err error) { | ||
114 | err = e.b64.Close() | ||
115 | if err != nil { | ||
116 | return | ||
117 | } | ||
118 | e.breaker.Close() | ||
119 | |||
120 | var checksumBytes [3]byte | ||
121 | checksumBytes[0] = byte(e.crc >> 16) | ||
122 | checksumBytes[1] = byte(e.crc >> 8) | ||
123 | checksumBytes[2] = byte(e.crc) | ||
124 | |||
125 | var b64ChecksumBytes [4]byte | ||
126 | base64.StdEncoding.Encode(b64ChecksumBytes[:], checksumBytes[:]) | ||
127 | |||
128 | return writeSlices(e.out, blockEnd, b64ChecksumBytes[:], newline, armorEnd, e.blockType, armorEndOfLine) | ||
129 | } | ||
130 | |||
131 | // Encode returns a WriteCloser which will encode the data written to it in | ||
132 | // OpenPGP armor. | ||
133 | func Encode(out io.Writer, blockType string, headers map[string]string) (w io.WriteCloser, err error) { | ||
134 | bType := []byte(blockType) | ||
135 | err = writeSlices(out, armorStart, bType, armorEndOfLineOut) | ||
136 | if err != nil { | ||
137 | return | ||
138 | } | ||
139 | |||
140 | for k, v := range headers { | ||
141 | err = writeSlices(out, []byte(k), armorHeaderSep, []byte(v), newline) | ||
142 | if err != nil { | ||
143 | return | ||
144 | } | ||
145 | } | ||
146 | |||
147 | _, err = out.Write(newline) | ||
148 | if err != nil { | ||
149 | return | ||
150 | } | ||
151 | |||
152 | e := &encoding{ | ||
153 | out: out, | ||
154 | breaker: newLineBreaker(out, 64), | ||
155 | crc: crc24Init, | ||
156 | blockType: bType, | ||
157 | } | ||
158 | e.b64 = base64.NewEncoder(base64.StdEncoding, e.breaker) | ||
159 | return e, nil | ||
160 | } | ||
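An end-to-end sketch tying together the `Encode` and `Decode` halves of the armor package added above: armor a payload, then decode it and recover the type, headers, and body. The block type "PGP MESSAGE" and the "Version" header are arbitrary example values.

```go
// Armor a payload and read it back; a sketch with arbitrary example values.
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"

	"golang.org/x/crypto/openpgp/armor"
)

func main() {
	var buf bytes.Buffer

	w, err := armor.Encode(&buf, "PGP MESSAGE", map[string]string{"Version": "example"})
	if err != nil {
		panic(err)
	}
	w.Write([]byte("hello, armor"))
	w.Close() // emits the '=' CRC24 line and the "-----END ...-----" trailer

	block, err := armor.Decode(&buf)
	if err != nil {
		panic(err)
	}
	body, err := ioutil.ReadAll(block.Body) // checksum verified at EOF
	if err != nil {
		panic(err)
	}
	fmt.Println(block.Type, block.Header["Version"], string(body))
	// PGP MESSAGE example hello, armor
}
```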
diff --git a/vendor/golang.org/x/crypto/openpgp/canonical_text.go b/vendor/golang.org/x/crypto/openpgp/canonical_text.go new file mode 100644 index 0000000..e601e38 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/canonical_text.go | |||
@@ -0,0 +1,59 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package openpgp | ||
6 | |||
7 | import "hash" | ||
8 | |||
9 | // NewCanonicalTextHash reformats text written to it into the canonical | ||
10 | // form and then applies the hash h. See RFC 4880, section 5.2.1. | ||
11 | func NewCanonicalTextHash(h hash.Hash) hash.Hash { | ||
12 | return &canonicalTextHash{h, 0} | ||
13 | } | ||
14 | |||
15 | type canonicalTextHash struct { | ||
16 | h hash.Hash | ||
17 | s int | ||
18 | } | ||
19 | |||
20 | var newline = []byte{'\r', '\n'} | ||
21 | |||
22 | func (cth *canonicalTextHash) Write(buf []byte) (int, error) { | ||
23 | start := 0 | ||
24 | |||
25 | for i, c := range buf { | ||
26 | switch cth.s { | ||
27 | case 0: | ||
28 | if c == '\r' { | ||
29 | cth.s = 1 | ||
30 | } else if c == '\n' { | ||
31 | cth.h.Write(buf[start:i]) | ||
32 | cth.h.Write(newline) | ||
33 | start = i + 1 | ||
34 | } | ||
35 | case 1: | ||
36 | cth.s = 0 | ||
37 | } | ||
38 | } | ||
39 | |||
40 | cth.h.Write(buf[start:]) | ||
41 | return len(buf), nil | ||
42 | } | ||
43 | |||
44 | func (cth *canonicalTextHash) Sum(in []byte) []byte { | ||
45 | return cth.h.Sum(in) | ||
46 | } | ||
47 | |||
48 | func (cth *canonicalTextHash) Reset() { | ||
49 | cth.h.Reset() | ||
50 | cth.s = 0 | ||
51 | } | ||
52 | |||
53 | func (cth *canonicalTextHash) Size() int { | ||
54 | return cth.h.Size() | ||
55 | } | ||
56 | |||
57 | func (cth *canonicalTextHash) BlockSize() int { | ||
58 | return cth.h.BlockSize() | ||
59 | } | ||
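A small sketch of the canonical-text hashing added above: per RFC 4880 section 5.2.1, text is hashed with CRLF line endings, so "\n" and "\r\n" inputs yield the same digest. The choice of sha256 here is only for illustration; in the openpgp code the hash is selected from the signature packet.

```go
// Show that canonical-text hashing normalizes line endings; a sketch only.
package main

import (
	"crypto/sha256"
	"fmt"

	"golang.org/x/crypto/openpgp"
)

func main() {
	h1 := openpgp.NewCanonicalTextHash(sha256.New())
	h1.Write([]byte("hello\nworld"))

	h2 := openpgp.NewCanonicalTextHash(sha256.New())
	h2.Write([]byte("hello\r\nworld"))

	fmt.Printf("%x\n%x\n", h1.Sum(nil), h2.Sum(nil)) // identical digests
}
```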
diff --git a/vendor/golang.org/x/crypto/openpgp/elgamal/elgamal.go b/vendor/golang.org/x/crypto/openpgp/elgamal/elgamal.go new file mode 100644 index 0000000..73f4fe3 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/elgamal/elgamal.go | |||
@@ -0,0 +1,122 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | // Package elgamal implements ElGamal encryption, suitable for OpenPGP, | ||
6 | // as specified in "A Public-Key Cryptosystem and a Signature Scheme Based on | ||
7 | // Discrete Logarithms," IEEE Transactions on Information Theory, v. IT-31, | ||
8 | // n. 4, 1985, pp. 469-472. | ||
9 | // | ||
10 | // This form of ElGamal embeds PKCS#1 v1.5 padding, which may make it | ||
11 | // unsuitable for other protocols. RSA should be used in preference in any | ||
12 | // case. | ||
13 | package elgamal // import "golang.org/x/crypto/openpgp/elgamal" | ||
14 | |||
15 | import ( | ||
16 | "crypto/rand" | ||
17 | "crypto/subtle" | ||
18 | "errors" | ||
19 | "io" | ||
20 | "math/big" | ||
21 | ) | ||
22 | |||
23 | // PublicKey represents an ElGamal public key. | ||
24 | type PublicKey struct { | ||
25 | G, P, Y *big.Int | ||
26 | } | ||
27 | |||
28 | // PrivateKey represents an ElGamal private key. | ||
29 | type PrivateKey struct { | ||
30 | PublicKey | ||
31 | X *big.Int | ||
32 | } | ||
33 | |||
34 | // Encrypt encrypts the given message to the given public key. The result is a | ||
35 | // pair of integers. Errors can result from reading random, or because msg is | ||
36 | // too large to be encrypted to the public key. | ||
37 | func Encrypt(random io.Reader, pub *PublicKey, msg []byte) (c1, c2 *big.Int, err error) { | ||
38 | pLen := (pub.P.BitLen() + 7) / 8 | ||
39 | if len(msg) > pLen-11 { | ||
40 | err = errors.New("elgamal: message too long") | ||
41 | return | ||
42 | } | ||
43 | |||
44 | // EM = 0x02 || PS || 0x00 || M | ||
45 | em := make([]byte, pLen-1) | ||
46 | em[0] = 2 | ||
47 | ps, mm := em[1:len(em)-len(msg)-1], em[len(em)-len(msg):] | ||
48 | err = nonZeroRandomBytes(ps, random) | ||
49 | if err != nil { | ||
50 | return | ||
51 | } | ||
52 | em[len(em)-len(msg)-1] = 0 | ||
53 | copy(mm, msg) | ||
54 | |||
55 | m := new(big.Int).SetBytes(em) | ||
56 | |||
57 | k, err := rand.Int(random, pub.P) | ||
58 | if err != nil { | ||
59 | return | ||
60 | } | ||
61 | |||
62 | c1 = new(big.Int).Exp(pub.G, k, pub.P) | ||
63 | s := new(big.Int).Exp(pub.Y, k, pub.P) | ||
64 | c2 = s.Mul(s, m) | ||
65 | c2.Mod(c2, pub.P) | ||
66 | |||
67 | return | ||
68 | } | ||
69 | |||
70 | // Decrypt takes two integers, resulting from an ElGamal encryption, and | ||
71 | // returns the plaintext of the message. An error can result only if the | ||
72 | // ciphertext is invalid. Users should keep in mind that this is a padding | ||
73 | // oracle and thus, if exposed to an adaptive chosen ciphertext attack, can | ||
74 | // be used to break the cryptosystem. See ``Chosen Ciphertext Attacks | ||
75 | // Against Protocols Based on the RSA Encryption Standard PKCS #1'', Daniel | ||
76 | // Bleichenbacher, Advances in Cryptology (Crypto '98). | ||
77 | func Decrypt(priv *PrivateKey, c1, c2 *big.Int) (msg []byte, err error) { | ||
78 | s := new(big.Int).Exp(c1, priv.X, priv.P) | ||
79 | s.ModInverse(s, priv.P) | ||
80 | s.Mul(s, c2) | ||
81 | s.Mod(s, priv.P) | ||
82 | em := s.Bytes() | ||
83 | |||
84 | firstByteIsTwo := subtle.ConstantTimeByteEq(em[0], 2) | ||
85 | |||
86 | // The remainder of the plaintext must be a string of non-zero random | ||
87 | // octets, followed by a 0, followed by the message. | ||
88 | // lookingForIndex: 1 iff we are still looking for the zero. | ||
89 | // index: the offset of the first zero byte. | ||
90 | var lookingForIndex, index int | ||
91 | lookingForIndex = 1 | ||
92 | |||
93 | for i := 1; i < len(em); i++ { | ||
94 | equals0 := subtle.ConstantTimeByteEq(em[i], 0) | ||
95 | index = subtle.ConstantTimeSelect(lookingForIndex&equals0, i, index) | ||
96 | lookingForIndex = subtle.ConstantTimeSelect(equals0, 0, lookingForIndex) | ||
97 | } | ||
98 | |||
99 | if firstByteIsTwo != 1 || lookingForIndex != 0 || index < 9 { | ||
100 | return nil, errors.New("elgamal: decryption error") | ||
101 | } | ||
102 | return em[index+1:], nil | ||
103 | } | ||
104 | |||
105 | // nonZeroRandomBytes fills the given slice with non-zero random octets. | ||
106 | func nonZeroRandomBytes(s []byte, rand io.Reader) (err error) { | ||
107 | _, err = io.ReadFull(rand, s) | ||
108 | if err != nil { | ||
109 | return | ||
110 | } | ||
111 | |||
112 | for i := 0; i < len(s); i++ { | ||
113 | for s[i] == 0 { | ||
114 | _, err = io.ReadFull(rand, s[i:i+1]) | ||
115 | if err != nil { | ||
116 | return | ||
117 | } | ||
118 | } | ||
119 | } | ||
120 | |||
121 | return | ||
122 | } | ||
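Editorial note: the package above only implements the raw ElGamal operations over caller-supplied group parameters; in OpenPGP those parameters come from the key material. A minimal round-trip sketch follows — the 256-bit group and generator 2 are toy placeholders chosen only to exercise the API and are nowhere near secure:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"

	"golang.org/x/crypto/openpgp/elgamal"
)

func main() {
	// Toy group: a random 256-bit prime with generator 2. Real keys use
	// much larger, carefully chosen parameters.
	p, err := rand.Prime(rand.Reader, 256)
	if err != nil {
		panic(err)
	}
	g := big.NewInt(2)
	x, err := rand.Int(rand.Reader, p) // private exponent
	if err != nil {
		panic(err)
	}
	priv := &elgamal.PrivateKey{
		PublicKey: elgamal.PublicKey{G: g, P: p, Y: new(big.Int).Exp(g, x, p)},
		X:         x,
	}

	// The message must fit in pLen-11 bytes (21 bytes for a 256-bit prime).
	msg := []byte("attack at dawn")
	c1, c2, err := elgamal.Encrypt(rand.Reader, &priv.PublicKey, msg)
	if err != nil {
		panic(err)
	}
	plain, err := elgamal.Decrypt(priv, c1, c2)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", plain) // attack at dawn
}
```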
diff --git a/vendor/golang.org/x/crypto/openpgp/errors/errors.go b/vendor/golang.org/x/crypto/openpgp/errors/errors.go new file mode 100644 index 0000000..eb0550b --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/errors/errors.go | |||
@@ -0,0 +1,72 @@ | |||
1 | // Copyright 2010 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | // Package errors contains common error types for the OpenPGP packages. | ||
6 | package errors // import "golang.org/x/crypto/openpgp/errors" | ||
7 | |||
8 | import ( | ||
9 | "strconv" | ||
10 | ) | ||
11 | |||
12 | // A StructuralError is returned when OpenPGP data is found to be syntactically | ||
13 | // invalid. | ||
14 | type StructuralError string | ||
15 | |||
16 | func (s StructuralError) Error() string { | ||
17 | return "openpgp: invalid data: " + string(s) | ||
18 | } | ||
19 | |||
20 | // UnsupportedError indicates that, although the OpenPGP data is valid, it | ||
21 | // makes use of currently unimplemented features. | ||
22 | type UnsupportedError string | ||
23 | |||
24 | func (s UnsupportedError) Error() string { | ||
25 | return "openpgp: unsupported feature: " + string(s) | ||
26 | } | ||
27 | |||
28 | // InvalidArgumentError indicates that the caller is in error and passed an | ||
29 | // incorrect value. | ||
30 | type InvalidArgumentError string | ||
31 | |||
32 | func (i InvalidArgumentError) Error() string { | ||
33 | return "openpgp: invalid argument: " + string(i) | ||
34 | } | ||
35 | |||
36 | // SignatureError indicates that a syntactically valid signature failed to | ||
37 | // validate. | ||
38 | type SignatureError string | ||
39 | |||
40 | func (b SignatureError) Error() string { | ||
41 | return "openpgp: invalid signature: " + string(b) | ||
42 | } | ||
43 | |||
44 | type keyIncorrectError int | ||
45 | |||
46 | func (ki keyIncorrectError) Error() string { | ||
47 | return "openpgp: incorrect key" | ||
48 | } | ||
49 | |||
50 | var ErrKeyIncorrect error = keyIncorrectError(0) | ||
51 | |||
52 | type unknownIssuerError int | ||
53 | |||
54 | func (unknownIssuerError) Error() string { | ||
55 | return "openpgp: signature made by unknown entity" | ||
56 | } | ||
57 | |||
58 | var ErrUnknownIssuer error = unknownIssuerError(0) | ||
59 | |||
60 | type keyRevokedError int | ||
61 | |||
62 | func (keyRevokedError) Error() string { | ||
63 | return "openpgp: signature made by revoked key" | ||
64 | } | ||
65 | |||
66 | var ErrKeyRevoked error = keyRevokedError(0) | ||
67 | |||
68 | type UnknownPacketTypeError uint8 | ||
69 | |||
70 | func (upte UnknownPacketTypeError) Error() string { | ||
71 | return "openpgp: unknown packet type: " + strconv.Itoa(int(upte)) | ||
72 | } | ||
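Editorial note: callers typically branch on these concrete error types and sentinel values rather than on error strings. A small standalone sketch of that pattern (the diagnostic messages are illustrative, not part of the package):

```go
package main

import (
	"fmt"

	pgperrors "golang.org/x/crypto/openpgp/errors"
)

// classify maps the package's error types and sentinels to short
// human-readable diagnostics.
func classify(err error) string {
	switch err.(type) {
	case pgperrors.StructuralError:
		return "malformed OpenPGP data"
	case pgperrors.UnsupportedError:
		return "valid data, but it uses an unimplemented feature"
	case pgperrors.SignatureError:
		return "signature failed to verify"
	}
	switch err {
	case pgperrors.ErrKeyIncorrect:
		return "message was not encrypted to this key"
	case pgperrors.ErrUnknownIssuer:
		return "signature made by an unknown key"
	case pgperrors.ErrKeyRevoked:
		return "signature made by a revoked key"
	}
	return err.Error()
}

func main() {
	fmt.Println(classify(pgperrors.UnsupportedError("ECDH keys")))
	fmt.Println(classify(pgperrors.ErrUnknownIssuer))
}
```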
diff --git a/vendor/golang.org/x/crypto/openpgp/keys.go b/vendor/golang.org/x/crypto/openpgp/keys.go new file mode 100644 index 0000000..68b14c6 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/keys.go | |||
@@ -0,0 +1,637 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package openpgp | ||
6 | |||
7 | import ( | ||
8 | "crypto/rsa" | ||
9 | "io" | ||
10 | "time" | ||
11 | |||
12 | "golang.org/x/crypto/openpgp/armor" | ||
13 | "golang.org/x/crypto/openpgp/errors" | ||
14 | "golang.org/x/crypto/openpgp/packet" | ||
15 | ) | ||
16 | |||
17 | // PublicKeyType is the armor type for a PGP public key. | ||
18 | var PublicKeyType = "PGP PUBLIC KEY BLOCK" | ||
19 | |||
20 | // PrivateKeyType is the armor type for a PGP private key. | ||
21 | var PrivateKeyType = "PGP PRIVATE KEY BLOCK" | ||
22 | |||
23 | // An Entity represents the components of an OpenPGP key: a primary public key | ||
24 | // (which must be a signing key), one or more identities claimed by that key, | ||
25 | // and zero or more subkeys, which may be encryption keys. | ||
26 | type Entity struct { | ||
27 | PrimaryKey *packet.PublicKey | ||
28 | PrivateKey *packet.PrivateKey | ||
29 | Identities map[string]*Identity // indexed by Identity.Name | ||
30 | Revocations []*packet.Signature | ||
31 | Subkeys []Subkey | ||
32 | } | ||
33 | |||
34 | // An Identity represents an identity claimed by an Entity and zero or more | ||
35 | // assertions by other entities about that claim. | ||
36 | type Identity struct { | ||
37 | Name string // by convention, has the form "Full Name (comment) <email@example.com>" | ||
38 | UserId *packet.UserId | ||
39 | SelfSignature *packet.Signature | ||
40 | Signatures []*packet.Signature | ||
41 | } | ||
42 | |||
43 | // A Subkey is an additional public key in an Entity. Subkeys can be used for | ||
44 | // encryption. | ||
45 | type Subkey struct { | ||
46 | PublicKey *packet.PublicKey | ||
47 | PrivateKey *packet.PrivateKey | ||
48 | Sig *packet.Signature | ||
49 | } | ||
50 | |||
51 | // A Key identifies a specific public key in an Entity. This is either the | ||
52 | // Entity's primary key or a subkey. | ||
53 | type Key struct { | ||
54 | Entity *Entity | ||
55 | PublicKey *packet.PublicKey | ||
56 | PrivateKey *packet.PrivateKey | ||
57 | SelfSignature *packet.Signature | ||
58 | } | ||
59 | |||
60 | // A KeyRing provides access to public and private keys. | ||
61 | type KeyRing interface { | ||
62 | // KeysById returns the set of keys that have the given key id. | ||
63 | KeysById(id uint64) []Key | ||
64 | // KeysByIdUsage returns the set of keys with the given id | ||
65 | // that also meet the key usage given by requiredUsage. | ||
66 | // The requiredUsage is expressed as the bitwise-OR of | ||
67 | // packet.KeyFlag* values. | ||
68 | KeysByIdUsage(id uint64, requiredUsage byte) []Key | ||
69 | // DecryptionKeys returns all private keys that are valid for | ||
70 | // decryption. | ||
71 | DecryptionKeys() []Key | ||
72 | } | ||
73 | |||
74 | // primaryIdentity returns the Identity marked as primary or the first identity | ||
75 | // if none are so marked. | ||
76 | func (e *Entity) primaryIdentity() *Identity { | ||
77 | var firstIdentity *Identity | ||
78 | for _, ident := range e.Identities { | ||
79 | if firstIdentity == nil { | ||
80 | firstIdentity = ident | ||
81 | } | ||
82 | if ident.SelfSignature.IsPrimaryId != nil && *ident.SelfSignature.IsPrimaryId { | ||
83 | return ident | ||
84 | } | ||
85 | } | ||
86 | return firstIdentity | ||
87 | } | ||
88 | |||
89 | // encryptionKey returns the best candidate Key for encrypting a message to the | ||
90 | // given Entity. | ||
91 | func (e *Entity) encryptionKey(now time.Time) (Key, bool) { | ||
92 | candidateSubkey := -1 | ||
93 | |||
94 | // Iterate the keys to find the newest key | ||
95 | var maxTime time.Time | ||
96 | for i, subkey := range e.Subkeys { | ||
97 | if subkey.Sig.FlagsValid && | ||
98 | subkey.Sig.FlagEncryptCommunications && | ||
99 | subkey.PublicKey.PubKeyAlgo.CanEncrypt() && | ||
100 | !subkey.Sig.KeyExpired(now) && | ||
101 | (maxTime.IsZero() || subkey.Sig.CreationTime.After(maxTime)) { | ||
102 | candidateSubkey = i | ||
103 | maxTime = subkey.Sig.CreationTime | ||
104 | } | ||
105 | } | ||
106 | |||
107 | if candidateSubkey != -1 { | ||
108 | subkey := e.Subkeys[candidateSubkey] | ||
109 | return Key{e, subkey.PublicKey, subkey.PrivateKey, subkey.Sig}, true | ||
110 | } | ||
111 | |||
112 | // If we don't have any candidate subkeys for encryption and | ||
113 | // the primary key doesn't have any usage metadata then we | ||
114 | // assume that the primary key is ok. Or, if the primary key is | ||
115 | // marked as ok to encrypt to, then we can obviously use it. | ||
116 | i := e.primaryIdentity() | ||
117 | if !i.SelfSignature.FlagsValid || i.SelfSignature.FlagEncryptCommunications && | ||
118 | e.PrimaryKey.PubKeyAlgo.CanEncrypt() && | ||
119 | !i.SelfSignature.KeyExpired(now) { | ||
120 | return Key{e, e.PrimaryKey, e.PrivateKey, i.SelfSignature}, true | ||
121 | } | ||
122 | |||
123 | // This Entity appears to be signing only. | ||
124 | return Key{}, false | ||
125 | } | ||
126 | |||
127 | // signingKey returns the best candidate Key for signing a message with this | ||
128 | // Entity. | ||
129 | func (e *Entity) signingKey(now time.Time) (Key, bool) { | ||
130 | candidateSubkey := -1 | ||
131 | |||
132 | for i, subkey := range e.Subkeys { | ||
133 | if subkey.Sig.FlagsValid && | ||
134 | subkey.Sig.FlagSign && | ||
135 | subkey.PublicKey.PubKeyAlgo.CanSign() && | ||
136 | !subkey.Sig.KeyExpired(now) { | ||
137 | candidateSubkey = i | ||
138 | break | ||
139 | } | ||
140 | } | ||
141 | |||
142 | if candidateSubkey != -1 { | ||
143 | subkey := e.Subkeys[candidateSubkey] | ||
144 | return Key{e, subkey.PublicKey, subkey.PrivateKey, subkey.Sig}, true | ||
145 | } | ||
146 | |||
147 | // If we have no candidate subkey then we assume that it's ok to sign | ||
148 | // with the primary key. | ||
149 | i := e.primaryIdentity() | ||
150 | if !i.SelfSignature.FlagsValid || i.SelfSignature.FlagSign && | ||
151 | !i.SelfSignature.KeyExpired(now) { | ||
152 | return Key{e, e.PrimaryKey, e.PrivateKey, i.SelfSignature}, true | ||
153 | } | ||
154 | |||
155 | return Key{}, false | ||
156 | } | ||
157 | |||
158 | // An EntityList contains one or more Entities. | ||
159 | type EntityList []*Entity | ||
160 | |||
161 | // KeysById returns the set of keys that have the given key id. | ||
162 | func (el EntityList) KeysById(id uint64) (keys []Key) { | ||
163 | for _, e := range el { | ||
164 | if e.PrimaryKey.KeyId == id { | ||
165 | var selfSig *packet.Signature | ||
166 | for _, ident := range e.Identities { | ||
167 | if selfSig == nil { | ||
168 | selfSig = ident.SelfSignature | ||
169 | } else if ident.SelfSignature.IsPrimaryId != nil && *ident.SelfSignature.IsPrimaryId { | ||
170 | selfSig = ident.SelfSignature | ||
171 | break | ||
172 | } | ||
173 | } | ||
174 | keys = append(keys, Key{e, e.PrimaryKey, e.PrivateKey, selfSig}) | ||
175 | } | ||
176 | |||
177 | for _, subKey := range e.Subkeys { | ||
178 | if subKey.PublicKey.KeyId == id { | ||
179 | keys = append(keys, Key{e, subKey.PublicKey, subKey.PrivateKey, subKey.Sig}) | ||
180 | } | ||
181 | } | ||
182 | } | ||
183 | return | ||
184 | } | ||
185 | |||
186 | // KeysByIdUsage returns the set of keys with the given id that also meet | ||
187 | // the key usage given by requiredUsage. The requiredUsage is expressed as | ||
188 | // the bitwise-OR of packet.KeyFlag* values. | ||
189 | func (el EntityList) KeysByIdUsage(id uint64, requiredUsage byte) (keys []Key) { | ||
190 | for _, key := range el.KeysById(id) { | ||
191 | if len(key.Entity.Revocations) > 0 { | ||
192 | continue | ||
193 | } | ||
194 | |||
195 | if key.SelfSignature.RevocationReason != nil { | ||
196 | continue | ||
197 | } | ||
198 | |||
199 | if key.SelfSignature.FlagsValid && requiredUsage != 0 { | ||
200 | var usage byte | ||
201 | if key.SelfSignature.FlagCertify { | ||
202 | usage |= packet.KeyFlagCertify | ||
203 | } | ||
204 | if key.SelfSignature.FlagSign { | ||
205 | usage |= packet.KeyFlagSign | ||
206 | } | ||
207 | if key.SelfSignature.FlagEncryptCommunications { | ||
208 | usage |= packet.KeyFlagEncryptCommunications | ||
209 | } | ||
210 | if key.SelfSignature.FlagEncryptStorage { | ||
211 | usage |= packet.KeyFlagEncryptStorage | ||
212 | } | ||
213 | if usage&requiredUsage != requiredUsage { | ||
214 | continue | ||
215 | } | ||
216 | } | ||
217 | |||
218 | keys = append(keys, key) | ||
219 | } | ||
220 | return | ||
221 | } | ||
222 | |||
223 | // DecryptionKeys returns all private keys that are valid for decryption. | ||
224 | func (el EntityList) DecryptionKeys() (keys []Key) { | ||
225 | for _, e := range el { | ||
226 | for _, subKey := range e.Subkeys { | ||
227 | if subKey.PrivateKey != nil && (!subKey.Sig.FlagsValid || subKey.Sig.FlagEncryptStorage || subKey.Sig.FlagEncryptCommunications) { | ||
228 | keys = append(keys, Key{e, subKey.PublicKey, subKey.PrivateKey, subKey.Sig}) | ||
229 | } | ||
230 | } | ||
231 | } | ||
232 | return | ||
233 | } | ||
234 | |||
235 | // ReadArmoredKeyRing reads one or more public/private keys from an armored keyring. | ||
236 | func ReadArmoredKeyRing(r io.Reader) (EntityList, error) { | ||
237 | block, err := armor.Decode(r) | ||
238 | if err == io.EOF { | ||
239 | return nil, errors.InvalidArgumentError("no armored data found") | ||
240 | } | ||
241 | if err != nil { | ||
242 | return nil, err | ||
243 | } | ||
244 | if block.Type != PublicKeyType && block.Type != PrivateKeyType { | ||
245 | return nil, errors.InvalidArgumentError("expected public or private key block, got: " + block.Type) | ||
246 | } | ||
247 | |||
248 | return ReadKeyRing(block.Body) | ||
249 | } | ||
250 | |||
251 | // ReadKeyRing reads one or more public/private keys. Unsupported keys are | ||
252 | // ignored as long as at least a single valid key is found. | ||
253 | func ReadKeyRing(r io.Reader) (el EntityList, err error) { | ||
254 | packets := packet.NewReader(r) | ||
255 | var lastUnsupportedError error | ||
256 | |||
257 | for { | ||
258 | var e *Entity | ||
259 | e, err = ReadEntity(packets) | ||
260 | if err != nil { | ||
261 | // TODO: warn about skipped unsupported/unreadable keys | ||
262 | if _, ok := err.(errors.UnsupportedError); ok { | ||
263 | lastUnsupportedError = err | ||
264 | err = readToNextPublicKey(packets) | ||
265 | } else if _, ok := err.(errors.StructuralError); ok { | ||
266 | // Skip unreadable, badly-formatted keys | ||
267 | lastUnsupportedError = err | ||
268 | err = readToNextPublicKey(packets) | ||
269 | } | ||
270 | if err == io.EOF { | ||
271 | err = nil | ||
272 | break | ||
273 | } | ||
274 | if err != nil { | ||
275 | el = nil | ||
276 | break | ||
277 | } | ||
278 | } else { | ||
279 | el = append(el, e) | ||
280 | } | ||
281 | } | ||
282 | |||
283 | if len(el) == 0 && err == nil { | ||
284 | err = lastUnsupportedError | ||
285 | } | ||
286 | return | ||
287 | } | ||
288 | |||
289 | // readToNextPublicKey reads packets until the start of the next entity and leaves | ||
290 | // the first packet of the new entity in the Reader. | ||
291 | func readToNextPublicKey(packets *packet.Reader) (err error) { | ||
292 | var p packet.Packet | ||
293 | for { | ||
294 | p, err = packets.Next() | ||
295 | if err == io.EOF { | ||
296 | return | ||
297 | } else if err != nil { | ||
298 | if _, ok := err.(errors.UnsupportedError); ok { | ||
299 | err = nil | ||
300 | continue | ||
301 | } | ||
302 | return | ||
303 | } | ||
304 | |||
305 | if pk, ok := p.(*packet.PublicKey); ok && !pk.IsSubkey { | ||
306 | packets.Unread(p) | ||
307 | return | ||
308 | } | ||
309 | } | ||
310 | } | ||
311 | |||
312 | // ReadEntity reads an entity (public key, identities, subkeys etc) from the | ||
313 | // given Reader. | ||
314 | func ReadEntity(packets *packet.Reader) (*Entity, error) { | ||
315 | e := new(Entity) | ||
316 | e.Identities = make(map[string]*Identity) | ||
317 | |||
318 | p, err := packets.Next() | ||
319 | if err != nil { | ||
320 | return nil, err | ||
321 | } | ||
322 | |||
323 | var ok bool | ||
324 | if e.PrimaryKey, ok = p.(*packet.PublicKey); !ok { | ||
325 | if e.PrivateKey, ok = p.(*packet.PrivateKey); !ok { | ||
326 | packets.Unread(p) | ||
327 | return nil, errors.StructuralError("first packet was not a public/private key") | ||
328 | } else { | ||
329 | e.PrimaryKey = &e.PrivateKey.PublicKey | ||
330 | } | ||
331 | } | ||
332 | |||
333 | if !e.PrimaryKey.PubKeyAlgo.CanSign() { | ||
334 | return nil, errors.StructuralError("primary key cannot be used for signatures") | ||
335 | } | ||
336 | |||
337 | var current *Identity | ||
338 | var revocations []*packet.Signature | ||
339 | EachPacket: | ||
340 | for { | ||
341 | p, err := packets.Next() | ||
342 | if err == io.EOF { | ||
343 | break | ||
344 | } else if err != nil { | ||
345 | return nil, err | ||
346 | } | ||
347 | |||
348 | switch pkt := p.(type) { | ||
349 | case *packet.UserId: | ||
350 | current = new(Identity) | ||
351 | current.Name = pkt.Id | ||
352 | current.UserId = pkt | ||
353 | e.Identities[pkt.Id] = current | ||
354 | |||
355 | for { | ||
356 | p, err = packets.Next() | ||
357 | if err == io.EOF { | ||
358 | return nil, io.ErrUnexpectedEOF | ||
359 | } else if err != nil { | ||
360 | return nil, err | ||
361 | } | ||
362 | |||
363 | sig, ok := p.(*packet.Signature) | ||
364 | if !ok { | ||
365 | return nil, errors.StructuralError("user ID packet not followed by self-signature") | ||
366 | } | ||
367 | |||
368 | if (sig.SigType == packet.SigTypePositiveCert || sig.SigType == packet.SigTypeGenericCert) && sig.IssuerKeyId != nil && *sig.IssuerKeyId == e.PrimaryKey.KeyId { | ||
369 | if err = e.PrimaryKey.VerifyUserIdSignature(pkt.Id, e.PrimaryKey, sig); err != nil { | ||
370 | return nil, errors.StructuralError("user ID self-signature invalid: " + err.Error()) | ||
371 | } | ||
372 | current.SelfSignature = sig | ||
373 | break | ||
374 | } | ||
375 | current.Signatures = append(current.Signatures, sig) | ||
376 | } | ||
377 | case *packet.Signature: | ||
378 | if pkt.SigType == packet.SigTypeKeyRevocation { | ||
379 | revocations = append(revocations, pkt) | ||
380 | } else if pkt.SigType == packet.SigTypeDirectSignature { | ||
381 | // TODO: RFC4880 5.2.1 permits signatures | ||
382 | // directly on keys (eg. to bind additional | ||
383 | // revocation keys). | ||
384 | } else if current == nil { | ||
385 | return nil, errors.StructuralError("signature packet found before user id packet") | ||
386 | } else { | ||
387 | current.Signatures = append(current.Signatures, pkt) | ||
388 | } | ||
389 | case *packet.PrivateKey: | ||
390 | if pkt.IsSubkey == false { | ||
391 | packets.Unread(p) | ||
392 | break EachPacket | ||
393 | } | ||
394 | err = addSubkey(e, packets, &pkt.PublicKey, pkt) | ||
395 | if err != nil { | ||
396 | return nil, err | ||
397 | } | ||
398 | case *packet.PublicKey: | ||
399 | if pkt.IsSubkey == false { | ||
400 | packets.Unread(p) | ||
401 | break EachPacket | ||
402 | } | ||
403 | err = addSubkey(e, packets, pkt, nil) | ||
404 | if err != nil { | ||
405 | return nil, err | ||
406 | } | ||
407 | default: | ||
408 | // we ignore unknown packets | ||
409 | } | ||
410 | } | ||
411 | |||
412 | if len(e.Identities) == 0 { | ||
413 | return nil, errors.StructuralError("entity without any identities") | ||
414 | } | ||
415 | |||
416 | for _, revocation := range revocations { | ||
417 | err = e.PrimaryKey.VerifyRevocationSignature(revocation) | ||
418 | if err == nil { | ||
419 | e.Revocations = append(e.Revocations, revocation) | ||
420 | } else { | ||
421 | // TODO: RFC 4880 5.2.3.15 defines revocation keys. | ||
422 | return nil, errors.StructuralError("revocation signature signed by alternate key") | ||
423 | } | ||
424 | } | ||
425 | |||
426 | return e, nil | ||
427 | } | ||
428 | |||
429 | func addSubkey(e *Entity, packets *packet.Reader, pub *packet.PublicKey, priv *packet.PrivateKey) error { | ||
430 | var subKey Subkey | ||
431 | subKey.PublicKey = pub | ||
432 | subKey.PrivateKey = priv | ||
433 | p, err := packets.Next() | ||
434 | if err == io.EOF { | ||
435 | return io.ErrUnexpectedEOF | ||
436 | } | ||
437 | if err != nil { | ||
438 | return errors.StructuralError("subkey signature invalid: " + err.Error()) | ||
439 | } | ||
440 | var ok bool | ||
441 | subKey.Sig, ok = p.(*packet.Signature) | ||
442 | if !ok { | ||
443 | return errors.StructuralError("subkey packet not followed by signature") | ||
444 | } | ||
445 | if subKey.Sig.SigType != packet.SigTypeSubkeyBinding && subKey.Sig.SigType != packet.SigTypeSubkeyRevocation { | ||
446 | return errors.StructuralError("subkey signature with wrong type") | ||
447 | } | ||
448 | err = e.PrimaryKey.VerifyKeySignature(subKey.PublicKey, subKey.Sig) | ||
449 | if err != nil { | ||
450 | return errors.StructuralError("subkey signature invalid: " + err.Error()) | ||
451 | } | ||
452 | e.Subkeys = append(e.Subkeys, subKey) | ||
453 | return nil | ||
454 | } | ||
455 | |||
456 | const defaultRSAKeyBits = 2048 | ||
457 | |||
458 | // NewEntity returns an Entity that contains a fresh RSA/RSA keypair with a | ||
459 | // single identity composed of the given full name, comment and email, any of | ||
460 | // which may be empty but must not contain any of "()<>\x00". | ||
461 | // If config is nil, sensible defaults will be used. | ||
462 | func NewEntity(name, comment, email string, config *packet.Config) (*Entity, error) { | ||
463 | currentTime := config.Now() | ||
464 | |||
465 | bits := defaultRSAKeyBits | ||
466 | if config != nil && config.RSABits != 0 { | ||
467 | bits = config.RSABits | ||
468 | } | ||
469 | |||
470 | uid := packet.NewUserId(name, comment, email) | ||
471 | if uid == nil { | ||
472 | return nil, errors.InvalidArgumentError("user id field contained invalid characters") | ||
473 | } | ||
474 | signingPriv, err := rsa.GenerateKey(config.Random(), bits) | ||
475 | if err != nil { | ||
476 | return nil, err | ||
477 | } | ||
478 | encryptingPriv, err := rsa.GenerateKey(config.Random(), bits) | ||
479 | if err != nil { | ||
480 | return nil, err | ||
481 | } | ||
482 | |||
483 | e := &Entity{ | ||
484 | PrimaryKey: packet.NewRSAPublicKey(currentTime, &signingPriv.PublicKey), | ||
485 | PrivateKey: packet.NewRSAPrivateKey(currentTime, signingPriv), | ||
486 | Identities: make(map[string]*Identity), | ||
487 | } | ||
488 | isPrimaryId := true | ||
489 | e.Identities[uid.Id] = &Identity{ | ||
490 | Name: uid.Name, | ||
491 | UserId: uid, | ||
492 | SelfSignature: &packet.Signature{ | ||
493 | CreationTime: currentTime, | ||
494 | SigType: packet.SigTypePositiveCert, | ||
495 | PubKeyAlgo: packet.PubKeyAlgoRSA, | ||
496 | Hash: config.Hash(), | ||
497 | IsPrimaryId: &isPrimaryId, | ||
498 | FlagsValid: true, | ||
499 | FlagSign: true, | ||
500 | FlagCertify: true, | ||
501 | IssuerKeyId: &e.PrimaryKey.KeyId, | ||
502 | }, | ||
503 | } | ||
504 | |||
505 | // If the user passes in a DefaultHash via packet.Config, | ||
506 | // set the PreferredHash for the SelfSignature. | ||
507 | if config != nil && config.DefaultHash != 0 { | ||
508 | e.Identities[uid.Id].SelfSignature.PreferredHash = []uint8{hashToHashId(config.DefaultHash)} | ||
509 | } | ||
510 | |||
511 | e.Subkeys = make([]Subkey, 1) | ||
512 | e.Subkeys[0] = Subkey{ | ||
513 | PublicKey: packet.NewRSAPublicKey(currentTime, &encryptingPriv.PublicKey), | ||
514 | PrivateKey: packet.NewRSAPrivateKey(currentTime, encryptingPriv), | ||
515 | Sig: &packet.Signature{ | ||
516 | CreationTime: currentTime, | ||
517 | SigType: packet.SigTypeSubkeyBinding, | ||
518 | PubKeyAlgo: packet.PubKeyAlgoRSA, | ||
519 | Hash: config.Hash(), | ||
520 | FlagsValid: true, | ||
521 | FlagEncryptStorage: true, | ||
522 | FlagEncryptCommunications: true, | ||
523 | IssuerKeyId: &e.PrimaryKey.KeyId, | ||
524 | }, | ||
525 | } | ||
526 | e.Subkeys[0].PublicKey.IsSubkey = true | ||
527 | e.Subkeys[0].PrivateKey.IsSubkey = true | ||
528 | |||
529 | return e, nil | ||
530 | } | ||
531 | |||
532 | // SerializePrivate serializes an Entity, including private key material, to | ||
533 | // the given Writer. For now, it must only be used on an Entity returned from | ||
534 | // NewEntity. | ||
535 | // If config is nil, sensible defaults will be used. | ||
536 | func (e *Entity) SerializePrivate(w io.Writer, config *packet.Config) (err error) { | ||
537 | err = e.PrivateKey.Serialize(w) | ||
538 | if err != nil { | ||
539 | return | ||
540 | } | ||
541 | for _, ident := range e.Identities { | ||
542 | err = ident.UserId.Serialize(w) | ||
543 | if err != nil { | ||
544 | return | ||
545 | } | ||
546 | err = ident.SelfSignature.SignUserId(ident.UserId.Id, e.PrimaryKey, e.PrivateKey, config) | ||
547 | if err != nil { | ||
548 | return | ||
549 | } | ||
550 | err = ident.SelfSignature.Serialize(w) | ||
551 | if err != nil { | ||
552 | return | ||
553 | } | ||
554 | } | ||
555 | for _, subkey := range e.Subkeys { | ||
556 | err = subkey.PrivateKey.Serialize(w) | ||
557 | if err != nil { | ||
558 | return | ||
559 | } | ||
560 | err = subkey.Sig.SignKey(subkey.PublicKey, e.PrivateKey, config) | ||
561 | if err != nil { | ||
562 | return | ||
563 | } | ||
564 | err = subkey.Sig.Serialize(w) | ||
565 | if err != nil { | ||
566 | return | ||
567 | } | ||
568 | } | ||
569 | return nil | ||
570 | } | ||
571 | |||
572 | // Serialize writes the public part of the given Entity to w. (No private | ||
573 | // key material will be output). | ||
574 | func (e *Entity) Serialize(w io.Writer) error { | ||
575 | err := e.PrimaryKey.Serialize(w) | ||
576 | if err != nil { | ||
577 | return err | ||
578 | } | ||
579 | for _, ident := range e.Identities { | ||
580 | err = ident.UserId.Serialize(w) | ||
581 | if err != nil { | ||
582 | return err | ||
583 | } | ||
584 | err = ident.SelfSignature.Serialize(w) | ||
585 | if err != nil { | ||
586 | return err | ||
587 | } | ||
588 | for _, sig := range ident.Signatures { | ||
589 | err = sig.Serialize(w) | ||
590 | if err != nil { | ||
591 | return err | ||
592 | } | ||
593 | } | ||
594 | } | ||
595 | for _, subkey := range e.Subkeys { | ||
596 | err = subkey.PublicKey.Serialize(w) | ||
597 | if err != nil { | ||
598 | return err | ||
599 | } | ||
600 | err = subkey.Sig.Serialize(w) | ||
601 | if err != nil { | ||
602 | return err | ||
603 | } | ||
604 | } | ||
605 | return nil | ||
606 | } | ||
607 | |||
608 | // SignIdentity adds a signature to e, from signer, attesting that identity is | ||
609 | // associated with e. The provided identity must already be an element of | ||
610 | // e.Identities and the private key of signer must have been decrypted if | ||
611 | // necessary. | ||
612 | // If config is nil, sensible defaults will be used. | ||
613 | func (e *Entity) SignIdentity(identity string, signer *Entity, config *packet.Config) error { | ||
614 | if signer.PrivateKey == nil { | ||
615 | return errors.InvalidArgumentError("signing Entity must have a private key") | ||
616 | } | ||
617 | if signer.PrivateKey.Encrypted { | ||
618 | return errors.InvalidArgumentError("signing Entity's private key must be decrypted") | ||
619 | } | ||
620 | ident, ok := e.Identities[identity] | ||
621 | if !ok { | ||
622 | return errors.InvalidArgumentError("given identity string not found in Entity") | ||
623 | } | ||
624 | |||
625 | sig := &packet.Signature{ | ||
626 | SigType: packet.SigTypeGenericCert, | ||
627 | PubKeyAlgo: signer.PrivateKey.PubKeyAlgo, | ||
628 | Hash: config.Hash(), | ||
629 | CreationTime: config.Now(), | ||
630 | IssuerKeyId: &signer.PrivateKey.KeyId, | ||
631 | } | ||
632 | if err := sig.SignUserId(identity, e.PrimaryKey, signer.PrivateKey, config); err != nil { | ||
633 | return err | ||
634 | } | ||
635 | ident.Signatures = append(ident.Signatures, sig) | ||
636 | return nil | ||
637 | } | ||
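Editorial sketch of the exported keyring API above: generate an entity, serialize the private half first (NewEntity leaves the self-signatures unsigned until SerializePrivate signs them), then armor the public half and read it back with ReadArmoredKeyRing. Names, strings and the scratch buffer are illustrative assumptions:

```go
package main

import (
	"bytes"
	_ "crypto/sha256" // register the default hash used by the self-signatures
	"fmt"

	"golang.org/x/crypto/openpgp"
	"golang.org/x/crypto/openpgp/armor"
)

func main() {
	// Generate a throwaway RSA entity (2048-bit by default; takes a moment).
	ent, err := openpgp.NewEntity("Alice Example", "demo", "alice@example.com", nil)
	if err != nil {
		panic(err)
	}

	// SerializePrivate signs the identity and subkey as a side effect; a
	// freshly generated entity cannot be publicly serialized before that.
	var scratch bytes.Buffer
	if err := ent.SerializePrivate(&scratch, nil); err != nil {
		panic(err)
	}

	// Armor the public half and read it back as a keyring.
	var pubArmor bytes.Buffer
	w, err := armor.Encode(&pubArmor, openpgp.PublicKeyType, nil)
	if err != nil {
		panic(err)
	}
	if err := ent.Serialize(w); err != nil {
		panic(err)
	}
	w.Close()

	el, err := openpgp.ReadArmoredKeyRing(&pubArmor)
	if err != nil {
		panic(err)
	}
	for _, e := range el {
		for name := range e.Identities {
			fmt.Println("identity:", name)
		}
		fmt.Printf("primary key id: %X, subkeys: %d\n", e.PrimaryKey.KeyId, len(e.Subkeys))
	}
}
```

The SerializePrivate-before-Serialize ordering is a quirk worth remembering: Signature.Serialize refuses to emit a self-signature that has never been signed.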
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/compressed.go b/vendor/golang.org/x/crypto/openpgp/packet/compressed.go new file mode 100644 index 0000000..e8f0b5c --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/compressed.go | |||
@@ -0,0 +1,123 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "compress/bzip2" | ||
9 | "compress/flate" | ||
10 | "compress/zlib" | ||
11 | "golang.org/x/crypto/openpgp/errors" | ||
12 | "io" | ||
13 | "strconv" | ||
14 | ) | ||
15 | |||
16 | // Compressed represents a compressed OpenPGP packet. The decompressed contents | ||
17 | // will contain more OpenPGP packets. See RFC 4880, section 5.6. | ||
18 | type Compressed struct { | ||
19 | Body io.Reader | ||
20 | } | ||
21 | |||
22 | const ( | ||
23 | NoCompression = flate.NoCompression | ||
24 | BestSpeed = flate.BestSpeed | ||
25 | BestCompression = flate.BestCompression | ||
26 | DefaultCompression = flate.DefaultCompression | ||
27 | ) | ||
28 | |||
29 | // CompressionConfig contains compressor configuration settings. | ||
30 | type CompressionConfig struct { | ||
31 | // Level is the compression level to use. It must be set to | ||
32 | // between -1 and 9, with -1 causing the compressor to use the | ||
33 | // default compression level, 0 causing the compressor to use | ||
34 | // no compression and 1 to 9 representing increasing (better, | ||
35 | // slower) compression levels. If Level is less than -1 or | ||
36 | // more than 9, a non-nil error will be returned during | ||
37 | // encryption. See the constants above for convenient common | ||
38 | // settings for Level. | ||
39 | Level int | ||
40 | } | ||
41 | |||
42 | func (c *Compressed) parse(r io.Reader) error { | ||
43 | var buf [1]byte | ||
44 | _, err := readFull(r, buf[:]) | ||
45 | if err != nil { | ||
46 | return err | ||
47 | } | ||
48 | |||
49 | switch buf[0] { | ||
50 | case 1: | ||
51 | c.Body = flate.NewReader(r) | ||
52 | case 2: | ||
53 | c.Body, err = zlib.NewReader(r) | ||
54 | case 3: | ||
55 | c.Body = bzip2.NewReader(r) | ||
56 | default: | ||
57 | err = errors.UnsupportedError("unknown compression algorithm: " + strconv.Itoa(int(buf[0]))) | ||
58 | } | ||
59 | |||
60 | return err | ||
61 | } | ||
62 | |||
63 | // compressedWriteCloser represents the serialized compression stream | ||
64 | // header and the compressor. Its Close() method ensures that both the | ||
65 | // compressor and serialized stream header are closed. Its Write() | ||
66 | // method writes to the compressor. | ||
67 | type compressedWriteCloser struct { | ||
68 | sh io.Closer // Stream Header | ||
69 | c io.WriteCloser // Compressor | ||
70 | } | ||
71 | |||
72 | func (cwc compressedWriteCloser) Write(p []byte) (int, error) { | ||
73 | return cwc.c.Write(p) | ||
74 | } | ||
75 | |||
76 | func (cwc compressedWriteCloser) Close() (err error) { | ||
77 | err = cwc.c.Close() | ||
78 | if err != nil { | ||
79 | return err | ||
80 | } | ||
81 | |||
82 | return cwc.sh.Close() | ||
83 | } | ||
84 | |||
85 | // SerializeCompressed serializes a compressed data packet to w and | ||
86 | // returns a WriteCloser to which the literal data packets themselves | ||
87 | // can be written and which MUST be closed on completion. If cc is | ||
88 | // nil, sensible defaults will be used to configure the compression | ||
89 | // algorithm. | ||
90 | func SerializeCompressed(w io.WriteCloser, algo CompressionAlgo, cc *CompressionConfig) (literaldata io.WriteCloser, err error) { | ||
91 | compressed, err := serializeStreamHeader(w, packetTypeCompressed) | ||
92 | if err != nil { | ||
93 | return | ||
94 | } | ||
95 | |||
96 | _, err = compressed.Write([]byte{uint8(algo)}) | ||
97 | if err != nil { | ||
98 | return | ||
99 | } | ||
100 | |||
101 | level := DefaultCompression | ||
102 | if cc != nil { | ||
103 | level = cc.Level | ||
104 | } | ||
105 | |||
106 | var compressor io.WriteCloser | ||
107 | switch algo { | ||
108 | case CompressionZIP: | ||
109 | compressor, err = flate.NewWriter(compressed, level) | ||
110 | case CompressionZLIB: | ||
111 | compressor, err = zlib.NewWriterLevel(compressed, level) | ||
112 | default: | ||
113 | s := strconv.Itoa(int(algo)) | ||
114 | err = errors.UnsupportedError("Unsupported compression algorithm: " + s) | ||
115 | } | ||
116 | if err != nil { | ||
117 | return | ||
118 | } | ||
119 | |||
120 | literaldata = compressedWriteCloser{compressed, compressor} | ||
121 | |||
122 | return | ||
123 | } | ||
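Editorial sketch: a compressed-packet round trip through SerializeCompressed and packet.Read. In real OpenPGP messages the compressed body would itself be packets (typically literal data); raw bytes are used here only to keep the sketch short, and the nopWriteCloser adapter is an assumption, not part of the package:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"

	"golang.org/x/crypto/openpgp/packet"
)

// nopWriteCloser adapts a bytes.Buffer to the io.WriteCloser that
// SerializeCompressed expects for the underlying stream.
type nopWriteCloser struct{ io.Writer }

func (nopWriteCloser) Close() error { return nil }

func main() {
	var buf bytes.Buffer
	w, err := packet.SerializeCompressed(nopWriteCloser{&buf}, packet.CompressionZIP,
		&packet.CompressionConfig{Level: packet.BestCompression})
	if err != nil {
		panic(err)
	}
	if _, err := io.WriteString(w, "compress me, please"); err != nil {
		panic(err)
	}
	w.Close() // flushes the compressor and finishes the packet

	p, err := packet.Read(&buf)
	if err != nil {
		panic(err)
	}
	c := p.(*packet.Compressed)
	body, _ := ioutil.ReadAll(c.Body)
	fmt.Printf("%s\n", body) // compress me, please
}
```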
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/config.go b/vendor/golang.org/x/crypto/openpgp/packet/config.go new file mode 100644 index 0000000..c76eecc --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/config.go | |||
@@ -0,0 +1,91 @@ | |||
1 | // Copyright 2012 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "crypto" | ||
9 | "crypto/rand" | ||
10 | "io" | ||
11 | "time" | ||
12 | ) | ||
13 | |||
14 | // Config collects a number of parameters along with sensible defaults. | ||
15 | // A nil *Config is valid and results in all default values. | ||
16 | type Config struct { | ||
17 | // Rand provides the source of entropy. | ||
18 | // If nil, the crypto/rand Reader is used. | ||
19 | Rand io.Reader | ||
20 | // DefaultHash is the default hash function to be used. | ||
21 | // If zero, SHA-256 is used. | ||
22 | DefaultHash crypto.Hash | ||
23 | // DefaultCipher is the cipher to be used. | ||
24 | // If zero, AES-128 is used. | ||
25 | DefaultCipher CipherFunction | ||
26 | // Time returns the current time as the number of seconds since the | ||
27 | // epoch. If Time is nil, time.Now is used. | ||
28 | Time func() time.Time | ||
29 | // DefaultCompressionAlgo is the compression algorithm to be | ||
30 | // applied to the plaintext before encryption. If zero, no | ||
31 | // compression is done. | ||
32 | DefaultCompressionAlgo CompressionAlgo | ||
33 | // CompressionConfig configures the compression settings. | ||
34 | CompressionConfig *CompressionConfig | ||
35 | // S2KCount is only used for symmetric encryption. It | ||
36 | // determines the strength of the passphrase stretching when | ||
37 | // the said passphrase is hashed to produce a key. S2KCount | ||
38 | // should be between 1024 and 65011712, inclusive. If Config | ||
39 | // is nil or S2KCount is 0, the value 65536 is used. Not all | ||
40 | // values in the above range can be represented. S2KCount will | ||
41 | // be rounded up to the next representable value if it cannot | ||
42 | // be encoded exactly. When set, it is strongly encouraged to | ||
43 | // use a value that is at least 65536. See RFC 4880 Section | ||
44 | // 3.7.1.3. | ||
45 | S2KCount int | ||
46 | // RSABits is the number of bits in new RSA keys made with NewEntity. | ||
47 | // If zero, then 2048 bit keys are created. | ||
48 | RSABits int | ||
49 | } | ||
50 | |||
51 | func (c *Config) Random() io.Reader { | ||
52 | if c == nil || c.Rand == nil { | ||
53 | return rand.Reader | ||
54 | } | ||
55 | return c.Rand | ||
56 | } | ||
57 | |||
58 | func (c *Config) Hash() crypto.Hash { | ||
59 | if c == nil || uint(c.DefaultHash) == 0 { | ||
60 | return crypto.SHA256 | ||
61 | } | ||
62 | return c.DefaultHash | ||
63 | } | ||
64 | |||
65 | func (c *Config) Cipher() CipherFunction { | ||
66 | if c == nil || uint8(c.DefaultCipher) == 0 { | ||
67 | return CipherAES128 | ||
68 | } | ||
69 | return c.DefaultCipher | ||
70 | } | ||
71 | |||
72 | func (c *Config) Now() time.Time { | ||
73 | if c == nil || c.Time == nil { | ||
74 | return time.Now() | ||
75 | } | ||
76 | return c.Time() | ||
77 | } | ||
78 | |||
79 | func (c *Config) Compression() CompressionAlgo { | ||
80 | if c == nil { | ||
81 | return CompressionNone | ||
82 | } | ||
83 | return c.DefaultCompressionAlgo | ||
84 | } | ||
85 | |||
86 | func (c *Config) PasswordHashIterations() int { | ||
87 | if c == nil || c.S2KCount == 0 { | ||
88 | return 0 | ||
89 | } | ||
90 | return c.S2KCount | ||
91 | } | ||
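Editorial sketch of the nil-safe accessors above: every getter tolerates a nil receiver and falls back to its documented default, so callers can pass a nil *Config everywhere and only build a Config when they want to override something:

```go
package main

import (
	"crypto"
	"fmt"

	"golang.org/x/crypto/openpgp/packet"
)

func main() {
	// A nil *Config is valid: every accessor falls back to its default.
	var def *packet.Config
	fmt.Println(def.Hash() == crypto.SHA256)          // true
	fmt.Println(def.Cipher() == packet.CipherAES128)  // true

	// Override a few knobs; unset fields keep their defaults.
	cfg := &packet.Config{
		DefaultHash:   crypto.SHA512,
		DefaultCipher: packet.CipherAES256,
		RSABits:       4096,
	}
	fmt.Println(cfg.Hash() == crypto.SHA512)          // true
	fmt.Println(cfg.Cipher() == packet.CipherAES256)  // true
}
```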
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/encrypted_key.go b/vendor/golang.org/x/crypto/openpgp/packet/encrypted_key.go new file mode 100644 index 0000000..266840d --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/encrypted_key.go | |||
@@ -0,0 +1,199 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "crypto/rsa" | ||
9 | "encoding/binary" | ||
10 | "io" | ||
11 | "math/big" | ||
12 | "strconv" | ||
13 | |||
14 | "golang.org/x/crypto/openpgp/elgamal" | ||
15 | "golang.org/x/crypto/openpgp/errors" | ||
16 | ) | ||
17 | |||
18 | const encryptedKeyVersion = 3 | ||
19 | |||
20 | // EncryptedKey represents a public-key encrypted session key. See RFC 4880, | ||
21 | // section 5.1. | ||
22 | type EncryptedKey struct { | ||
23 | KeyId uint64 | ||
24 | Algo PublicKeyAlgorithm | ||
25 | CipherFunc CipherFunction // only valid after a successful Decrypt | ||
26 | Key []byte // only valid after a successful Decrypt | ||
27 | |||
28 | encryptedMPI1, encryptedMPI2 parsedMPI | ||
29 | } | ||
30 | |||
31 | func (e *EncryptedKey) parse(r io.Reader) (err error) { | ||
32 | var buf [10]byte | ||
33 | _, err = readFull(r, buf[:]) | ||
34 | if err != nil { | ||
35 | return | ||
36 | } | ||
37 | if buf[0] != encryptedKeyVersion { | ||
38 | return errors.UnsupportedError("unknown EncryptedKey version " + strconv.Itoa(int(buf[0]))) | ||
39 | } | ||
40 | e.KeyId = binary.BigEndian.Uint64(buf[1:9]) | ||
41 | e.Algo = PublicKeyAlgorithm(buf[9]) | ||
42 | switch e.Algo { | ||
43 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly: | ||
44 | e.encryptedMPI1.bytes, e.encryptedMPI1.bitLength, err = readMPI(r) | ||
45 | case PubKeyAlgoElGamal: | ||
46 | e.encryptedMPI1.bytes, e.encryptedMPI1.bitLength, err = readMPI(r) | ||
47 | if err != nil { | ||
48 | return | ||
49 | } | ||
50 | e.encryptedMPI2.bytes, e.encryptedMPI2.bitLength, err = readMPI(r) | ||
51 | } | ||
52 | _, err = consumeAll(r) | ||
53 | return | ||
54 | } | ||
55 | |||
56 | func checksumKeyMaterial(key []byte) uint16 { | ||
57 | var checksum uint16 | ||
58 | for _, v := range key { | ||
59 | checksum += uint16(v) | ||
60 | } | ||
61 | return checksum | ||
62 | } | ||
63 | |||
64 | // Decrypt decrypts an encrypted session key with the given private key. The | ||
65 | // private key must have been decrypted first. | ||
66 | // If config is nil, sensible defaults will be used. | ||
67 | func (e *EncryptedKey) Decrypt(priv *PrivateKey, config *Config) error { | ||
68 | var err error | ||
69 | var b []byte | ||
70 | |||
71 | // TODO(agl): use session key decryption routines here to avoid | ||
72 | // padding oracle attacks. | ||
73 | switch priv.PubKeyAlgo { | ||
74 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly: | ||
75 | b, err = rsa.DecryptPKCS1v15(config.Random(), priv.PrivateKey.(*rsa.PrivateKey), e.encryptedMPI1.bytes) | ||
76 | case PubKeyAlgoElGamal: | ||
77 | c1 := new(big.Int).SetBytes(e.encryptedMPI1.bytes) | ||
78 | c2 := new(big.Int).SetBytes(e.encryptedMPI2.bytes) | ||
79 | b, err = elgamal.Decrypt(priv.PrivateKey.(*elgamal.PrivateKey), c1, c2) | ||
80 | default: | ||
81 | err = errors.InvalidArgumentError("cannot decrypt encrypted session key with private key of type " + strconv.Itoa(int(priv.PubKeyAlgo))) | ||
82 | } | ||
83 | |||
84 | if err != nil { | ||
85 | return err | ||
86 | } | ||
87 | |||
88 | e.CipherFunc = CipherFunction(b[0]) | ||
89 | e.Key = b[1 : len(b)-2] | ||
90 | expectedChecksum := uint16(b[len(b)-2])<<8 | uint16(b[len(b)-1]) | ||
91 | checksum := checksumKeyMaterial(e.Key) | ||
92 | if checksum != expectedChecksum { | ||
93 | return errors.StructuralError("EncryptedKey checksum incorrect") | ||
94 | } | ||
95 | |||
96 | return nil | ||
97 | } | ||
98 | |||
99 | // Serialize writes the encrypted key packet, e, to w. | ||
100 | func (e *EncryptedKey) Serialize(w io.Writer) error { | ||
101 | var mpiLen int | ||
102 | switch e.Algo { | ||
103 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly: | ||
104 | mpiLen = 2 + len(e.encryptedMPI1.bytes) | ||
105 | case PubKeyAlgoElGamal: | ||
106 | mpiLen = 2 + len(e.encryptedMPI1.bytes) + 2 + len(e.encryptedMPI2.bytes) | ||
107 | default: | ||
108 | return errors.InvalidArgumentError("don't know how to serialize encrypted key type " + strconv.Itoa(int(e.Algo))) | ||
109 | } | ||
110 | |||
111 | serializeHeader(w, packetTypeEncryptedKey, 1 /* version */ +8 /* key id */ +1 /* algo */ +mpiLen) | ||
112 | |||
113 | w.Write([]byte{encryptedKeyVersion}) | ||
114 | binary.Write(w, binary.BigEndian, e.KeyId) | ||
115 | w.Write([]byte{byte(e.Algo)}) | ||
116 | |||
117 | switch e.Algo { | ||
118 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly: | ||
119 | writeMPIs(w, e.encryptedMPI1) | ||
120 | case PubKeyAlgoElGamal: | ||
121 | writeMPIs(w, e.encryptedMPI1, e.encryptedMPI2) | ||
122 | default: | ||
123 | panic("internal error") | ||
124 | } | ||
125 | |||
126 | return nil | ||
127 | } | ||
128 | |||
129 | // SerializeEncryptedKey serializes an encrypted key packet to w that contains | ||
130 | // key, encrypted to pub. | ||
131 | // If config is nil, sensible defaults will be used. | ||
132 | func SerializeEncryptedKey(w io.Writer, pub *PublicKey, cipherFunc CipherFunction, key []byte, config *Config) error { | ||
133 | var buf [10]byte | ||
134 | buf[0] = encryptedKeyVersion | ||
135 | binary.BigEndian.PutUint64(buf[1:9], pub.KeyId) | ||
136 | buf[9] = byte(pub.PubKeyAlgo) | ||
137 | |||
138 | keyBlock := make([]byte, 1 /* cipher type */ +len(key)+2 /* checksum */) | ||
139 | keyBlock[0] = byte(cipherFunc) | ||
140 | copy(keyBlock[1:], key) | ||
141 | checksum := checksumKeyMaterial(key) | ||
142 | keyBlock[1+len(key)] = byte(checksum >> 8) | ||
143 | keyBlock[1+len(key)+1] = byte(checksum) | ||
144 | |||
145 | switch pub.PubKeyAlgo { | ||
146 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly: | ||
147 | return serializeEncryptedKeyRSA(w, config.Random(), buf, pub.PublicKey.(*rsa.PublicKey), keyBlock) | ||
148 | case PubKeyAlgoElGamal: | ||
149 | return serializeEncryptedKeyElGamal(w, config.Random(), buf, pub.PublicKey.(*elgamal.PublicKey), keyBlock) | ||
150 | case PubKeyAlgoDSA, PubKeyAlgoRSASignOnly: | ||
151 | return errors.InvalidArgumentError("cannot encrypt to public key of type " + strconv.Itoa(int(pub.PubKeyAlgo))) | ||
152 | } | ||
153 | |||
154 | return errors.UnsupportedError("encrypting a key to public key of type " + strconv.Itoa(int(pub.PubKeyAlgo))) | ||
155 | } | ||
156 | |||
157 | func serializeEncryptedKeyRSA(w io.Writer, rand io.Reader, header [10]byte, pub *rsa.PublicKey, keyBlock []byte) error { | ||
158 | cipherText, err := rsa.EncryptPKCS1v15(rand, pub, keyBlock) | ||
159 | if err != nil { | ||
160 | return errors.InvalidArgumentError("RSA encryption failed: " + err.Error()) | ||
161 | } | ||
162 | |||
163 | packetLen := 10 /* header length */ + 2 /* mpi size */ + len(cipherText) | ||
164 | |||
165 | err = serializeHeader(w, packetTypeEncryptedKey, packetLen) | ||
166 | if err != nil { | ||
167 | return err | ||
168 | } | ||
169 | _, err = w.Write(header[:]) | ||
170 | if err != nil { | ||
171 | return err | ||
172 | } | ||
173 | return writeMPI(w, 8*uint16(len(cipherText)), cipherText) | ||
174 | } | ||
175 | |||
176 | func serializeEncryptedKeyElGamal(w io.Writer, rand io.Reader, header [10]byte, pub *elgamal.PublicKey, keyBlock []byte) error { | ||
177 | c1, c2, err := elgamal.Encrypt(rand, pub, keyBlock) | ||
178 | if err != nil { | ||
179 | return errors.InvalidArgumentError("ElGamal encryption failed: " + err.Error()) | ||
180 | } | ||
181 | |||
182 | packetLen := 10 /* header length */ | ||
183 | packetLen += 2 /* mpi size */ + (c1.BitLen()+7)/8 | ||
184 | packetLen += 2 /* mpi size */ + (c2.BitLen()+7)/8 | ||
185 | |||
186 | err = serializeHeader(w, packetTypeEncryptedKey, packetLen) | ||
187 | if err != nil { | ||
188 | return err | ||
189 | } | ||
190 | _, err = w.Write(header[:]) | ||
191 | if err != nil { | ||
192 | return err | ||
193 | } | ||
194 | err = writeBig(w, c1) | ||
195 | if err != nil { | ||
196 | return err | ||
197 | } | ||
198 | return writeBig(w, c2) | ||
199 | } | ||
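Editorial sketch: encrypting a session key to an RSA public key with SerializeEncryptedKey and recovering it with Decrypt via packet.Read. Key sizes and the 16-byte session key are illustrative choices:

```go
package main

import (
	"bytes"
	"crypto/rand"
	"crypto/rsa"
	"fmt"
	"time"

	"golang.org/x/crypto/openpgp/packet"
)

func main() {
	rsaKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pub := packet.NewRSAPublicKey(time.Now(), &rsaKey.PublicKey)
	priv := packet.NewRSAPrivateKey(time.Now(), rsaKey)

	sessionKey := make([]byte, 16) // e.g. an AES-128 session key
	if _, err := rand.Read(sessionKey); err != nil {
		panic(err)
	}

	var buf bytes.Buffer
	if err := packet.SerializeEncryptedKey(&buf, pub, packet.CipherAES128, sessionKey, nil); err != nil {
		panic(err)
	}

	p, err := packet.Read(&buf)
	if err != nil {
		panic(err)
	}
	ek := p.(*packet.EncryptedKey)
	if err := ek.Decrypt(priv, nil); err != nil {
		panic(err)
	}
	fmt.Println(bytes.Equal(ek.Key, sessionKey)) // true
}
```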
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/literal.go b/vendor/golang.org/x/crypto/openpgp/packet/literal.go new file mode 100644 index 0000000..1a9ec6e --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/literal.go | |||
@@ -0,0 +1,89 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "encoding/binary" | ||
9 | "io" | ||
10 | ) | ||
11 | |||
12 | // LiteralData represents a literal data packet, i.e. the (unencrypted) contents of a file or message. See RFC 4880, section 5.9. | ||
13 | type LiteralData struct { | ||
14 | IsBinary bool | ||
15 | FileName string | ||
16 | Time uint32 // Unix epoch time. Either creation time or modification time. 0 means undefined. | ||
17 | Body io.Reader | ||
18 | } | ||
19 | |||
20 | // ForEyesOnly returns whether the contents of the LiteralData have been marked | ||
21 | // as especially sensitive. | ||
22 | func (l *LiteralData) ForEyesOnly() bool { | ||
23 | return l.FileName == "_CONSOLE" | ||
24 | } | ||
25 | |||
26 | func (l *LiteralData) parse(r io.Reader) (err error) { | ||
27 | var buf [256]byte | ||
28 | |||
29 | _, err = readFull(r, buf[:2]) | ||
30 | if err != nil { | ||
31 | return | ||
32 | } | ||
33 | |||
34 | l.IsBinary = buf[0] == 'b' | ||
35 | fileNameLen := int(buf[1]) | ||
36 | |||
37 | _, err = readFull(r, buf[:fileNameLen]) | ||
38 | if err != nil { | ||
39 | return | ||
40 | } | ||
41 | |||
42 | l.FileName = string(buf[:fileNameLen]) | ||
43 | |||
44 | _, err = readFull(r, buf[:4]) | ||
45 | if err != nil { | ||
46 | return | ||
47 | } | ||
48 | |||
49 | l.Time = binary.BigEndian.Uint32(buf[:4]) | ||
50 | l.Body = r | ||
51 | return | ||
52 | } | ||
53 | |||
54 | // SerializeLiteral serializes a literal data packet to w and returns a | ||
55 | // WriteCloser to which the data itself can be written and which MUST be closed | ||
56 | // on completion. The fileName is truncated to 255 bytes. | ||
57 | func SerializeLiteral(w io.WriteCloser, isBinary bool, fileName string, time uint32) (plaintext io.WriteCloser, err error) { | ||
58 | var buf [4]byte | ||
59 | buf[0] = 't' | ||
60 | if isBinary { | ||
61 | buf[0] = 'b' | ||
62 | } | ||
63 | if len(fileName) > 255 { | ||
64 | fileName = fileName[:255] | ||
65 | } | ||
66 | buf[1] = byte(len(fileName)) | ||
67 | |||
68 | inner, err := serializeStreamHeader(w, packetTypeLiteralData) | ||
69 | if err != nil { | ||
70 | return | ||
71 | } | ||
72 | |||
73 | _, err = inner.Write(buf[:2]) | ||
74 | if err != nil { | ||
75 | return | ||
76 | } | ||
77 | _, err = inner.Write([]byte(fileName)) | ||
78 | if err != nil { | ||
79 | return | ||
80 | } | ||
81 | binary.BigEndian.PutUint32(buf[:], time) | ||
82 | _, err = inner.Write(buf[:]) | ||
83 | if err != nil { | ||
84 | return | ||
85 | } | ||
86 | |||
87 | plaintext = inner | ||
88 | return | ||
89 | } | ||
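Editorial sketch: writing a literal data packet with SerializeLiteral and parsing it back with packet.Read. The nopWriteCloser adapter is an assumption, added only because the serializer wants an io.WriteCloser for the underlying stream:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"

	"golang.org/x/crypto/openpgp/packet"
)

// nopWriteCloser lets a bytes.Buffer satisfy io.WriteCloser.
type nopWriteCloser struct{ io.Writer }

func (nopWriteCloser) Close() error { return nil }

func main() {
	var buf bytes.Buffer
	pt, err := packet.SerializeLiteral(nopWriteCloser{&buf}, false, "hello.txt", 0)
	if err != nil {
		panic(err)
	}
	if _, err := io.WriteString(pt, "hello, world\n"); err != nil {
		panic(err)
	}
	pt.Close() // finishes the partial-length stream

	p, err := packet.Read(&buf)
	if err != nil {
		panic(err)
	}
	lit := p.(*packet.LiteralData)
	body, _ := ioutil.ReadAll(lit.Body)
	fmt.Printf("%s (binary=%v, name=%q)\n", body, lit.IsBinary, lit.FileName)
}
```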
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/ocfb.go b/vendor/golang.org/x/crypto/openpgp/packet/ocfb.go new file mode 100644 index 0000000..ce2a33a --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/ocfb.go | |||
@@ -0,0 +1,143 @@ | |||
1 | // Copyright 2010 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | // OpenPGP CFB Mode. http://tools.ietf.org/html/rfc4880#section-13.9 | ||
6 | |||
7 | package packet | ||
8 | |||
9 | import ( | ||
10 | "crypto/cipher" | ||
11 | ) | ||
12 | |||
13 | type ocfbEncrypter struct { | ||
14 | b cipher.Block | ||
15 | fre []byte | ||
16 | outUsed int | ||
17 | } | ||
18 | |||
19 | // An OCFBResyncOption determines if the "resynchronization step" of OCFB is | ||
20 | // performed. | ||
21 | type OCFBResyncOption bool | ||
22 | |||
23 | const ( | ||
24 | OCFBResync OCFBResyncOption = true | ||
25 | OCFBNoResync OCFBResyncOption = false | ||
26 | ) | ||
27 | |||
28 | // NewOCFBEncrypter returns a cipher.Stream which encrypts data with OpenPGP's | ||
29 | // cipher feedback mode using the given cipher.Block, and an initial amount of | ||
30 | // ciphertext. randData must be random bytes and be the same length as the | ||
31 | // cipher.Block's block size. Resync determines if the "resynchronization step" | ||
32 | // from RFC 4880, 13.9 step 7 is performed. Different parts of OpenPGP vary on | ||
33 | // this point. | ||
34 | func NewOCFBEncrypter(block cipher.Block, randData []byte, resync OCFBResyncOption) (cipher.Stream, []byte) { | ||
35 | blockSize := block.BlockSize() | ||
36 | if len(randData) != blockSize { | ||
37 | return nil, nil | ||
38 | } | ||
39 | |||
40 | x := &ocfbEncrypter{ | ||
41 | b: block, | ||
42 | fre: make([]byte, blockSize), | ||
43 | outUsed: 0, | ||
44 | } | ||
45 | prefix := make([]byte, blockSize+2) | ||
46 | |||
47 | block.Encrypt(x.fre, x.fre) | ||
48 | for i := 0; i < blockSize; i++ { | ||
49 | prefix[i] = randData[i] ^ x.fre[i] | ||
50 | } | ||
51 | |||
52 | block.Encrypt(x.fre, prefix[:blockSize]) | ||
53 | prefix[blockSize] = x.fre[0] ^ randData[blockSize-2] | ||
54 | prefix[blockSize+1] = x.fre[1] ^ randData[blockSize-1] | ||
55 | |||
56 | if resync { | ||
57 | block.Encrypt(x.fre, prefix[2:]) | ||
58 | } else { | ||
59 | x.fre[0] = prefix[blockSize] | ||
60 | x.fre[1] = prefix[blockSize+1] | ||
61 | x.outUsed = 2 | ||
62 | } | ||
63 | return x, prefix | ||
64 | } | ||
65 | |||
66 | func (x *ocfbEncrypter) XORKeyStream(dst, src []byte) { | ||
67 | for i := 0; i < len(src); i++ { | ||
68 | if x.outUsed == len(x.fre) { | ||
69 | x.b.Encrypt(x.fre, x.fre) | ||
70 | x.outUsed = 0 | ||
71 | } | ||
72 | |||
73 | x.fre[x.outUsed] ^= src[i] | ||
74 | dst[i] = x.fre[x.outUsed] | ||
75 | x.outUsed++ | ||
76 | } | ||
77 | } | ||
78 | |||
79 | type ocfbDecrypter struct { | ||
80 | b cipher.Block | ||
81 | fre []byte | ||
82 | outUsed int | ||
83 | } | ||
84 | |||
85 | // NewOCFBDecrypter returns a cipher.Stream which decrypts data with OpenPGP's | ||
86 | // cipher feedback mode using the given cipher.Block. Prefix must be the first | ||
87 | // blockSize + 2 bytes of the ciphertext, where blockSize is the cipher.Block's | ||
88 | // block size. If an incorrect key is detected then nil is returned. On | ||
89 | // successful exit, blockSize+2 bytes of decrypted data are written into | ||
90 | // prefix. Resync determines if the "resynchronization step" from RFC 4880, | ||
91 | // 13.9 step 7 is performed. Different parts of OpenPGP vary on this point. | ||
92 | func NewOCFBDecrypter(block cipher.Block, prefix []byte, resync OCFBResyncOption) cipher.Stream { | ||
93 | blockSize := block.BlockSize() | ||
94 | if len(prefix) != blockSize+2 { | ||
95 | return nil | ||
96 | } | ||
97 | |||
98 | x := &ocfbDecrypter{ | ||
99 | b: block, | ||
100 | fre: make([]byte, blockSize), | ||
101 | outUsed: 0, | ||
102 | } | ||
103 | prefixCopy := make([]byte, len(prefix)) | ||
104 | copy(prefixCopy, prefix) | ||
105 | |||
106 | block.Encrypt(x.fre, x.fre) | ||
107 | for i := 0; i < blockSize; i++ { | ||
108 | prefixCopy[i] ^= x.fre[i] | ||
109 | } | ||
110 | |||
111 | block.Encrypt(x.fre, prefix[:blockSize]) | ||
112 | prefixCopy[blockSize] ^= x.fre[0] | ||
113 | prefixCopy[blockSize+1] ^= x.fre[1] | ||
114 | |||
115 | if prefixCopy[blockSize-2] != prefixCopy[blockSize] || | ||
116 | prefixCopy[blockSize-1] != prefixCopy[blockSize+1] { | ||
117 | return nil | ||
118 | } | ||
119 | |||
120 | if resync { | ||
121 | block.Encrypt(x.fre, prefix[2:]) | ||
122 | } else { | ||
123 | x.fre[0] = prefix[blockSize] | ||
124 | x.fre[1] = prefix[blockSize+1] | ||
125 | x.outUsed = 2 | ||
126 | } | ||
127 | copy(prefix, prefixCopy) | ||
128 | return x | ||
129 | } | ||
130 | |||
131 | func (x *ocfbDecrypter) XORKeyStream(dst, src []byte) { | ||
132 | for i := 0; i < len(src); i++ { | ||
133 | if x.outUsed == len(x.fre) { | ||
134 | x.b.Encrypt(x.fre, x.fre) | ||
135 | x.outUsed = 0 | ||
136 | } | ||
137 | |||
138 | c := src[i] | ||
139 | dst[i] = x.fre[x.outUsed] ^ src[i] | ||
140 | x.fre[x.outUsed] = c | ||
141 | x.outUsed++ | ||
142 | } | ||
143 | } | ||
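Editorial sketch of the OCFB stream API: encrypt with NewOCFBEncrypter, keep the returned prefix alongside the ciphertext, and rebuild the stream with NewOCFBDecrypter. In the real packet formats the prefix is produced and consumed for you; this only exercises the primitives, and the key/message values are arbitrary:

```go
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/openpgp/packet"
)

func main() {
	key := make([]byte, 16)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}

	// The random seed must be exactly one block long.
	randData := make([]byte, block.BlockSize())
	if _, err := rand.Read(randData); err != nil {
		panic(err)
	}

	plaintext := []byte("OCFB round trip")
	enc, prefix := packet.NewOCFBEncrypter(block, randData, packet.OCFBResync)
	ciphertext := make([]byte, len(plaintext))
	enc.XORKeyStream(ciphertext, plaintext)

	// The prefix (blockSize+2 bytes of ciphertext) travels with the message
	// and lets the decrypter run its quick key check.
	dec := packet.NewOCFBDecrypter(block, prefix, packet.OCFBResync)
	if dec == nil {
		panic("key check failed")
	}
	recovered := make([]byte, len(ciphertext))
	dec.XORKeyStream(recovered, ciphertext)
	fmt.Println(bytes.Equal(recovered, plaintext)) // true
}
```

Both sides must agree on the resync option; as the comments above note, different OpenPGP implementations differ on whether the resynchronization step is applied.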
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/one_pass_signature.go b/vendor/golang.org/x/crypto/openpgp/packet/one_pass_signature.go new file mode 100644 index 0000000..1713503 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/one_pass_signature.go | |||
@@ -0,0 +1,73 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "crypto" | ||
9 | "encoding/binary" | ||
10 | "golang.org/x/crypto/openpgp/errors" | ||
11 | "golang.org/x/crypto/openpgp/s2k" | ||
12 | "io" | ||
13 | "strconv" | ||
14 | ) | ||
15 | |||
16 | // OnePassSignature represents a one-pass signature packet. See RFC 4880, | ||
17 | // section 5.4. | ||
18 | type OnePassSignature struct { | ||
19 | SigType SignatureType | ||
20 | Hash crypto.Hash | ||
21 | PubKeyAlgo PublicKeyAlgorithm | ||
22 | KeyId uint64 | ||
23 | IsLast bool | ||
24 | } | ||
25 | |||
26 | const onePassSignatureVersion = 3 | ||
27 | |||
28 | func (ops *OnePassSignature) parse(r io.Reader) (err error) { | ||
29 | var buf [13]byte | ||
30 | |||
31 | _, err = readFull(r, buf[:]) | ||
32 | if err != nil { | ||
33 | return | ||
34 | } | ||
35 | if buf[0] != onePassSignatureVersion { | ||
36 | err = errors.UnsupportedError("one-pass-signature packet version " + strconv.Itoa(int(buf[0]))) | ||
37 | } | ||
38 | |||
39 | var ok bool | ||
40 | ops.Hash, ok = s2k.HashIdToHash(buf[2]) | ||
41 | if !ok { | ||
42 | return errors.UnsupportedError("hash function: " + strconv.Itoa(int(buf[2]))) | ||
43 | } | ||
44 | |||
45 | ops.SigType = SignatureType(buf[1]) | ||
46 | ops.PubKeyAlgo = PublicKeyAlgorithm(buf[3]) | ||
47 | ops.KeyId = binary.BigEndian.Uint64(buf[4:12]) | ||
48 | ops.IsLast = buf[12] != 0 | ||
49 | return | ||
50 | } | ||
51 | |||
52 | // Serialize marshals the given OnePassSignature to w. | ||
53 | func (ops *OnePassSignature) Serialize(w io.Writer) error { | ||
54 | var buf [13]byte | ||
55 | buf[0] = onePassSignatureVersion | ||
56 | buf[1] = uint8(ops.SigType) | ||
57 | var ok bool | ||
58 | buf[2], ok = s2k.HashToHashId(ops.Hash) | ||
59 | if !ok { | ||
60 | return errors.UnsupportedError("hash type: " + strconv.Itoa(int(ops.Hash))) | ||
61 | } | ||
62 | buf[3] = uint8(ops.PubKeyAlgo) | ||
63 | binary.BigEndian.PutUint64(buf[4:12], ops.KeyId) | ||
64 | if ops.IsLast { | ||
65 | buf[12] = 1 | ||
66 | } | ||
67 | |||
68 | if err := serializeHeader(w, packetTypeOnePassSignature, len(buf)); err != nil { | ||
69 | return err | ||
70 | } | ||
71 | _, err := w.Write(buf[:]) | ||
72 | return err | ||
73 | } | ||
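Editorial sketch: a OnePassSignature serialize/parse round trip via packet.Read. The key id and hash choice are arbitrary illustrations; SHA-256 is blank-imported so the hash implementation is registered:

```go
package main

import (
	"bytes"
	"crypto"
	_ "crypto/sha256" // register SHA-256
	"fmt"

	"golang.org/x/crypto/openpgp/packet"
)

func main() {
	ops := &packet.OnePassSignature{
		SigType:    packet.SigTypeBinary,
		Hash:       crypto.SHA256,
		PubKeyAlgo: packet.PubKeyAlgoRSA,
		KeyId:      0x1122334455667788, // illustrative key id
		IsLast:     true,
	}

	var buf bytes.Buffer
	if err := ops.Serialize(&buf); err != nil {
		panic(err)
	}

	p, err := packet.Read(&buf)
	if err != nil {
		panic(err)
	}
	got := p.(*packet.OnePassSignature)
	fmt.Printf("%x %v\n", got.KeyId, got.Hash == crypto.SHA256)
}
```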
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/opaque.go b/vendor/golang.org/x/crypto/openpgp/packet/opaque.go new file mode 100644 index 0000000..456d807 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/opaque.go | |||
@@ -0,0 +1,162 @@ | |||
1 | // Copyright 2012 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "bytes" | ||
9 | "io" | ||
10 | "io/ioutil" | ||
11 | |||
12 | "golang.org/x/crypto/openpgp/errors" | ||
13 | ) | ||
14 | |||
15 | // OpaquePacket represents an OpenPGP packet as raw, unparsed data. This is | ||
16 | // useful for splitting and storing the original packet contents separately, | ||
17 | // handling unsupported packet types or accessing parts of the packet not yet | ||
18 | // implemented by this package. | ||
19 | type OpaquePacket struct { | ||
20 | // Packet type | ||
21 | Tag uint8 | ||
22 | // Reason why the packet was parsed opaquely | ||
23 | Reason error | ||
24 | // Binary contents of the packet data | ||
25 | Contents []byte | ||
26 | } | ||
27 | |||
28 | func (op *OpaquePacket) parse(r io.Reader) (err error) { | ||
29 | op.Contents, err = ioutil.ReadAll(r) | ||
30 | return | ||
31 | } | ||
32 | |||
33 | // Serialize marshals the packet to a writer in its original form, including | ||
34 | // the packet header. | ||
35 | func (op *OpaquePacket) Serialize(w io.Writer) (err error) { | ||
36 | err = serializeHeader(w, packetType(op.Tag), len(op.Contents)) | ||
37 | if err == nil { | ||
38 | _, err = w.Write(op.Contents) | ||
39 | } | ||
40 | return | ||
41 | } | ||
42 | |||
43 | // Parse attempts to parse the opaque contents into a structure supported by | ||
44 | // this package. If the packet is not known then the result will be another | ||
45 | // OpaquePacket. | ||
46 | func (op *OpaquePacket) Parse() (p Packet, err error) { | ||
47 | hdr := bytes.NewBuffer(nil) | ||
48 | err = serializeHeader(hdr, packetType(op.Tag), len(op.Contents)) | ||
49 | if err != nil { | ||
50 | op.Reason = err | ||
51 | return op, err | ||
52 | } | ||
53 | p, err = Read(io.MultiReader(hdr, bytes.NewBuffer(op.Contents))) | ||
54 | if err != nil { | ||
55 | op.Reason = err | ||
56 | p = op | ||
57 | } | ||
58 | return | ||
59 | } | ||
60 | |||
61 | // OpaqueReader reads OpaquePackets from an io.Reader. | ||
62 | type OpaqueReader struct { | ||
63 | r io.Reader | ||
64 | } | ||
65 | |||
66 | func NewOpaqueReader(r io.Reader) *OpaqueReader { | ||
67 | return &OpaqueReader{r: r} | ||
68 | } | ||
69 | |||
70 | // Read the next OpaquePacket. | ||
71 | func (or *OpaqueReader) Next() (op *OpaquePacket, err error) { | ||
72 | tag, _, contents, err := readHeader(or.r) | ||
73 | if err != nil { | ||
74 | return | ||
75 | } | ||
76 | op = &OpaquePacket{Tag: uint8(tag), Reason: err} | ||
77 | err = op.parse(contents) | ||
78 | if err != nil { | ||
79 | consumeAll(contents) | ||
80 | } | ||
81 | return | ||
82 | } | ||
83 | |||
84 | // OpaqueSubpacket represents an unparsed OpenPGP subpacket, | ||
85 | // as found in signature and user attribute packets. | ||
86 | type OpaqueSubpacket struct { | ||
87 | SubType uint8 | ||
88 | Contents []byte | ||
89 | } | ||
90 | |||
91 | // OpaqueSubpackets extracts opaque, unparsed OpenPGP subpackets from | ||
92 | // their byte representation. | ||
93 | func OpaqueSubpackets(contents []byte) (result []*OpaqueSubpacket, err error) { | ||
94 | var ( | ||
95 | subHeaderLen int | ||
96 | subPacket *OpaqueSubpacket | ||
97 | ) | ||
98 | for len(contents) > 0 { | ||
99 | subHeaderLen, subPacket, err = nextSubpacket(contents) | ||
100 | if err != nil { | ||
101 | break | ||
102 | } | ||
103 | result = append(result, subPacket) | ||
104 | contents = contents[subHeaderLen+len(subPacket.Contents):] | ||
105 | } | ||
106 | return | ||
107 | } | ||
108 | |||
109 | func nextSubpacket(contents []byte) (subHeaderLen int, subPacket *OpaqueSubpacket, err error) { | ||
110 | // RFC 4880, section 5.2.3.1 | ||
111 | var subLen uint32 | ||
112 | if len(contents) < 1 { | ||
113 | goto Truncated | ||
114 | } | ||
115 | subPacket = &OpaqueSubpacket{} | ||
116 | switch { | ||
117 | case contents[0] < 192: | ||
118 | subHeaderLen = 2 // 1 length byte, 1 subtype byte | ||
119 | if len(contents) < subHeaderLen { | ||
120 | goto Truncated | ||
121 | } | ||
122 | subLen = uint32(contents[0]) | ||
123 | contents = contents[1:] | ||
124 | case contents[0] < 255: | ||
125 | subHeaderLen = 3 // 2 length bytes, 1 subtype | ||
126 | if len(contents) < subHeaderLen { | ||
127 | goto Truncated | ||
128 | } | ||
129 | subLen = uint32(contents[0]-192)<<8 + uint32(contents[1]) + 192 | ||
130 | contents = contents[2:] | ||
131 | default: | ||
132 | subHeaderLen = 6 // 5 length bytes, 1 subtype | ||
133 | if len(contents) < subHeaderLen { | ||
134 | goto Truncated | ||
135 | } | ||
136 | subLen = uint32(contents[1])<<24 | | ||
137 | uint32(contents[2])<<16 | | ||
138 | uint32(contents[3])<<8 | | ||
139 | uint32(contents[4]) | ||
140 | contents = contents[5:] | ||
141 | } | ||
142 | if subLen > uint32(len(contents)) || subLen == 0 { | ||
143 | goto Truncated | ||
144 | } | ||
145 | subPacket.SubType = contents[0] | ||
146 | subPacket.Contents = contents[1:subLen] | ||
147 | return | ||
148 | Truncated: | ||
149 | err = errors.StructuralError("subpacket truncated") | ||
150 | return | ||
151 | } | ||
152 | |||
153 | func (osp *OpaqueSubpacket) Serialize(w io.Writer) (err error) { | ||
154 | buf := make([]byte, 6) | ||
155 | n := serializeSubpacketLength(buf, len(osp.Contents)+1) | ||
156 | buf[n] = osp.SubType | ||
157 | if _, err = w.Write(buf[:n+1]); err != nil { | ||
158 | return | ||
159 | } | ||
160 | _, err = w.Write(osp.Contents) | ||
161 | return | ||
162 | } | ||
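nextSubpacket above handles the one-, two-, and five-octet subpacket length forms from RFC 4880, section 5.2.3.1, where the stored length includes the subtype octet. A standalone sketch of just the length decoding; the helper name is hypothetical and not part of this package:

```go
package main

import (
	"errors"
	"fmt"
)

// decodeSubpacketLength returns the subpacket body length (including the
// subtype octet) and the number of octets the length field itself occupied,
// following the same three branches as nextSubpacket above.
func decodeSubpacketLength(b []byte) (bodyLen uint32, lenOctets int, err error) {
	if len(b) == 0 {
		return 0, 0, errors.New("empty input")
	}
	switch {
	case b[0] < 192:
		return uint32(b[0]), 1, nil
	case b[0] < 255:
		if len(b) < 2 {
			return 0, 0, errors.New("truncated two-octet length")
		}
		return uint32(b[0]-192)<<8 + uint32(b[1]) + 192, 2, nil
	default:
		if len(b) < 5 {
			return 0, 0, errors.New("truncated five-octet length")
		}
		return uint32(b[1])<<24 | uint32(b[2])<<16 | uint32(b[3])<<8 | uint32(b[4]), 5, nil
	}
}

func main() {
	// 0xC0 0x00 is the two-octet form: (0xC0-192)<<8 + 0x00 + 192 = 192.
	n, used, err := decodeSubpacketLength([]byte{0xC0, 0x00})
	fmt.Println(n, used, err) // 192 2 <nil>
}
```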
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/packet.go b/vendor/golang.org/x/crypto/openpgp/packet/packet.go new file mode 100644 index 0000000..3eded93 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/packet.go | |||
@@ -0,0 +1,537 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | // Package packet implements parsing and serialization of OpenPGP packets, as | ||
6 | // specified in RFC 4880. | ||
7 | package packet // import "golang.org/x/crypto/openpgp/packet" | ||
8 | |||
9 | import ( | ||
10 | "bufio" | ||
11 | "crypto/aes" | ||
12 | "crypto/cipher" | ||
13 | "crypto/des" | ||
14 | "golang.org/x/crypto/cast5" | ||
15 | "golang.org/x/crypto/openpgp/errors" | ||
16 | "io" | ||
17 | "math/big" | ||
18 | ) | ||
19 | |||
20 | // readFull is the same as io.ReadFull except that reading zero bytes returns | ||
21 | // ErrUnexpectedEOF rather than EOF. | ||
22 | func readFull(r io.Reader, buf []byte) (n int, err error) { | ||
23 | n, err = io.ReadFull(r, buf) | ||
24 | if err == io.EOF { | ||
25 | err = io.ErrUnexpectedEOF | ||
26 | } | ||
27 | return | ||
28 | } | ||
29 | |||
30 | // readLength reads an OpenPGP length from r. See RFC 4880, section 4.2.2. | ||
31 | func readLength(r io.Reader) (length int64, isPartial bool, err error) { | ||
32 | var buf [4]byte | ||
33 | _, err = readFull(r, buf[:1]) | ||
34 | if err != nil { | ||
35 | return | ||
36 | } | ||
37 | switch { | ||
38 | case buf[0] < 192: | ||
39 | length = int64(buf[0]) | ||
40 | case buf[0] < 224: | ||
41 | length = int64(buf[0]-192) << 8 | ||
42 | _, err = readFull(r, buf[0:1]) | ||
43 | if err != nil { | ||
44 | return | ||
45 | } | ||
46 | length += int64(buf[0]) + 192 | ||
47 | case buf[0] < 255: | ||
48 | length = int64(1) << (buf[0] & 0x1f) | ||
49 | isPartial = true | ||
50 | default: | ||
51 | _, err = readFull(r, buf[0:4]) | ||
52 | if err != nil { | ||
53 | return | ||
54 | } | ||
55 | length = int64(buf[0])<<24 | | ||
56 | int64(buf[1])<<16 | | ||
57 | int64(buf[2])<<8 | | ||
58 | int64(buf[3]) | ||
59 | } | ||
60 | return | ||
61 | } | ||
62 | |||
63 | // partialLengthReader wraps an io.Reader and handles OpenPGP partial lengths. | ||
64 | // The continuation lengths are parsed and removed from the stream and EOF is | ||
65 | // returned at the end of the packet. See RFC 4880, section 4.2.2.4. | ||
66 | type partialLengthReader struct { | ||
67 | r io.Reader | ||
68 | remaining int64 | ||
69 | isPartial bool | ||
70 | } | ||
71 | |||
72 | func (r *partialLengthReader) Read(p []byte) (n int, err error) { | ||
73 | for r.remaining == 0 { | ||
74 | if !r.isPartial { | ||
75 | return 0, io.EOF | ||
76 | } | ||
77 | r.remaining, r.isPartial, err = readLength(r.r) | ||
78 | if err != nil { | ||
79 | return 0, err | ||
80 | } | ||
81 | } | ||
82 | |||
83 | toRead := int64(len(p)) | ||
84 | if toRead > r.remaining { | ||
85 | toRead = r.remaining | ||
86 | } | ||
87 | |||
88 | n, err = r.r.Read(p[:int(toRead)]) | ||
89 | r.remaining -= int64(n) | ||
90 | if n < int(toRead) && err == io.EOF { | ||
91 | err = io.ErrUnexpectedEOF | ||
92 | } | ||
93 | return | ||
94 | } | ||
95 | |||
96 | // partialLengthWriter writes a stream of data using OpenPGP partial lengths. | ||
97 | // See RFC 4880, section 4.2.2.4. | ||
98 | type partialLengthWriter struct { | ||
99 | w io.WriteCloser | ||
100 | lengthByte [1]byte | ||
101 | } | ||
102 | |||
103 | func (w *partialLengthWriter) Write(p []byte) (n int, err error) { | ||
104 | for len(p) > 0 { | ||
105 | for power := uint(14); power < 32; power-- { | ||
106 | l := 1 << power | ||
107 | if len(p) >= l { | ||
108 | w.lengthByte[0] = 224 + uint8(power) | ||
109 | _, err = w.w.Write(w.lengthByte[:]) | ||
110 | if err != nil { | ||
111 | return | ||
112 | } | ||
113 | var m int | ||
114 | m, err = w.w.Write(p[:l]) | ||
115 | n += m | ||
116 | if err != nil { | ||
117 | return | ||
118 | } | ||
119 | p = p[l:] | ||
120 | break | ||
121 | } | ||
122 | } | ||
123 | } | ||
124 | return | ||
125 | } | ||
126 | |||
127 | func (w *partialLengthWriter) Close() error { | ||
128 | w.lengthByte[0] = 0 | ||
129 | _, err := w.w.Write(w.lengthByte[:]) | ||
130 | if err != nil { | ||
131 | return err | ||
132 | } | ||
133 | return w.w.Close() | ||
134 | } | ||
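Each chunk written by partialLengthWriter above is announced by a single octet in the 224-254 range; Write starts from power 14, i.e. 16 KiB chunks. A small sketch of how such an octet maps back to a chunk size (RFC 4880, section 4.2.2.4); the helper name is illustrative:

```go
package main

import "fmt"

// partialChunkSize decodes a partial body length octet: values 224-254
// announce a chunk of 1 << (octet & 0x1f) bytes, with more chunks to follow.
// 255 introduces a five-octet length and values below 224 are definite
// lengths, so neither counts as a partial length here.
func partialChunkSize(octet byte) (chunk int64, ok bool) {
	if octet < 224 || octet == 255 {
		return 0, false
	}
	return int64(1) << (octet & 0x1f), true
}

func main() {
	chunk, ok := partialChunkSize(224 + 14) // the first size Write above tries
	fmt.Println(chunk, ok)                  // 16384 true
}
```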
135 | |||
136 | // A spanReader is an io.LimitReader, but it returns ErrUnexpectedEOF if the | ||
137 | // underlying Reader returns EOF before the limit has been reached. | ||
138 | type spanReader struct { | ||
139 | r io.Reader | ||
140 | n int64 | ||
141 | } | ||
142 | |||
143 | func (l *spanReader) Read(p []byte) (n int, err error) { | ||
144 | if l.n <= 0 { | ||
145 | return 0, io.EOF | ||
146 | } | ||
147 | if int64(len(p)) > l.n { | ||
148 | p = p[0:l.n] | ||
149 | } | ||
150 | n, err = l.r.Read(p) | ||
151 | l.n -= int64(n) | ||
152 | if l.n > 0 && err == io.EOF { | ||
153 | err = io.ErrUnexpectedEOF | ||
154 | } | ||
155 | return | ||
156 | } | ||
157 | |||
158 | // readHeader parses a packet header and returns an io.Reader which will return | ||
159 | // the contents of the packet. See RFC 4880, section 4.2. | ||
160 | func readHeader(r io.Reader) (tag packetType, length int64, contents io.Reader, err error) { | ||
161 | var buf [4]byte | ||
162 | _, err = io.ReadFull(r, buf[:1]) | ||
163 | if err != nil { | ||
164 | return | ||
165 | } | ||
166 | if buf[0]&0x80 == 0 { | ||
167 | err = errors.StructuralError("tag byte does not have MSB set") | ||
168 | return | ||
169 | } | ||
170 | if buf[0]&0x40 == 0 { | ||
171 | // Old format packet | ||
172 | tag = packetType((buf[0] & 0x3f) >> 2) | ||
173 | lengthType := buf[0] & 3 | ||
174 | if lengthType == 3 { | ||
175 | length = -1 | ||
176 | contents = r | ||
177 | return | ||
178 | } | ||
179 | lengthBytes := 1 << lengthType | ||
180 | _, err = readFull(r, buf[0:lengthBytes]) | ||
181 | if err != nil { | ||
182 | return | ||
183 | } | ||
184 | for i := 0; i < lengthBytes; i++ { | ||
185 | length <<= 8 | ||
186 | length |= int64(buf[i]) | ||
187 | } | ||
188 | contents = &spanReader{r, length} | ||
189 | return | ||
190 | } | ||
191 | |||
192 | // New format packet | ||
193 | tag = packetType(buf[0] & 0x3f) | ||
194 | length, isPartial, err := readLength(r) | ||
195 | if err != nil { | ||
196 | return | ||
197 | } | ||
198 | if isPartial { | ||
199 | contents = &partialLengthReader{ | ||
200 | remaining: length, | ||
201 | isPartial: true, | ||
202 | r: r, | ||
203 | } | ||
204 | length = -1 | ||
205 | } else { | ||
206 | contents = &spanReader{r, length} | ||
207 | } | ||
208 | return | ||
209 | } | ||
210 | |||
211 | // serializeHeader writes an OpenPGP packet header to w. See RFC 4880, section | ||
212 | // 4.2. | ||
213 | func serializeHeader(w io.Writer, ptype packetType, length int) (err error) { | ||
214 | var buf [6]byte | ||
215 | var n int | ||
216 | |||
217 | buf[0] = 0x80 | 0x40 | byte(ptype) | ||
218 | if length < 192 { | ||
219 | buf[1] = byte(length) | ||
220 | n = 2 | ||
221 | } else if length < 8384 { | ||
222 | length -= 192 | ||
223 | buf[1] = 192 + byte(length>>8) | ||
224 | buf[2] = byte(length) | ||
225 | n = 3 | ||
226 | } else { | ||
227 | buf[1] = 255 | ||
228 | buf[2] = byte(length >> 24) | ||
229 | buf[3] = byte(length >> 16) | ||
230 | buf[4] = byte(length >> 8) | ||
231 | buf[5] = byte(length) | ||
232 | n = 6 | ||
233 | } | ||
234 | |||
235 | _, err = w.Write(buf[:n]) | ||
236 | return | ||
237 | } | ||
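serializeHeader above chooses between the one-octet, two-octet, and five-octet new-format length encodings after the tag byte. A hedged sketch of the length part alone (the tag octet is omitted and the function name is illustrative):

```go
package main

import "fmt"

// encodeNewFormatLength encodes a definite body length for a new-format
// packet header (RFC 4880, section 4.2.2), matching the three branches of
// serializeHeader above: one octet below 192, two octets up to 8383, and
// the five-octet form otherwise.
func encodeNewFormatLength(length int) []byte {
	switch {
	case length < 192:
		return []byte{byte(length)}
	case length < 8384:
		length -= 192
		return []byte{192 + byte(length>>8), byte(length)}
	default:
		return []byte{255, byte(length >> 24), byte(length >> 16), byte(length >> 8), byte(length)}
	}
}

func main() {
	for _, n := range []int{100, 1000, 100000} {
		fmt.Printf("%d -> % x\n", n, encodeNewFormatLength(n))
	}
}
```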
238 | |||
239 | // serializeStreamHeader writes an OpenPGP packet header to w where the | ||
240 | // length of the packet is unknown. It returns an io.WriteCloser which can be | ||
241 | // used to write the contents of the packet. See RFC 4880, section 4.2. | ||
242 | func serializeStreamHeader(w io.WriteCloser, ptype packetType) (out io.WriteCloser, err error) { | ||
243 | var buf [1]byte | ||
244 | buf[0] = 0x80 | 0x40 | byte(ptype) | ||
245 | _, err = w.Write(buf[:]) | ||
246 | if err != nil { | ||
247 | return | ||
248 | } | ||
249 | out = &partialLengthWriter{w: w} | ||
250 | return | ||
251 | } | ||
252 | |||
253 | // Packet represents an OpenPGP packet. Users are expected to try casting | ||
254 | // instances of this interface to specific packet types. | ||
255 | type Packet interface { | ||
256 | parse(io.Reader) error | ||
257 | } | ||
258 | |||
259 | // consumeAll reads from the given Reader until error, returning the number of | ||
260 | // bytes read. | ||
261 | func consumeAll(r io.Reader) (n int64, err error) { | ||
262 | var m int | ||
263 | var buf [1024]byte | ||
264 | |||
265 | for { | ||
266 | m, err = r.Read(buf[:]) | ||
267 | n += int64(m) | ||
268 | if err == io.EOF { | ||
269 | err = nil | ||
270 | return | ||
271 | } | ||
272 | if err != nil { | ||
273 | return | ||
274 | } | ||
275 | } | ||
276 | } | ||
277 | |||
278 | // packetType represents the numeric ids of the different OpenPGP packet types. See | ||
279 | // http://www.iana.org/assignments/pgp-parameters/pgp-parameters.xhtml#pgp-parameters-2 | ||
280 | type packetType uint8 | ||
281 | |||
282 | const ( | ||
283 | packetTypeEncryptedKey packetType = 1 | ||
284 | packetTypeSignature packetType = 2 | ||
285 | packetTypeSymmetricKeyEncrypted packetType = 3 | ||
286 | packetTypeOnePassSignature packetType = 4 | ||
287 | packetTypePrivateKey packetType = 5 | ||
288 | packetTypePublicKey packetType = 6 | ||
289 | packetTypePrivateSubkey packetType = 7 | ||
290 | packetTypeCompressed packetType = 8 | ||
291 | packetTypeSymmetricallyEncrypted packetType = 9 | ||
292 | packetTypeLiteralData packetType = 11 | ||
293 | packetTypeUserId packetType = 13 | ||
294 | packetTypePublicSubkey packetType = 14 | ||
295 | packetTypeUserAttribute packetType = 17 | ||
296 | packetTypeSymmetricallyEncryptedMDC packetType = 18 | ||
297 | ) | ||
298 | |||
299 | // peekVersion detects the version of a public key packet about to | ||
300 | // be read. A bufio.Reader at the original position of the io.Reader | ||
301 | // is returned. | ||
302 | func peekVersion(r io.Reader) (bufr *bufio.Reader, ver byte, err error) { | ||
303 | bufr = bufio.NewReader(r) | ||
304 | var verBuf []byte | ||
305 | if verBuf, err = bufr.Peek(1); err != nil { | ||
306 | return | ||
307 | } | ||
308 | ver = verBuf[0] | ||
309 | return | ||
310 | } | ||
311 | |||
312 | // Read reads a single OpenPGP packet from the given io.Reader. If there is an | ||
313 | // error parsing a packet, the whole packet is consumed from the input. | ||
314 | func Read(r io.Reader) (p Packet, err error) { | ||
315 | tag, _, contents, err := readHeader(r) | ||
316 | if err != nil { | ||
317 | return | ||
318 | } | ||
319 | |||
320 | switch tag { | ||
321 | case packetTypeEncryptedKey: | ||
322 | p = new(EncryptedKey) | ||
323 | case packetTypeSignature: | ||
324 | var version byte | ||
325 | // Detect signature version | ||
326 | if contents, version, err = peekVersion(contents); err != nil { | ||
327 | return | ||
328 | } | ||
329 | if version < 4 { | ||
330 | p = new(SignatureV3) | ||
331 | } else { | ||
332 | p = new(Signature) | ||
333 | } | ||
334 | case packetTypeSymmetricKeyEncrypted: | ||
335 | p = new(SymmetricKeyEncrypted) | ||
336 | case packetTypeOnePassSignature: | ||
337 | p = new(OnePassSignature) | ||
338 | case packetTypePrivateKey, packetTypePrivateSubkey: | ||
339 | pk := new(PrivateKey) | ||
340 | if tag == packetTypePrivateSubkey { | ||
341 | pk.IsSubkey = true | ||
342 | } | ||
343 | p = pk | ||
344 | case packetTypePublicKey, packetTypePublicSubkey: | ||
345 | var version byte | ||
346 | if contents, version, err = peekVersion(contents); err != nil { | ||
347 | return | ||
348 | } | ||
349 | isSubkey := tag == packetTypePublicSubkey | ||
350 | if version < 4 { | ||
351 | p = &PublicKeyV3{IsSubkey: isSubkey} | ||
352 | } else { | ||
353 | p = &PublicKey{IsSubkey: isSubkey} | ||
354 | } | ||
355 | case packetTypeCompressed: | ||
356 | p = new(Compressed) | ||
357 | case packetTypeSymmetricallyEncrypted: | ||
358 | p = new(SymmetricallyEncrypted) | ||
359 | case packetTypeLiteralData: | ||
360 | p = new(LiteralData) | ||
361 | case packetTypeUserId: | ||
362 | p = new(UserId) | ||
363 | case packetTypeUserAttribute: | ||
364 | p = new(UserAttribute) | ||
365 | case packetTypeSymmetricallyEncryptedMDC: | ||
366 | se := new(SymmetricallyEncrypted) | ||
367 | se.MDC = true | ||
368 | p = se | ||
369 | default: | ||
370 | err = errors.UnknownPacketTypeError(tag) | ||
371 | } | ||
372 | if p != nil { | ||
373 | err = p.parse(contents) | ||
374 | } | ||
375 | if err != nil { | ||
376 | consumeAll(contents) | ||
377 | } | ||
378 | return | ||
379 | } | ||
380 | |||
381 | // SignatureType represents the different semantic meanings of an OpenPGP | ||
382 | // signature. See RFC 4880, section 5.2.1. | ||
383 | type SignatureType uint8 | ||
384 | |||
385 | const ( | ||
386 | SigTypeBinary SignatureType = 0 | ||
387 | SigTypeText = 1 | ||
388 | SigTypeGenericCert = 0x10 | ||
389 | SigTypePersonaCert = 0x11 | ||
390 | SigTypeCasualCert = 0x12 | ||
391 | SigTypePositiveCert = 0x13 | ||
392 | SigTypeSubkeyBinding = 0x18 | ||
393 | SigTypePrimaryKeyBinding = 0x19 | ||
394 | SigTypeDirectSignature = 0x1F | ||
395 | SigTypeKeyRevocation = 0x20 | ||
396 | SigTypeSubkeyRevocation = 0x28 | ||
397 | ) | ||
398 | |||
399 | // PublicKeyAlgorithm represents the different public key systems specified for | ||
400 | // OpenPGP. See | ||
401 | // http://www.iana.org/assignments/pgp-parameters/pgp-parameters.xhtml#pgp-parameters-12 | ||
402 | type PublicKeyAlgorithm uint8 | ||
403 | |||
404 | const ( | ||
405 | PubKeyAlgoRSA PublicKeyAlgorithm = 1 | ||
406 | PubKeyAlgoRSAEncryptOnly PublicKeyAlgorithm = 2 | ||
407 | PubKeyAlgoRSASignOnly PublicKeyAlgorithm = 3 | ||
408 | PubKeyAlgoElGamal PublicKeyAlgorithm = 16 | ||
409 | PubKeyAlgoDSA PublicKeyAlgorithm = 17 | ||
410 | // RFC 6637, Section 5. | ||
411 | PubKeyAlgoECDH PublicKeyAlgorithm = 18 | ||
412 | PubKeyAlgoECDSA PublicKeyAlgorithm = 19 | ||
413 | ) | ||
414 | |||
415 | // CanEncrypt returns true if it's possible to encrypt a message to a public | ||
416 | // key of the given type. | ||
417 | func (pka PublicKeyAlgorithm) CanEncrypt() bool { | ||
418 | switch pka { | ||
419 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoElGamal: | ||
420 | return true | ||
421 | } | ||
422 | return false | ||
423 | } | ||
424 | |||
425 | // CanSign returns true if it's possible for a public key of the given type to | ||
426 | // sign a message. | ||
427 | func (pka PublicKeyAlgorithm) CanSign() bool { | ||
428 | switch pka { | ||
429 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoDSA, PubKeyAlgoECDSA: | ||
430 | return true | ||
431 | } | ||
432 | return false | ||
433 | } | ||
434 | |||
435 | // CipherFunction represents the different block ciphers specified for OpenPGP. See | ||
436 | // http://www.iana.org/assignments/pgp-parameters/pgp-parameters.xhtml#pgp-parameters-13 | ||
437 | type CipherFunction uint8 | ||
438 | |||
439 | const ( | ||
440 | Cipher3DES CipherFunction = 2 | ||
441 | CipherCAST5 CipherFunction = 3 | ||
442 | CipherAES128 CipherFunction = 7 | ||
443 | CipherAES192 CipherFunction = 8 | ||
444 | CipherAES256 CipherFunction = 9 | ||
445 | ) | ||
446 | |||
447 | // KeySize returns the key size, in bytes, of cipher. | ||
448 | func (cipher CipherFunction) KeySize() int { | ||
449 | switch cipher { | ||
450 | case Cipher3DES: | ||
451 | return 24 | ||
452 | case CipherCAST5: | ||
453 | return cast5.KeySize | ||
454 | case CipherAES128: | ||
455 | return 16 | ||
456 | case CipherAES192: | ||
457 | return 24 | ||
458 | case CipherAES256: | ||
459 | return 32 | ||
460 | } | ||
461 | return 0 | ||
462 | } | ||
463 | |||
464 | // blockSize returns the block size, in bytes, of cipher. | ||
465 | func (cipher CipherFunction) blockSize() int { | ||
466 | switch cipher { | ||
467 | case Cipher3DES: | ||
468 | return des.BlockSize | ||
469 | case CipherCAST5: | ||
470 | return 8 | ||
471 | case CipherAES128, CipherAES192, CipherAES256: | ||
472 | return 16 | ||
473 | } | ||
474 | return 0 | ||
475 | } | ||
476 | |||
477 | // new returns a fresh instance of the given cipher. | ||
478 | func (cipher CipherFunction) new(key []byte) (block cipher.Block) { | ||
479 | switch cipher { | ||
480 | case Cipher3DES: | ||
481 | block, _ = des.NewTripleDESCipher(key) | ||
482 | case CipherCAST5: | ||
483 | block, _ = cast5.NewCipher(key) | ||
484 | case CipherAES128, CipherAES192, CipherAES256: | ||
485 | block, _ = aes.NewCipher(key) | ||
486 | } | ||
487 | return | ||
488 | } | ||
489 | |||
490 | // readMPI reads a big integer from r. The bit length returned is the bit | ||
491 | // length that was specified in r. This is preserved so that the integer can be | ||
492 | // reserialized exactly. | ||
493 | func readMPI(r io.Reader) (mpi []byte, bitLength uint16, err error) { | ||
494 | var buf [2]byte | ||
495 | _, err = readFull(r, buf[0:]) | ||
496 | if err != nil { | ||
497 | return | ||
498 | } | ||
499 | bitLength = uint16(buf[0])<<8 | uint16(buf[1]) | ||
500 | numBytes := (int(bitLength) + 7) / 8 | ||
501 | mpi = make([]byte, numBytes) | ||
502 | _, err = readFull(r, mpi) | ||
503 | return | ||
504 | } | ||
505 | |||
506 | // mpiLength returns the length of the given *big.Int when serialized as an | ||
507 | // MPI. | ||
508 | func mpiLength(n *big.Int) (mpiLengthInBytes int) { | ||
509 | mpiLengthInBytes = 2 /* MPI length */ | ||
510 | mpiLengthInBytes += (n.BitLen() + 7) / 8 | ||
511 | return | ||
512 | } | ||
513 | |||
514 | // writeMPI serializes a big integer to w. | ||
515 | func writeMPI(w io.Writer, bitLength uint16, mpiBytes []byte) (err error) { | ||
516 | _, err = w.Write([]byte{byte(bitLength >> 8), byte(bitLength)}) | ||
517 | if err == nil { | ||
518 | _, err = w.Write(mpiBytes) | ||
519 | } | ||
520 | return | ||
521 | } | ||
522 | |||
523 | // writeBig serializes a *big.Int to w. | ||
524 | func writeBig(w io.Writer, i *big.Int) error { | ||
525 | return writeMPI(w, uint16(i.BitLen()), i.Bytes()) | ||
526 | } | ||
527 | |||
528 | // CompressionAlgo represents the different compression algorithms | ||
529 | // supported by OpenPGP (except for BZIP2, which is not currently | ||
530 | // supported). See Section 9.3 of RFC 4880. | ||
531 | type CompressionAlgo uint8 | ||
532 | |||
533 | const ( | ||
534 | CompressionNone CompressionAlgo = 0 | ||
535 | CompressionZIP CompressionAlgo = 1 | ||
536 | CompressionZLIB CompressionAlgo = 2 | ||
537 | ) | ||
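readMPI and writeMPI above handle OpenPGP multiprecision integers: a two-octet bit count followed by the magnitude bytes (RFC 4880, section 3.2). A compact sketch of the encoding side using math/big; the helper name is illustrative and not part of the package:

```go
package main

import (
	"fmt"
	"math/big"
)

// encodeMPI renders a big integer in MPI form: two length octets holding
// the bit count, then the big-endian magnitude, the same layout readMPI
// parses and writeBig produces above.
func encodeMPI(n *big.Int) []byte {
	bitLen := n.BitLen()
	out := []byte{byte(bitLen >> 8), byte(bitLen)}
	return append(out, n.Bytes()...)
}

func main() {
	n := big.NewInt(65537)            // a common RSA public exponent, 17 bits
	fmt.Printf("% x\n", encodeMPI(n)) // 00 11 01 00 01
}
```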
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/private_key.go b/vendor/golang.org/x/crypto/openpgp/packet/private_key.go new file mode 100644 index 0000000..34734cc --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/private_key.go | |||
@@ -0,0 +1,380 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "bytes" | ||
9 | "crypto" | ||
10 | "crypto/cipher" | ||
11 | "crypto/dsa" | ||
12 | "crypto/ecdsa" | ||
13 | "crypto/rsa" | ||
14 | "crypto/sha1" | ||
15 | "io" | ||
16 | "io/ioutil" | ||
17 | "math/big" | ||
18 | "strconv" | ||
19 | "time" | ||
20 | |||
21 | "golang.org/x/crypto/openpgp/elgamal" | ||
22 | "golang.org/x/crypto/openpgp/errors" | ||
23 | "golang.org/x/crypto/openpgp/s2k" | ||
24 | ) | ||
25 | |||
26 | // PrivateKey represents a possibly encrypted private key. See RFC 4880, | ||
27 | // section 5.5.3. | ||
28 | type PrivateKey struct { | ||
29 | PublicKey | ||
30 | Encrypted bool // if true then the private key is unavailable until Decrypt has been called. | ||
31 | encryptedData []byte | ||
32 | cipher CipherFunction | ||
33 | s2k func(out, in []byte) | ||
34 | PrivateKey interface{} // An *{rsa|dsa|ecdsa}.PrivateKey or a crypto.Signer. | ||
35 | sha1Checksum bool | ||
36 | iv []byte | ||
37 | } | ||
38 | |||
39 | func NewRSAPrivateKey(currentTime time.Time, priv *rsa.PrivateKey) *PrivateKey { | ||
40 | pk := new(PrivateKey) | ||
41 | pk.PublicKey = *NewRSAPublicKey(currentTime, &priv.PublicKey) | ||
42 | pk.PrivateKey = priv | ||
43 | return pk | ||
44 | } | ||
45 | |||
46 | func NewDSAPrivateKey(currentTime time.Time, priv *dsa.PrivateKey) *PrivateKey { | ||
47 | pk := new(PrivateKey) | ||
48 | pk.PublicKey = *NewDSAPublicKey(currentTime, &priv.PublicKey) | ||
49 | pk.PrivateKey = priv | ||
50 | return pk | ||
51 | } | ||
52 | |||
53 | func NewElGamalPrivateKey(currentTime time.Time, priv *elgamal.PrivateKey) *PrivateKey { | ||
54 | pk := new(PrivateKey) | ||
55 | pk.PublicKey = *NewElGamalPublicKey(currentTime, &priv.PublicKey) | ||
56 | pk.PrivateKey = priv | ||
57 | return pk | ||
58 | } | ||
59 | |||
60 | func NewECDSAPrivateKey(currentTime time.Time, priv *ecdsa.PrivateKey) *PrivateKey { | ||
61 | pk := new(PrivateKey) | ||
62 | pk.PublicKey = *NewECDSAPublicKey(currentTime, &priv.PublicKey) | ||
63 | pk.PrivateKey = priv | ||
64 | return pk | ||
65 | } | ||
66 | |||
67 | // NewSignerPrivateKey creates a sign-only PrivateKey from a crypto.Signer that | ||
68 | // implements RSA or ECDSA. | ||
69 | func NewSignerPrivateKey(currentTime time.Time, signer crypto.Signer) *PrivateKey { | ||
70 | pk := new(PrivateKey) | ||
71 | switch pubkey := signer.Public().(type) { | ||
72 | case rsa.PublicKey: | ||
73 | pk.PublicKey = *NewRSAPublicKey(currentTime, &pubkey) | ||
74 | pk.PubKeyAlgo = PubKeyAlgoRSASignOnly | ||
75 | case ecdsa.PublicKey: | ||
76 | pk.PublicKey = *NewECDSAPublicKey(currentTime, &pubkey) | ||
77 | default: | ||
78 | panic("openpgp: unknown crypto.Signer type in NewSignerPrivateKey") | ||
79 | } | ||
80 | pk.PrivateKey = signer | ||
81 | return pk | ||
82 | } | ||
83 | |||
84 | func (pk *PrivateKey) parse(r io.Reader) (err error) { | ||
85 | err = (&pk.PublicKey).parse(r) | ||
86 | if err != nil { | ||
87 | return | ||
88 | } | ||
89 | var buf [1]byte | ||
90 | _, err = readFull(r, buf[:]) | ||
91 | if err != nil { | ||
92 | return | ||
93 | } | ||
94 | |||
95 | s2kType := buf[0] | ||
96 | |||
97 | switch s2kType { | ||
98 | case 0: | ||
99 | pk.s2k = nil | ||
100 | pk.Encrypted = false | ||
101 | case 254, 255: | ||
102 | _, err = readFull(r, buf[:]) | ||
103 | if err != nil { | ||
104 | return | ||
105 | } | ||
106 | pk.cipher = CipherFunction(buf[0]) | ||
107 | pk.Encrypted = true | ||
108 | pk.s2k, err = s2k.Parse(r) | ||
109 | if err != nil { | ||
110 | return | ||
111 | } | ||
112 | if s2kType == 254 { | ||
113 | pk.sha1Checksum = true | ||
114 | } | ||
115 | default: | ||
116 | return errors.UnsupportedError("deprecated s2k function in private key") | ||
117 | } | ||
118 | |||
119 | if pk.Encrypted { | ||
120 | blockSize := pk.cipher.blockSize() | ||
121 | if blockSize == 0 { | ||
122 | return errors.UnsupportedError("unsupported cipher in private key: " + strconv.Itoa(int(pk.cipher))) | ||
123 | } | ||
124 | pk.iv = make([]byte, blockSize) | ||
125 | _, err = readFull(r, pk.iv) | ||
126 | if err != nil { | ||
127 | return | ||
128 | } | ||
129 | } | ||
130 | |||
131 | pk.encryptedData, err = ioutil.ReadAll(r) | ||
132 | if err != nil { | ||
133 | return | ||
134 | } | ||
135 | |||
136 | if !pk.Encrypted { | ||
137 | return pk.parsePrivateKey(pk.encryptedData) | ||
138 | } | ||
139 | |||
140 | return | ||
141 | } | ||
142 | |||
143 | func mod64kHash(d []byte) uint16 { | ||
144 | var h uint16 | ||
145 | for _, b := range d { | ||
146 | h += uint16(b) | ||
147 | } | ||
148 | return h | ||
149 | } | ||
150 | |||
151 | func (pk *PrivateKey) Serialize(w io.Writer) (err error) { | ||
152 | // TODO(agl): support encrypted private keys | ||
153 | buf := bytes.NewBuffer(nil) | ||
154 | err = pk.PublicKey.serializeWithoutHeaders(buf) | ||
155 | if err != nil { | ||
156 | return | ||
157 | } | ||
158 | buf.WriteByte(0 /* no encryption */) | ||
159 | |||
160 | privateKeyBuf := bytes.NewBuffer(nil) | ||
161 | |||
162 | switch priv := pk.PrivateKey.(type) { | ||
163 | case *rsa.PrivateKey: | ||
164 | err = serializeRSAPrivateKey(privateKeyBuf, priv) | ||
165 | case *dsa.PrivateKey: | ||
166 | err = serializeDSAPrivateKey(privateKeyBuf, priv) | ||
167 | case *elgamal.PrivateKey: | ||
168 | err = serializeElGamalPrivateKey(privateKeyBuf, priv) | ||
169 | case *ecdsa.PrivateKey: | ||
170 | err = serializeECDSAPrivateKey(privateKeyBuf, priv) | ||
171 | default: | ||
172 | err = errors.InvalidArgumentError("unknown private key type") | ||
173 | } | ||
174 | if err != nil { | ||
175 | return | ||
176 | } | ||
177 | |||
178 | ptype := packetTypePrivateKey | ||
179 | contents := buf.Bytes() | ||
180 | privateKeyBytes := privateKeyBuf.Bytes() | ||
181 | if pk.IsSubkey { | ||
182 | ptype = packetTypePrivateSubkey | ||
183 | } | ||
184 | err = serializeHeader(w, ptype, len(contents)+len(privateKeyBytes)+2) | ||
185 | if err != nil { | ||
186 | return | ||
187 | } | ||
188 | _, err = w.Write(contents) | ||
189 | if err != nil { | ||
190 | return | ||
191 | } | ||
192 | _, err = w.Write(privateKeyBytes) | ||
193 | if err != nil { | ||
194 | return | ||
195 | } | ||
196 | |||
197 | checksum := mod64kHash(privateKeyBytes) | ||
198 | var checksumBytes [2]byte | ||
199 | checksumBytes[0] = byte(checksum >> 8) | ||
200 | checksumBytes[1] = byte(checksum) | ||
201 | _, err = w.Write(checksumBytes[:]) | ||
202 | |||
203 | return | ||
204 | } | ||
205 | |||
206 | func serializeRSAPrivateKey(w io.Writer, priv *rsa.PrivateKey) error { | ||
207 | err := writeBig(w, priv.D) | ||
208 | if err != nil { | ||
209 | return err | ||
210 | } | ||
211 | err = writeBig(w, priv.Primes[1]) | ||
212 | if err != nil { | ||
213 | return err | ||
214 | } | ||
215 | err = writeBig(w, priv.Primes[0]) | ||
216 | if err != nil { | ||
217 | return err | ||
218 | } | ||
219 | return writeBig(w, priv.Precomputed.Qinv) | ||
220 | } | ||
221 | |||
222 | func serializeDSAPrivateKey(w io.Writer, priv *dsa.PrivateKey) error { | ||
223 | return writeBig(w, priv.X) | ||
224 | } | ||
225 | |||
226 | func serializeElGamalPrivateKey(w io.Writer, priv *elgamal.PrivateKey) error { | ||
227 | return writeBig(w, priv.X) | ||
228 | } | ||
229 | |||
230 | func serializeECDSAPrivateKey(w io.Writer, priv *ecdsa.PrivateKey) error { | ||
231 | return writeBig(w, priv.D) | ||
232 | } | ||
233 | |||
234 | // Decrypt decrypts an encrypted private key using a passphrase. | ||
235 | func (pk *PrivateKey) Decrypt(passphrase []byte) error { | ||
236 | if !pk.Encrypted { | ||
237 | return nil | ||
238 | } | ||
239 | |||
240 | key := make([]byte, pk.cipher.KeySize()) | ||
241 | pk.s2k(key, passphrase) | ||
242 | block := pk.cipher.new(key) | ||
243 | cfb := cipher.NewCFBDecrypter(block, pk.iv) | ||
244 | |||
245 | data := make([]byte, len(pk.encryptedData)) | ||
246 | cfb.XORKeyStream(data, pk.encryptedData) | ||
247 | |||
248 | if pk.sha1Checksum { | ||
249 | if len(data) < sha1.Size { | ||
250 | return errors.StructuralError("truncated private key data") | ||
251 | } | ||
252 | h := sha1.New() | ||
253 | h.Write(data[:len(data)-sha1.Size]) | ||
254 | sum := h.Sum(nil) | ||
255 | if !bytes.Equal(sum, data[len(data)-sha1.Size:]) { | ||
256 | return errors.StructuralError("private key checksum failure") | ||
257 | } | ||
258 | data = data[:len(data)-sha1.Size] | ||
259 | } else { | ||
260 | if len(data) < 2 { | ||
261 | return errors.StructuralError("truncated private key data") | ||
262 | } | ||
263 | var sum uint16 | ||
264 | for i := 0; i < len(data)-2; i++ { | ||
265 | sum += uint16(data[i]) | ||
266 | } | ||
267 | if data[len(data)-2] != uint8(sum>>8) || | ||
268 | data[len(data)-1] != uint8(sum) { | ||
269 | return errors.StructuralError("private key checksum failure") | ||
270 | } | ||
271 | data = data[:len(data)-2] | ||
272 | } | ||
273 | |||
274 | return pk.parsePrivateKey(data) | ||
275 | } | ||
276 | |||
277 | func (pk *PrivateKey) parsePrivateKey(data []byte) (err error) { | ||
278 | switch pk.PublicKey.PubKeyAlgo { | ||
279 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoRSAEncryptOnly: | ||
280 | return pk.parseRSAPrivateKey(data) | ||
281 | case PubKeyAlgoDSA: | ||
282 | return pk.parseDSAPrivateKey(data) | ||
283 | case PubKeyAlgoElGamal: | ||
284 | return pk.parseElGamalPrivateKey(data) | ||
285 | case PubKeyAlgoECDSA: | ||
286 | return pk.parseECDSAPrivateKey(data) | ||
287 | } | ||
288 | panic("impossible") | ||
289 | } | ||
290 | |||
291 | func (pk *PrivateKey) parseRSAPrivateKey(data []byte) (err error) { | ||
292 | rsaPub := pk.PublicKey.PublicKey.(*rsa.PublicKey) | ||
293 | rsaPriv := new(rsa.PrivateKey) | ||
294 | rsaPriv.PublicKey = *rsaPub | ||
295 | |||
296 | buf := bytes.NewBuffer(data) | ||
297 | d, _, err := readMPI(buf) | ||
298 | if err != nil { | ||
299 | return | ||
300 | } | ||
301 | p, _, err := readMPI(buf) | ||
302 | if err != nil { | ||
303 | return | ||
304 | } | ||
305 | q, _, err := readMPI(buf) | ||
306 | if err != nil { | ||
307 | return | ||
308 | } | ||
309 | |||
310 | rsaPriv.D = new(big.Int).SetBytes(d) | ||
311 | rsaPriv.Primes = make([]*big.Int, 2) | ||
312 | rsaPriv.Primes[0] = new(big.Int).SetBytes(p) | ||
313 | rsaPriv.Primes[1] = new(big.Int).SetBytes(q) | ||
314 | if err := rsaPriv.Validate(); err != nil { | ||
315 | return err | ||
316 | } | ||
317 | rsaPriv.Precompute() | ||
318 | pk.PrivateKey = rsaPriv | ||
319 | pk.Encrypted = false | ||
320 | pk.encryptedData = nil | ||
321 | |||
322 | return nil | ||
323 | } | ||
324 | |||
325 | func (pk *PrivateKey) parseDSAPrivateKey(data []byte) (err error) { | ||
326 | dsaPub := pk.PublicKey.PublicKey.(*dsa.PublicKey) | ||
327 | dsaPriv := new(dsa.PrivateKey) | ||
328 | dsaPriv.PublicKey = *dsaPub | ||
329 | |||
330 | buf := bytes.NewBuffer(data) | ||
331 | x, _, err := readMPI(buf) | ||
332 | if err != nil { | ||
333 | return | ||
334 | } | ||
335 | |||
336 | dsaPriv.X = new(big.Int).SetBytes(x) | ||
337 | pk.PrivateKey = dsaPriv | ||
338 | pk.Encrypted = false | ||
339 | pk.encryptedData = nil | ||
340 | |||
341 | return nil | ||
342 | } | ||
343 | |||
344 | func (pk *PrivateKey) parseElGamalPrivateKey(data []byte) (err error) { | ||
345 | pub := pk.PublicKey.PublicKey.(*elgamal.PublicKey) | ||
346 | priv := new(elgamal.PrivateKey) | ||
347 | priv.PublicKey = *pub | ||
348 | |||
349 | buf := bytes.NewBuffer(data) | ||
350 | x, _, err := readMPI(buf) | ||
351 | if err != nil { | ||
352 | return | ||
353 | } | ||
354 | |||
355 | priv.X = new(big.Int).SetBytes(x) | ||
356 | pk.PrivateKey = priv | ||
357 | pk.Encrypted = false | ||
358 | pk.encryptedData = nil | ||
359 | |||
360 | return nil | ||
361 | } | ||
362 | |||
363 | func (pk *PrivateKey) parseECDSAPrivateKey(data []byte) (err error) { | ||
364 | ecdsaPub := pk.PublicKey.PublicKey.(*ecdsa.PublicKey) | ||
365 | |||
366 | buf := bytes.NewBuffer(data) | ||
367 | d, _, err := readMPI(buf) | ||
368 | if err != nil { | ||
369 | return | ||
370 | } | ||
371 | |||
372 | pk.PrivateKey = &ecdsa.PrivateKey{ | ||
373 | PublicKey: *ecdsaPub, | ||
374 | D: new(big.Int).SetBytes(d), | ||
375 | } | ||
376 | pk.Encrypted = false | ||
377 | pk.encryptedData = nil | ||
378 | |||
379 | return nil | ||
380 | } | ||
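Serialize above appends a two-octet checksum over the serialized secret-key material, and Decrypt checks the same sum when the SHA-1 flavour is not in use (mod64kHash, RFC 4880, section 5.5.3). A standalone sketch of that checksum; the helper name and sample bytes are illustrative:

```go
package main

import "fmt"

// simpleChecksum sums all octets into a uint16, wrapping modulo 65536,
// the same computation mod64kHash above performs.
func simpleChecksum(data []byte) uint16 {
	var sum uint16
	for _, b := range data {
		sum += uint16(b)
	}
	return sum
}

func main() {
	fmt.Printf("%04x\n", simpleChecksum([]byte{0x01, 0x02, 0xff})) // 0102
}
```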
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/public_key.go b/vendor/golang.org/x/crypto/openpgp/packet/public_key.go new file mode 100644 index 0000000..ead2623 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/public_key.go | |||
@@ -0,0 +1,748 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "bytes" | ||
9 | "crypto" | ||
10 | "crypto/dsa" | ||
11 | "crypto/ecdsa" | ||
12 | "crypto/elliptic" | ||
13 | "crypto/rsa" | ||
14 | "crypto/sha1" | ||
15 | _ "crypto/sha256" | ||
16 | _ "crypto/sha512" | ||
17 | "encoding/binary" | ||
18 | "fmt" | ||
19 | "hash" | ||
20 | "io" | ||
21 | "math/big" | ||
22 | "strconv" | ||
23 | "time" | ||
24 | |||
25 | "golang.org/x/crypto/openpgp/elgamal" | ||
26 | "golang.org/x/crypto/openpgp/errors" | ||
27 | ) | ||
28 | |||
29 | var ( | ||
30 | // NIST curve P-256 | ||
31 | oidCurveP256 []byte = []byte{0x2A, 0x86, 0x48, 0xCE, 0x3D, 0x03, 0x01, 0x07} | ||
32 | // NIST curve P-384 | ||
33 | oidCurveP384 []byte = []byte{0x2B, 0x81, 0x04, 0x00, 0x22} | ||
34 | // NIST curve P-521 | ||
35 | oidCurveP521 []byte = []byte{0x2B, 0x81, 0x04, 0x00, 0x23} | ||
36 | ) | ||
37 | |||
38 | const maxOIDLength = 8 | ||
39 | |||
41 | // ecdsaKey stores the algorithm-specific fields for ECDSA keys, | ||
41 | // as defined in RFC 6637, Section 9. | ||
42 | type ecdsaKey struct { | ||
43 | // oid contains the OID byte sequence identifying the elliptic curve used | ||
44 | oid []byte | ||
45 | // p contains the elliptic curve point that represents the public key | ||
46 | p parsedMPI | ||
47 | } | ||
48 | |||
49 | // parseOID reads the OID for the curve as defined in RFC 6637, Section 9. | ||
50 | func parseOID(r io.Reader) (oid []byte, err error) { | ||
51 | buf := make([]byte, maxOIDLength) | ||
52 | if _, err = readFull(r, buf[:1]); err != nil { | ||
53 | return | ||
54 | } | ||
55 | oidLen := buf[0] | ||
56 | if int(oidLen) > len(buf) { | ||
57 | err = errors.UnsupportedError("invalid oid length: " + strconv.Itoa(int(oidLen))) | ||
58 | return | ||
59 | } | ||
60 | oid = buf[:oidLen] | ||
61 | _, err = readFull(r, oid) | ||
62 | return | ||
63 | } | ||
64 | |||
65 | func (f *ecdsaKey) parse(r io.Reader) (err error) { | ||
66 | if f.oid, err = parseOID(r); err != nil { | ||
67 | return err | ||
68 | } | ||
69 | f.p.bytes, f.p.bitLength, err = readMPI(r) | ||
70 | return | ||
71 | } | ||
72 | |||
73 | func (f *ecdsaKey) serialize(w io.Writer) (err error) { | ||
74 | buf := make([]byte, maxOIDLength+1) | ||
75 | buf[0] = byte(len(f.oid)) | ||
76 | copy(buf[1:], f.oid) | ||
77 | if _, err = w.Write(buf[:len(f.oid)+1]); err != nil { | ||
78 | return | ||
79 | } | ||
80 | return writeMPIs(w, f.p) | ||
81 | } | ||
82 | |||
83 | func (f *ecdsaKey) newECDSA() (*ecdsa.PublicKey, error) { | ||
84 | var c elliptic.Curve | ||
85 | if bytes.Equal(f.oid, oidCurveP256) { | ||
86 | c = elliptic.P256() | ||
87 | } else if bytes.Equal(f.oid, oidCurveP384) { | ||
88 | c = elliptic.P384() | ||
89 | } else if bytes.Equal(f.oid, oidCurveP521) { | ||
90 | c = elliptic.P521() | ||
91 | } else { | ||
92 | return nil, errors.UnsupportedError(fmt.Sprintf("unsupported oid: %x", f.oid)) | ||
93 | } | ||
94 | x, y := elliptic.Unmarshal(c, f.p.bytes) | ||
95 | if x == nil { | ||
96 | return nil, errors.UnsupportedError("failed to parse EC point") | ||
97 | } | ||
98 | return &ecdsa.PublicKey{Curve: c, X: x, Y: y}, nil | ||
99 | } | ||
100 | |||
101 | func (f *ecdsaKey) byteLen() int { | ||
102 | return 1 + len(f.oid) + 2 + len(f.p.bytes) | ||
103 | } | ||
104 | |||
105 | type kdfHashFunction byte | ||
106 | type kdfAlgorithm byte | ||
107 | |||
108 | // ecdhKdf stores key derivation function parameters | ||
109 | // used for ECDH encryption. See RFC 6637, Section 9. | ||
110 | type ecdhKdf struct { | ||
111 | KdfHash kdfHashFunction | ||
112 | KdfAlgo kdfAlgorithm | ||
113 | } | ||
114 | |||
115 | func (f *ecdhKdf) parse(r io.Reader) (err error) { | ||
116 | buf := make([]byte, 1) | ||
117 | if _, err = readFull(r, buf); err != nil { | ||
118 | return | ||
119 | } | ||
120 | kdfLen := int(buf[0]) | ||
121 | if kdfLen < 3 { | ||
122 | return errors.UnsupportedError("Unsupported ECDH KDF length: " + strconv.Itoa(kdfLen)) | ||
123 | } | ||
124 | buf = make([]byte, kdfLen) | ||
125 | if _, err = readFull(r, buf); err != nil { | ||
126 | return | ||
127 | } | ||
128 | reserved := int(buf[0]) | ||
129 | f.KdfHash = kdfHashFunction(buf[1]) | ||
130 | f.KdfAlgo = kdfAlgorithm(buf[2]) | ||
131 | if reserved != 0x01 { | ||
132 | return errors.UnsupportedError("Unsupported KDF reserved field: " + strconv.Itoa(reserved)) | ||
133 | } | ||
134 | return | ||
135 | } | ||
136 | |||
137 | func (f *ecdhKdf) serialize(w io.Writer) (err error) { | ||
138 | buf := make([]byte, 4) | ||
139 | // See RFC 6637, Section 9, Algorithm-Specific Fields for ECDH keys. | ||
140 | buf[0] = byte(0x03) // Length of the following fields | ||
141 | buf[1] = byte(0x01) // Reserved for future extensions, must be 1 for now | ||
142 | buf[2] = byte(f.KdfHash) | ||
143 | buf[3] = byte(f.KdfAlgo) | ||
144 | _, err = w.Write(buf[:]) | ||
145 | return | ||
146 | } | ||
147 | |||
148 | func (f *ecdhKdf) byteLen() int { | ||
149 | return 4 | ||
150 | } | ||
151 | |||
152 | // PublicKey represents an OpenPGP public key. See RFC 4880, section 5.5.2. | ||
153 | type PublicKey struct { | ||
154 | CreationTime time.Time | ||
155 | PubKeyAlgo PublicKeyAlgorithm | ||
156 | PublicKey interface{} // *rsa.PublicKey, *dsa.PublicKey or *ecdsa.PublicKey | ||
157 | Fingerprint [20]byte | ||
158 | KeyId uint64 | ||
159 | IsSubkey bool | ||
160 | |||
161 | n, e, p, q, g, y parsedMPI | ||
162 | |||
163 | // RFC 6637 fields | ||
164 | ec *ecdsaKey | ||
165 | ecdh *ecdhKdf | ||
166 | } | ||
167 | |||
168 | // signingKey provides a convenient abstraction over signature verification | ||
169 | // for v3 and v4 public keys. | ||
170 | type signingKey interface { | ||
171 | SerializeSignaturePrefix(io.Writer) | ||
172 | serializeWithoutHeaders(io.Writer) error | ||
173 | } | ||
174 | |||
175 | func fromBig(n *big.Int) parsedMPI { | ||
176 | return parsedMPI{ | ||
177 | bytes: n.Bytes(), | ||
178 | bitLength: uint16(n.BitLen()), | ||
179 | } | ||
180 | } | ||
181 | |||
182 | // NewRSAPublicKey returns a PublicKey that wraps the given rsa.PublicKey. | ||
183 | func NewRSAPublicKey(creationTime time.Time, pub *rsa.PublicKey) *PublicKey { | ||
184 | pk := &PublicKey{ | ||
185 | CreationTime: creationTime, | ||
186 | PubKeyAlgo: PubKeyAlgoRSA, | ||
187 | PublicKey: pub, | ||
188 | n: fromBig(pub.N), | ||
189 | e: fromBig(big.NewInt(int64(pub.E))), | ||
190 | } | ||
191 | |||
192 | pk.setFingerPrintAndKeyId() | ||
193 | return pk | ||
194 | } | ||
195 | |||
196 | // NewDSAPublicKey returns a PublicKey that wraps the given dsa.PublicKey. | ||
197 | func NewDSAPublicKey(creationTime time.Time, pub *dsa.PublicKey) *PublicKey { | ||
198 | pk := &PublicKey{ | ||
199 | CreationTime: creationTime, | ||
200 | PubKeyAlgo: PubKeyAlgoDSA, | ||
201 | PublicKey: pub, | ||
202 | p: fromBig(pub.P), | ||
203 | q: fromBig(pub.Q), | ||
204 | g: fromBig(pub.G), | ||
205 | y: fromBig(pub.Y), | ||
206 | } | ||
207 | |||
208 | pk.setFingerPrintAndKeyId() | ||
209 | return pk | ||
210 | } | ||
211 | |||
212 | // NewElGamalPublicKey returns a PublicKey that wraps the given elgamal.PublicKey. | ||
213 | func NewElGamalPublicKey(creationTime time.Time, pub *elgamal.PublicKey) *PublicKey { | ||
214 | pk := &PublicKey{ | ||
215 | CreationTime: creationTime, | ||
216 | PubKeyAlgo: PubKeyAlgoElGamal, | ||
217 | PublicKey: pub, | ||
218 | p: fromBig(pub.P), | ||
219 | g: fromBig(pub.G), | ||
220 | y: fromBig(pub.Y), | ||
221 | } | ||
222 | |||
223 | pk.setFingerPrintAndKeyId() | ||
224 | return pk | ||
225 | } | ||
226 | |||
227 | func NewECDSAPublicKey(creationTime time.Time, pub *ecdsa.PublicKey) *PublicKey { | ||
228 | pk := &PublicKey{ | ||
229 | CreationTime: creationTime, | ||
230 | PubKeyAlgo: PubKeyAlgoECDSA, | ||
231 | PublicKey: pub, | ||
232 | ec: new(ecdsaKey), | ||
233 | } | ||
234 | |||
235 | switch pub.Curve { | ||
236 | case elliptic.P256(): | ||
237 | pk.ec.oid = oidCurveP256 | ||
238 | case elliptic.P384(): | ||
239 | pk.ec.oid = oidCurveP384 | ||
240 | case elliptic.P521(): | ||
241 | pk.ec.oid = oidCurveP521 | ||
242 | default: | ||
243 | panic("unknown elliptic curve") | ||
244 | } | ||
245 | |||
246 | pk.ec.p.bytes = elliptic.Marshal(pub.Curve, pub.X, pub.Y) | ||
247 | pk.ec.p.bitLength = uint16(8 * len(pk.ec.p.bytes)) | ||
248 | |||
249 | pk.setFingerPrintAndKeyId() | ||
250 | return pk | ||
251 | } | ||
252 | |||
253 | func (pk *PublicKey) parse(r io.Reader) (err error) { | ||
254 | // RFC 4880, section 5.5.2 | ||
255 | var buf [6]byte | ||
256 | _, err = readFull(r, buf[:]) | ||
257 | if err != nil { | ||
258 | return | ||
259 | } | ||
260 | if buf[0] != 4 { | ||
261 | return errors.UnsupportedError("public key version") | ||
262 | } | ||
263 | pk.CreationTime = time.Unix(int64(uint32(buf[1])<<24|uint32(buf[2])<<16|uint32(buf[3])<<8|uint32(buf[4])), 0) | ||
264 | pk.PubKeyAlgo = PublicKeyAlgorithm(buf[5]) | ||
265 | switch pk.PubKeyAlgo { | ||
266 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: | ||
267 | err = pk.parseRSA(r) | ||
268 | case PubKeyAlgoDSA: | ||
269 | err = pk.parseDSA(r) | ||
270 | case PubKeyAlgoElGamal: | ||
271 | err = pk.parseElGamal(r) | ||
272 | case PubKeyAlgoECDSA: | ||
273 | pk.ec = new(ecdsaKey) | ||
274 | if err = pk.ec.parse(r); err != nil { | ||
275 | return err | ||
276 | } | ||
277 | pk.PublicKey, err = pk.ec.newECDSA() | ||
278 | case PubKeyAlgoECDH: | ||
279 | pk.ec = new(ecdsaKey) | ||
280 | if err = pk.ec.parse(r); err != nil { | ||
281 | return | ||
282 | } | ||
283 | pk.ecdh = new(ecdhKdf) | ||
284 | if err = pk.ecdh.parse(r); err != nil { | ||
285 | return | ||
286 | } | ||
287 | // The ECDH key is stored in an ecdsa.PublicKey for convenience. | ||
288 | pk.PublicKey, err = pk.ec.newECDSA() | ||
289 | default: | ||
290 | err = errors.UnsupportedError("public key type: " + strconv.Itoa(int(pk.PubKeyAlgo))) | ||
291 | } | ||
292 | if err != nil { | ||
293 | return | ||
294 | } | ||
295 | |||
296 | pk.setFingerPrintAndKeyId() | ||
297 | return | ||
298 | } | ||
299 | |||
300 | func (pk *PublicKey) setFingerPrintAndKeyId() { | ||
301 | // RFC 4880, section 12.2 | ||
302 | fingerPrint := sha1.New() | ||
303 | pk.SerializeSignaturePrefix(fingerPrint) | ||
304 | pk.serializeWithoutHeaders(fingerPrint) | ||
305 | copy(pk.Fingerprint[:], fingerPrint.Sum(nil)) | ||
306 | pk.KeyId = binary.BigEndian.Uint64(pk.Fingerprint[12:20]) | ||
307 | } | ||
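setFingerPrintAndKeyId above hashes the signature prefix (0x99 plus a two-octet length) and the serialized key body with SHA-1, then takes the key ID from the last eight fingerprint bytes (RFC 4880, section 12.2). A self-contained sketch of the same construction, assuming the key body has already been serialized; the dummy body in main is a placeholder, not a valid key:

```go
package main

import (
	"crypto/sha1"
	"encoding/binary"
	"fmt"
)

// v4Fingerprint computes a v4 fingerprint and key ID from a public-key
// packet body (the packet contents without its header): SHA-1 over 0x99,
// a two-octet body length, and the body itself.
func v4Fingerprint(body []byte) (fp [20]byte, keyID uint64) {
	h := sha1.New()
	h.Write([]byte{0x99, byte(len(body) >> 8), byte(len(body))})
	h.Write(body)
	copy(fp[:], h.Sum(nil))
	keyID = binary.BigEndian.Uint64(fp[12:20])
	return
}

func main() {
	fp, id := v4Fingerprint([]byte{0x04, 0x00, 0x00, 0x00, 0x00, 0x01}) // dummy bytes
	fmt.Printf("fingerprint %x\nkey id %016x\n", fp, id)
}
```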
308 | |||
309 | // parseRSA parses RSA public key material from the given Reader. See RFC 4880, | ||
310 | // section 5.5.2. | ||
311 | func (pk *PublicKey) parseRSA(r io.Reader) (err error) { | ||
312 | pk.n.bytes, pk.n.bitLength, err = readMPI(r) | ||
313 | if err != nil { | ||
314 | return | ||
315 | } | ||
316 | pk.e.bytes, pk.e.bitLength, err = readMPI(r) | ||
317 | if err != nil { | ||
318 | return | ||
319 | } | ||
320 | |||
321 | if len(pk.e.bytes) > 3 { | ||
322 | err = errors.UnsupportedError("large public exponent") | ||
323 | return | ||
324 | } | ||
325 | rsa := &rsa.PublicKey{ | ||
326 | N: new(big.Int).SetBytes(pk.n.bytes), | ||
327 | E: 0, | ||
328 | } | ||
329 | for i := 0; i < len(pk.e.bytes); i++ { | ||
330 | rsa.E <<= 8 | ||
331 | rsa.E |= int(pk.e.bytes[i]) | ||
332 | } | ||
333 | pk.PublicKey = rsa | ||
334 | return | ||
335 | } | ||
336 | |||
337 | // parseDSA parses DSA public key material from the given Reader. See RFC 4880, | ||
338 | // section 5.5.2. | ||
339 | func (pk *PublicKey) parseDSA(r io.Reader) (err error) { | ||
340 | pk.p.bytes, pk.p.bitLength, err = readMPI(r) | ||
341 | if err != nil { | ||
342 | return | ||
343 | } | ||
344 | pk.q.bytes, pk.q.bitLength, err = readMPI(r) | ||
345 | if err != nil { | ||
346 | return | ||
347 | } | ||
348 | pk.g.bytes, pk.g.bitLength, err = readMPI(r) | ||
349 | if err != nil { | ||
350 | return | ||
351 | } | ||
352 | pk.y.bytes, pk.y.bitLength, err = readMPI(r) | ||
353 | if err != nil { | ||
354 | return | ||
355 | } | ||
356 | |||
357 | dsa := new(dsa.PublicKey) | ||
358 | dsa.P = new(big.Int).SetBytes(pk.p.bytes) | ||
359 | dsa.Q = new(big.Int).SetBytes(pk.q.bytes) | ||
360 | dsa.G = new(big.Int).SetBytes(pk.g.bytes) | ||
361 | dsa.Y = new(big.Int).SetBytes(pk.y.bytes) | ||
362 | pk.PublicKey = dsa | ||
363 | return | ||
364 | } | ||
365 | |||
366 | // parseElGamal parses ElGamal public key material from the given Reader. See | ||
367 | // RFC 4880, section 5.5.2. | ||
368 | func (pk *PublicKey) parseElGamal(r io.Reader) (err error) { | ||
369 | pk.p.bytes, pk.p.bitLength, err = readMPI(r) | ||
370 | if err != nil { | ||
371 | return | ||
372 | } | ||
373 | pk.g.bytes, pk.g.bitLength, err = readMPI(r) | ||
374 | if err != nil { | ||
375 | return | ||
376 | } | ||
377 | pk.y.bytes, pk.y.bitLength, err = readMPI(r) | ||
378 | if err != nil { | ||
379 | return | ||
380 | } | ||
381 | |||
382 | elgamal := new(elgamal.PublicKey) | ||
383 | elgamal.P = new(big.Int).SetBytes(pk.p.bytes) | ||
384 | elgamal.G = new(big.Int).SetBytes(pk.g.bytes) | ||
385 | elgamal.Y = new(big.Int).SetBytes(pk.y.bytes) | ||
386 | pk.PublicKey = elgamal | ||
387 | return | ||
388 | } | ||
389 | |||
390 | // SerializeSignaturePrefix writes the prefix for this public key to the given Writer. | ||
391 | // The prefix is used when calculating a signature over this public key. See | ||
392 | // RFC 4880, section 5.2.4. | ||
393 | func (pk *PublicKey) SerializeSignaturePrefix(h io.Writer) { | ||
394 | var pLength uint16 | ||
395 | switch pk.PubKeyAlgo { | ||
396 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: | ||
397 | pLength += 2 + uint16(len(pk.n.bytes)) | ||
398 | pLength += 2 + uint16(len(pk.e.bytes)) | ||
399 | case PubKeyAlgoDSA: | ||
400 | pLength += 2 + uint16(len(pk.p.bytes)) | ||
401 | pLength += 2 + uint16(len(pk.q.bytes)) | ||
402 | pLength += 2 + uint16(len(pk.g.bytes)) | ||
403 | pLength += 2 + uint16(len(pk.y.bytes)) | ||
404 | case PubKeyAlgoElGamal: | ||
405 | pLength += 2 + uint16(len(pk.p.bytes)) | ||
406 | pLength += 2 + uint16(len(pk.g.bytes)) | ||
407 | pLength += 2 + uint16(len(pk.y.bytes)) | ||
408 | case PubKeyAlgoECDSA: | ||
409 | pLength += uint16(pk.ec.byteLen()) | ||
410 | case PubKeyAlgoECDH: | ||
411 | pLength += uint16(pk.ec.byteLen()) | ||
412 | pLength += uint16(pk.ecdh.byteLen()) | ||
413 | default: | ||
414 | panic("unknown public key algorithm") | ||
415 | } | ||
416 | pLength += 6 | ||
417 | h.Write([]byte{0x99, byte(pLength >> 8), byte(pLength)}) | ||
418 | return | ||
419 | } | ||
420 | |||
421 | func (pk *PublicKey) Serialize(w io.Writer) (err error) { | ||
422 | length := 6 // 6 byte header | ||
423 | |||
424 | switch pk.PubKeyAlgo { | ||
425 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: | ||
426 | length += 2 + len(pk.n.bytes) | ||
427 | length += 2 + len(pk.e.bytes) | ||
428 | case PubKeyAlgoDSA: | ||
429 | length += 2 + len(pk.p.bytes) | ||
430 | length += 2 + len(pk.q.bytes) | ||
431 | length += 2 + len(pk.g.bytes) | ||
432 | length += 2 + len(pk.y.bytes) | ||
433 | case PubKeyAlgoElGamal: | ||
434 | length += 2 + len(pk.p.bytes) | ||
435 | length += 2 + len(pk.g.bytes) | ||
436 | length += 2 + len(pk.y.bytes) | ||
437 | case PubKeyAlgoECDSA: | ||
438 | length += pk.ec.byteLen() | ||
439 | case PubKeyAlgoECDH: | ||
440 | length += pk.ec.byteLen() | ||
441 | length += pk.ecdh.byteLen() | ||
442 | default: | ||
443 | panic("unknown public key algorithm") | ||
444 | } | ||
445 | |||
446 | packetType := packetTypePublicKey | ||
447 | if pk.IsSubkey { | ||
448 | packetType = packetTypePublicSubkey | ||
449 | } | ||
450 | err = serializeHeader(w, packetType, length) | ||
451 | if err != nil { | ||
452 | return | ||
453 | } | ||
454 | return pk.serializeWithoutHeaders(w) | ||
455 | } | ||
456 | |||
457 | // serializeWithoutHeaders marshals the PublicKey to w in the form of an | ||
458 | // OpenPGP public key packet, not including the packet header. | ||
459 | func (pk *PublicKey) serializeWithoutHeaders(w io.Writer) (err error) { | ||
460 | var buf [6]byte | ||
461 | buf[0] = 4 | ||
462 | t := uint32(pk.CreationTime.Unix()) | ||
463 | buf[1] = byte(t >> 24) | ||
464 | buf[2] = byte(t >> 16) | ||
465 | buf[3] = byte(t >> 8) | ||
466 | buf[4] = byte(t) | ||
467 | buf[5] = byte(pk.PubKeyAlgo) | ||
468 | |||
469 | _, err = w.Write(buf[:]) | ||
470 | if err != nil { | ||
471 | return | ||
472 | } | ||
473 | |||
474 | switch pk.PubKeyAlgo { | ||
475 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: | ||
476 | return writeMPIs(w, pk.n, pk.e) | ||
477 | case PubKeyAlgoDSA: | ||
478 | return writeMPIs(w, pk.p, pk.q, pk.g, pk.y) | ||
479 | case PubKeyAlgoElGamal: | ||
480 | return writeMPIs(w, pk.p, pk.g, pk.y) | ||
481 | case PubKeyAlgoECDSA: | ||
482 | return pk.ec.serialize(w) | ||
483 | case PubKeyAlgoECDH: | ||
484 | if err = pk.ec.serialize(w); err != nil { | ||
485 | return | ||
486 | } | ||
487 | return pk.ecdh.serialize(w) | ||
488 | } | ||
489 | return errors.InvalidArgumentError("bad public-key algorithm") | ||
490 | } | ||
491 | |||
492 | // CanSign returns true iff this public key can generate signatures | ||
493 | func (pk *PublicKey) CanSign() bool { | ||
494 | return pk.PubKeyAlgo != PubKeyAlgoRSAEncryptOnly && pk.PubKeyAlgo != PubKeyAlgoElGamal | ||
495 | } | ||
496 | |||
497 | // VerifySignature returns nil iff sig is a valid signature, made by this | ||
498 | // public key, of the data hashed into signed. signed is mutated by this call. | ||
499 | func (pk *PublicKey) VerifySignature(signed hash.Hash, sig *Signature) (err error) { | ||
500 | if !pk.CanSign() { | ||
501 | return errors.InvalidArgumentError("public key cannot generate signatures") | ||
502 | } | ||
503 | |||
504 | signed.Write(sig.HashSuffix) | ||
505 | hashBytes := signed.Sum(nil) | ||
506 | |||
507 | if hashBytes[0] != sig.HashTag[0] || hashBytes[1] != sig.HashTag[1] { | ||
508 | return errors.SignatureError("hash tag doesn't match") | ||
509 | } | ||
510 | |||
511 | if pk.PubKeyAlgo != sig.PubKeyAlgo { | ||
512 | return errors.InvalidArgumentError("public key and signature use different algorithms") | ||
513 | } | ||
514 | |||
515 | switch pk.PubKeyAlgo { | ||
516 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: | ||
517 | rsaPublicKey, _ := pk.PublicKey.(*rsa.PublicKey) | ||
518 | err = rsa.VerifyPKCS1v15(rsaPublicKey, sig.Hash, hashBytes, sig.RSASignature.bytes) | ||
519 | if err != nil { | ||
520 | return errors.SignatureError("RSA verification failure") | ||
521 | } | ||
522 | return nil | ||
523 | case PubKeyAlgoDSA: | ||
524 | dsaPublicKey, _ := pk.PublicKey.(*dsa.PublicKey) | ||
525 | // Need to truncate hashBytes to match FIPS 186-3 section 4.6. | ||
526 | subgroupSize := (dsaPublicKey.Q.BitLen() + 7) / 8 | ||
527 | if len(hashBytes) > subgroupSize { | ||
528 | hashBytes = hashBytes[:subgroupSize] | ||
529 | } | ||
530 | if !dsa.Verify(dsaPublicKey, hashBytes, new(big.Int).SetBytes(sig.DSASigR.bytes), new(big.Int).SetBytes(sig.DSASigS.bytes)) { | ||
531 | return errors.SignatureError("DSA verification failure") | ||
532 | } | ||
533 | return nil | ||
534 | case PubKeyAlgoECDSA: | ||
535 | ecdsaPublicKey := pk.PublicKey.(*ecdsa.PublicKey) | ||
536 | if !ecdsa.Verify(ecdsaPublicKey, hashBytes, new(big.Int).SetBytes(sig.ECDSASigR.bytes), new(big.Int).SetBytes(sig.ECDSASigS.bytes)) { | ||
537 | return errors.SignatureError("ECDSA verification failure") | ||
538 | } | ||
539 | return nil | ||
540 | default: | ||
541 | return errors.SignatureError("Unsupported public key algorithm used in signature") | ||
542 | } | ||
543 | } | ||
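The DSA branch of VerifySignature above truncates the digest to the byte size of the subgroup order Q before calling dsa.Verify, per FIPS 186-3, section 4.6. A tiny sketch of that adjustment; the helper name and the 160-bit Q used in main are assumptions for illustration:

```go
package main

import "fmt"

// truncateForDSA shortens a digest to the subgroup size in bytes, leaving
// shorter digests untouched, matching the check in VerifySignature above.
func truncateForDSA(digest []byte, qBitLen int) []byte {
	subgroupSize := (qBitLen + 7) / 8
	if len(digest) > subgroupSize {
		return digest[:subgroupSize]
	}
	return digest
}

func main() {
	sha256Digest := make([]byte, 32)
	fmt.Println(len(truncateForDSA(sha256Digest, 160))) // 20
}
```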
544 | |||
545 | // VerifySignatureV3 returns nil iff sig is a valid signature, made by this | ||
546 | // public key, of the data hashed into signed. signed is mutated by this call. | ||
547 | func (pk *PublicKey) VerifySignatureV3(signed hash.Hash, sig *SignatureV3) (err error) { | ||
548 | if !pk.CanSign() { | ||
549 | return errors.InvalidArgumentError("public key cannot generate signatures") | ||
550 | } | ||
551 | |||
552 | suffix := make([]byte, 5) | ||
553 | suffix[0] = byte(sig.SigType) | ||
554 | binary.BigEndian.PutUint32(suffix[1:], uint32(sig.CreationTime.Unix())) | ||
555 | signed.Write(suffix) | ||
556 | hashBytes := signed.Sum(nil) | ||
557 | |||
558 | if hashBytes[0] != sig.HashTag[0] || hashBytes[1] != sig.HashTag[1] { | ||
559 | return errors.SignatureError("hash tag doesn't match") | ||
560 | } | ||
561 | |||
562 | if pk.PubKeyAlgo != sig.PubKeyAlgo { | ||
563 | return errors.InvalidArgumentError("public key and signature use different algorithms") | ||
564 | } | ||
565 | |||
566 | switch pk.PubKeyAlgo { | ||
567 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: | ||
568 | rsaPublicKey := pk.PublicKey.(*rsa.PublicKey) | ||
569 | if err = rsa.VerifyPKCS1v15(rsaPublicKey, sig.Hash, hashBytes, sig.RSASignature.bytes); err != nil { | ||
570 | return errors.SignatureError("RSA verification failure") | ||
571 | } | ||
572 | return | ||
573 | case PubKeyAlgoDSA: | ||
574 | dsaPublicKey := pk.PublicKey.(*dsa.PublicKey) | ||
575 | // Need to truncate hashBytes to match FIPS 186-3 section 4.6. | ||
576 | subgroupSize := (dsaPublicKey.Q.BitLen() + 7) / 8 | ||
577 | if len(hashBytes) > subgroupSize { | ||
578 | hashBytes = hashBytes[:subgroupSize] | ||
579 | } | ||
580 | if !dsa.Verify(dsaPublicKey, hashBytes, new(big.Int).SetBytes(sig.DSASigR.bytes), new(big.Int).SetBytes(sig.DSASigS.bytes)) { | ||
581 | return errors.SignatureError("DSA verification failure") | ||
582 | } | ||
583 | return nil | ||
584 | default: | ||
585 | panic("shouldn't happen") | ||
586 | } | ||
587 | } | ||
588 | |||
589 | // keySignatureHash returns a Hash of the message that needs to be signed for | ||
590 | // pk to assert a subkey relationship to signed. | ||
591 | func keySignatureHash(pk, signed signingKey, hashFunc crypto.Hash) (h hash.Hash, err error) { | ||
592 | if !hashFunc.Available() { | ||
593 | return nil, errors.UnsupportedError("hash function") | ||
594 | } | ||
595 | h = hashFunc.New() | ||
596 | |||
597 | // RFC 4880, section 5.2.4 | ||
598 | pk.SerializeSignaturePrefix(h) | ||
599 | pk.serializeWithoutHeaders(h) | ||
600 | signed.SerializeSignaturePrefix(h) | ||
601 | signed.serializeWithoutHeaders(h) | ||
602 | return | ||
603 | } | ||
604 | |||
605 | // VerifyKeySignature returns nil iff sig is a valid signature, made by this | ||
606 | // public key, of signed. | ||
607 | func (pk *PublicKey) VerifyKeySignature(signed *PublicKey, sig *Signature) error { | ||
608 | h, err := keySignatureHash(pk, signed, sig.Hash) | ||
609 | if err != nil { | ||
610 | return err | ||
611 | } | ||
612 | if err = pk.VerifySignature(h, sig); err != nil { | ||
613 | return err | ||
614 | } | ||
615 | |||
616 | if sig.FlagSign { | ||
617 | // Signing subkeys must be cross-signed. See | ||
618 | // https://www.gnupg.org/faq/subkey-cross-certify.html. | ||
619 | if sig.EmbeddedSignature == nil { | ||
620 | return errors.StructuralError("signing subkey is missing cross-signature") | ||
621 | } | ||
622 | // Verify the cross-signature. This is calculated over the same | ||
623 | // data as the main signature, so we cannot just recursively | ||
624 | // call signed.VerifyKeySignature(...) | ||
625 | if h, err = keySignatureHash(pk, signed, sig.EmbeddedSignature.Hash); err != nil { | ||
626 | return errors.StructuralError("error while hashing for cross-signature: " + err.Error()) | ||
627 | } | ||
628 | if err := signed.VerifySignature(h, sig.EmbeddedSignature); err != nil { | ||
629 | return errors.StructuralError("error while verifying cross-signature: " + err.Error()) | ||
630 | } | ||
631 | } | ||
632 | |||
633 | return nil | ||
634 | } | ||
635 | |||
636 | func keyRevocationHash(pk signingKey, hashFunc crypto.Hash) (h hash.Hash, err error) { | ||
637 | if !hashFunc.Available() { | ||
638 | return nil, errors.UnsupportedError("hash function") | ||
639 | } | ||
640 | h = hashFunc.New() | ||
641 | |||
642 | // RFC 4880, section 5.2.4 | ||
643 | pk.SerializeSignaturePrefix(h) | ||
644 | pk.serializeWithoutHeaders(h) | ||
645 | |||
646 | return | ||
647 | } | ||
648 | |||
649 | // VerifyRevocationSignature returns nil iff sig is a valid signature, made by this | ||
650 | // public key. | ||
651 | func (pk *PublicKey) VerifyRevocationSignature(sig *Signature) (err error) { | ||
652 | h, err := keyRevocationHash(pk, sig.Hash) | ||
653 | if err != nil { | ||
654 | return err | ||
655 | } | ||
656 | return pk.VerifySignature(h, sig) | ||
657 | } | ||
658 | |||
659 | // userIdSignatureHash returns a Hash of the message that needs to be signed | ||
660 | // to assert that pk is a valid key for id. | ||
661 | func userIdSignatureHash(id string, pk *PublicKey, hashFunc crypto.Hash) (h hash.Hash, err error) { | ||
662 | if !hashFunc.Available() { | ||
663 | return nil, errors.UnsupportedError("hash function") | ||
664 | } | ||
665 | h = hashFunc.New() | ||
666 | |||
667 | // RFC 4880, section 5.2.4 | ||
668 | pk.SerializeSignaturePrefix(h) | ||
669 | pk.serializeWithoutHeaders(h) | ||
670 | |||
671 | var buf [5]byte | ||
672 | buf[0] = 0xb4 | ||
673 | buf[1] = byte(len(id) >> 24) | ||
674 | buf[2] = byte(len(id) >> 16) | ||
675 | buf[3] = byte(len(id) >> 8) | ||
676 | buf[4] = byte(len(id)) | ||
677 | h.Write(buf[:]) | ||
678 | h.Write([]byte(id)) | ||
679 | |||
680 | return | ||
681 | } | ||
682 | |||
683 | // VerifyUserIdSignature returns nil iff sig is a valid signature, made by this | ||
684 | // public key, that id is the identity of pub. | ||
685 | func (pk *PublicKey) VerifyUserIdSignature(id string, pub *PublicKey, sig *Signature) (err error) { | ||
686 | h, err := userIdSignatureHash(id, pub, sig.Hash) | ||
687 | if err != nil { | ||
688 | return err | ||
689 | } | ||
690 | return pk.VerifySignature(h, sig) | ||
691 | } | ||
692 | |||
693 | // VerifyUserIdSignatureV3 returns nil iff sig is a valid signature, made by this | ||
694 | // public key, that id is the identity of pub. | ||
695 | func (pk *PublicKey) VerifyUserIdSignatureV3(id string, pub *PublicKey, sig *SignatureV3) (err error) { | ||
696 | h, err := userIdSignatureV3Hash(id, pub, sig.Hash) | ||
697 | if err != nil { | ||
698 | return err | ||
699 | } | ||
700 | return pk.VerifySignatureV3(h, sig) | ||
701 | } | ||
702 | |||
703 | // KeyIdString returns the public key's fingerprint in capital hex | ||
704 | // (e.g. "6C7EE1B8621CC013"). | ||
705 | func (pk *PublicKey) KeyIdString() string { | ||
706 | return fmt.Sprintf("%X", pk.Fingerprint[12:20]) | ||
707 | } | ||
708 | |||
709 | // KeyIdShortString returns the short form of public key's fingerprint | ||
710 | // in capital hex, as shown by gpg --list-keys (e.g. "621CC013"). | ||
711 | func (pk *PublicKey) KeyIdShortString() string { | ||
712 | return fmt.Sprintf("%X", pk.Fingerprint[16:20]) | ||
713 | } | ||
714 | |||
715 | // A parsedMPI is used to store the contents of a big integer, along with the | ||
716 | // bit length that was specified in the original input. This allows the MPI to | ||
717 | // be reserialized exactly. | ||
718 | type parsedMPI struct { | ||
719 | bytes []byte | ||
720 | bitLength uint16 | ||
721 | } | ||
722 | |||
723 | // writeMPIs is a utility function for serializing several big integers to the | ||
724 | // given Writer. | ||
725 | func writeMPIs(w io.Writer, mpis ...parsedMPI) (err error) { | ||
726 | for _, mpi := range mpis { | ||
727 | err = writeMPI(w, mpi.bitLength, mpi.bytes) | ||
728 | if err != nil { | ||
729 | return | ||
730 | } | ||
731 | } | ||
732 | return | ||
733 | } | ||
734 | |||
735 | // BitLength returns the bit length for the given public key. | ||
736 | func (pk *PublicKey) BitLength() (bitLength uint16, err error) { | ||
737 | switch pk.PubKeyAlgo { | ||
738 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: | ||
739 | bitLength = pk.n.bitLength | ||
740 | case PubKeyAlgoDSA: | ||
741 | bitLength = pk.p.bitLength | ||
742 | case PubKeyAlgoElGamal: | ||
743 | bitLength = pk.p.bitLength | ||
744 | default: | ||
745 | err = errors.InvalidArgumentError("bad public-key algorithm") | ||
746 | } | ||
747 | return | ||
748 | } | ||
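The verification and key-ID helpers above are usually reached through the higher-level openpgp package rather than called directly. Below is a minimal sketch of listing key IDs from an armored keyring; the file name pubkey.asc and the overall flow are illustrative assumptions, not part of the vendored code.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/openpgp"
)

func main() {
	// "pubkey.asc" is a hypothetical ASCII-armored public keyring.
	f, err := os.Open("pubkey.asc")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	entities, err := openpgp.ReadArmoredKeyRing(f)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entities {
		pk := e.PrimaryKey // *packet.PublicKey, as defined above
		fmt.Println("long key ID: ", pk.KeyIdString())      // e.g. "6C7EE1B8621CC013"
		fmt.Println("short key ID:", pk.KeyIdShortString()) // e.g. "621CC013"
		if bits, err := pk.BitLength(); err == nil {
			fmt.Println("bit length:  ", bits)
		}
	}
}
```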
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/public_key_v3.go b/vendor/golang.org/x/crypto/openpgp/packet/public_key_v3.go new file mode 100644 index 0000000..5daf7b6 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/public_key_v3.go | |||
@@ -0,0 +1,279 @@ | |||
1 | // Copyright 2013 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "crypto" | ||
9 | "crypto/md5" | ||
10 | "crypto/rsa" | ||
11 | "encoding/binary" | ||
12 | "fmt" | ||
13 | "hash" | ||
14 | "io" | ||
15 | "math/big" | ||
16 | "strconv" | ||
17 | "time" | ||
18 | |||
19 | "golang.org/x/crypto/openpgp/errors" | ||
20 | ) | ||
21 | |||
22 | // PublicKeyV3 represents older, version 3 public keys. These keys are less secure and | ||
23 | // should not be used for signing or encrypting. They are supported here only for | ||
24 | // parsing version 3 key material and validating signatures. | ||
25 | // See RFC 4880, section 5.5.2. | ||
26 | type PublicKeyV3 struct { | ||
27 | CreationTime time.Time | ||
28 | DaysToExpire uint16 | ||
29 | PubKeyAlgo PublicKeyAlgorithm | ||
30 | PublicKey *rsa.PublicKey | ||
31 | Fingerprint [16]byte | ||
32 | KeyId uint64 | ||
33 | IsSubkey bool | ||
34 | |||
35 | n, e parsedMPI | ||
36 | } | ||
37 | |||
38 | // newRSAPublicKeyV3 returns a PublicKey that wraps the given rsa.PublicKey. | ||
39 | // Included here for testing purposes only. RFC 4880, section 5.5.2: | ||
40 | // "an implementation MUST NOT generate a V3 key, but MAY accept it." | ||
41 | func newRSAPublicKeyV3(creationTime time.Time, pub *rsa.PublicKey) *PublicKeyV3 { | ||
42 | pk := &PublicKeyV3{ | ||
43 | CreationTime: creationTime, | ||
44 | PublicKey: pub, | ||
45 | n: fromBig(pub.N), | ||
46 | e: fromBig(big.NewInt(int64(pub.E))), | ||
47 | } | ||
48 | |||
49 | pk.setFingerPrintAndKeyId() | ||
50 | return pk | ||
51 | } | ||
52 | |||
53 | func (pk *PublicKeyV3) parse(r io.Reader) (err error) { | ||
54 | // RFC 4880, section 5.5.2 | ||
55 | var buf [8]byte | ||
56 | if _, err = readFull(r, buf[:]); err != nil { | ||
57 | return | ||
58 | } | ||
59 | if buf[0] < 2 || buf[0] > 3 { | ||
60 | return errors.UnsupportedError("public key version") | ||
61 | } | ||
62 | pk.CreationTime = time.Unix(int64(uint32(buf[1])<<24|uint32(buf[2])<<16|uint32(buf[3])<<8|uint32(buf[4])), 0) | ||
63 | pk.DaysToExpire = binary.BigEndian.Uint16(buf[5:7]) | ||
64 | pk.PubKeyAlgo = PublicKeyAlgorithm(buf[7]) | ||
65 | switch pk.PubKeyAlgo { | ||
66 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: | ||
67 | err = pk.parseRSA(r) | ||
68 | default: | ||
69 | err = errors.UnsupportedError("public key type: " + strconv.Itoa(int(pk.PubKeyAlgo))) | ||
70 | } | ||
71 | if err != nil { | ||
72 | return | ||
73 | } | ||
74 | |||
75 | pk.setFingerPrintAndKeyId() | ||
76 | return | ||
77 | } | ||
78 | |||
79 | func (pk *PublicKeyV3) setFingerPrintAndKeyId() { | ||
80 | // RFC 4880, section 12.2 | ||
81 | fingerPrint := md5.New() | ||
82 | fingerPrint.Write(pk.n.bytes) | ||
83 | fingerPrint.Write(pk.e.bytes) | ||
84 | fingerPrint.Sum(pk.Fingerprint[:0]) | ||
85 | pk.KeyId = binary.BigEndian.Uint64(pk.n.bytes[len(pk.n.bytes)-8:]) | ||
86 | } | ||
87 | |||
88 | // parseRSA parses RSA public key material from the given Reader. See RFC 4880, | ||
89 | // section 5.5.2. | ||
90 | func (pk *PublicKeyV3) parseRSA(r io.Reader) (err error) { | ||
91 | if pk.n.bytes, pk.n.bitLength, err = readMPI(r); err != nil { | ||
92 | return | ||
93 | } | ||
94 | if pk.e.bytes, pk.e.bitLength, err = readMPI(r); err != nil { | ||
95 | return | ||
96 | } | ||
97 | |||
98 | // RFC 4880 Section 12.2 requires the low 8 bytes of the | ||
99 | // modulus to form the key id. | ||
100 | if len(pk.n.bytes) < 8 { | ||
101 | return errors.StructuralError("v3 public key modulus is too short") | ||
102 | } | ||
103 | if len(pk.e.bytes) > 3 { | ||
104 | err = errors.UnsupportedError("large public exponent") | ||
105 | return | ||
106 | } | ||
107 | rsa := &rsa.PublicKey{N: new(big.Int).SetBytes(pk.n.bytes)} | ||
108 | for i := 0; i < len(pk.e.bytes); i++ { | ||
109 | rsa.E <<= 8 | ||
110 | rsa.E |= int(pk.e.bytes[i]) | ||
111 | } | ||
112 | pk.PublicKey = rsa | ||
113 | return | ||
114 | } | ||
115 | |||
116 | // SerializeSignaturePrefix writes the prefix for this public key to the given Writer. | ||
117 | // The prefix is used when calculating a signature over this public key. See | ||
118 | // RFC 4880, section 5.2.4. | ||
119 | func (pk *PublicKeyV3) SerializeSignaturePrefix(w io.Writer) { | ||
120 | var pLength uint16 | ||
121 | switch pk.PubKeyAlgo { | ||
122 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: | ||
123 | pLength += 2 + uint16(len(pk.n.bytes)) | ||
124 | pLength += 2 + uint16(len(pk.e.bytes)) | ||
125 | default: | ||
126 | panic("unknown public key algorithm") | ||
127 | } | ||
128 | pLength += 6 | ||
129 | w.Write([]byte{0x99, byte(pLength >> 8), byte(pLength)}) | ||
130 | return | ||
131 | } | ||
132 | |||
133 | func (pk *PublicKeyV3) Serialize(w io.Writer) (err error) { | ||
134 | length := 8 // 8 byte header | ||
135 | |||
136 | switch pk.PubKeyAlgo { | ||
137 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: | ||
138 | length += 2 + len(pk.n.bytes) | ||
139 | length += 2 + len(pk.e.bytes) | ||
140 | default: | ||
141 | panic("unknown public key algorithm") | ||
142 | } | ||
143 | |||
144 | packetType := packetTypePublicKey | ||
145 | if pk.IsSubkey { | ||
146 | packetType = packetTypePublicSubkey | ||
147 | } | ||
148 | if err = serializeHeader(w, packetType, length); err != nil { | ||
149 | return | ||
150 | } | ||
151 | return pk.serializeWithoutHeaders(w) | ||
152 | } | ||
153 | |||
154 | // serializeWithoutHeaders marshals the PublicKey to w in the form of an | ||
155 | // OpenPGP public key packet, not including the packet header. | ||
156 | func (pk *PublicKeyV3) serializeWithoutHeaders(w io.Writer) (err error) { | ||
157 | var buf [8]byte | ||
158 | // Version 3 | ||
159 | buf[0] = 3 | ||
160 | // Creation time | ||
161 | t := uint32(pk.CreationTime.Unix()) | ||
162 | buf[1] = byte(t >> 24) | ||
163 | buf[2] = byte(t >> 16) | ||
164 | buf[3] = byte(t >> 8) | ||
165 | buf[4] = byte(t) | ||
166 | // Days to expire | ||
167 | buf[5] = byte(pk.DaysToExpire >> 8) | ||
168 | buf[6] = byte(pk.DaysToExpire) | ||
169 | // Public key algorithm | ||
170 | buf[7] = byte(pk.PubKeyAlgo) | ||
171 | |||
172 | if _, err = w.Write(buf[:]); err != nil { | ||
173 | return | ||
174 | } | ||
175 | |||
176 | switch pk.PubKeyAlgo { | ||
177 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: | ||
178 | return writeMPIs(w, pk.n, pk.e) | ||
179 | } | ||
180 | return errors.InvalidArgumentError("bad public-key algorithm") | ||
181 | } | ||
182 | |||
183 | // CanSign returns true iff this public key can generate signatures | ||
184 | func (pk *PublicKeyV3) CanSign() bool { | ||
185 | return pk.PubKeyAlgo != PubKeyAlgoRSAEncryptOnly | ||
186 | } | ||
187 | |||
188 | // VerifySignatureV3 returns nil iff sig is a valid signature, made by this | ||
189 | // public key, of the data hashed into signed. signed is mutated by this call. | ||
190 | func (pk *PublicKeyV3) VerifySignatureV3(signed hash.Hash, sig *SignatureV3) (err error) { | ||
191 | if !pk.CanSign() { | ||
192 | return errors.InvalidArgumentError("public key cannot generate signatures") | ||
193 | } | ||
194 | |||
195 | suffix := make([]byte, 5) | ||
196 | suffix[0] = byte(sig.SigType) | ||
197 | binary.BigEndian.PutUint32(suffix[1:], uint32(sig.CreationTime.Unix())) | ||
198 | signed.Write(suffix) | ||
199 | hashBytes := signed.Sum(nil) | ||
200 | |||
201 | if hashBytes[0] != sig.HashTag[0] || hashBytes[1] != sig.HashTag[1] { | ||
202 | return errors.SignatureError("hash tag doesn't match") | ||
203 | } | ||
204 | |||
205 | if pk.PubKeyAlgo != sig.PubKeyAlgo { | ||
206 | return errors.InvalidArgumentError("public key and signature use different algorithms") | ||
207 | } | ||
208 | |||
209 | switch pk.PubKeyAlgo { | ||
210 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: | ||
211 | if err = rsa.VerifyPKCS1v15(pk.PublicKey, sig.Hash, hashBytes, sig.RSASignature.bytes); err != nil { | ||
212 | return errors.SignatureError("RSA verification failure") | ||
213 | } | ||
214 | return | ||
215 | default: | ||
216 | // V3 public keys only support RSA. | ||
217 | panic("shouldn't happen") | ||
218 | } | ||
219 | } | ||
220 | |||
221 | // VerifyUserIdSignatureV3 returns nil iff sig is a valid signature, made by this | ||
222 | // public key, that id is the identity of pub. | ||
223 | func (pk *PublicKeyV3) VerifyUserIdSignatureV3(id string, pub *PublicKeyV3, sig *SignatureV3) (err error) { | ||
224 | h, err := userIdSignatureV3Hash(id, pk, sig.Hash) | ||
225 | if err != nil { | ||
226 | return err | ||
227 | } | ||
228 | return pk.VerifySignatureV3(h, sig) | ||
229 | } | ||
230 | |||
231 | // VerifyKeySignatureV3 returns nil iff sig is a valid signature, made by this | ||
232 | // public key, of signed. | ||
233 | func (pk *PublicKeyV3) VerifyKeySignatureV3(signed *PublicKeyV3, sig *SignatureV3) (err error) { | ||
234 | h, err := keySignatureHash(pk, signed, sig.Hash) | ||
235 | if err != nil { | ||
236 | return err | ||
237 | } | ||
238 | return pk.VerifySignatureV3(h, sig) | ||
239 | } | ||
240 | |||
241 | // userIdSignatureV3Hash returns a Hash of the message that needs to be signed | ||
242 | // to assert that pk is a valid key for id. | ||
243 | func userIdSignatureV3Hash(id string, pk signingKey, hfn crypto.Hash) (h hash.Hash, err error) { | ||
244 | if !hfn.Available() { | ||
245 | return nil, errors.UnsupportedError("hash function") | ||
246 | } | ||
247 | h = hfn.New() | ||
248 | |||
249 | // RFC 4880, section 5.2.4 | ||
250 | pk.SerializeSignaturePrefix(h) | ||
251 | pk.serializeWithoutHeaders(h) | ||
252 | |||
253 | h.Write([]byte(id)) | ||
254 | |||
255 | return | ||
256 | } | ||
257 | |||
258 | // KeyIdString returns the public key's fingerprint in capital hex | ||
259 | // (e.g. "6C7EE1B8621CC013"). | ||
260 | func (pk *PublicKeyV3) KeyIdString() string { | ||
261 | return fmt.Sprintf("%X", pk.KeyId) | ||
262 | } | ||
263 | |||
264 | // KeyIdShortString returns the short form of public key's fingerprint | ||
265 | // in capital hex, as shown by gpg --list-keys (e.g. "621CC013"). | ||
266 | func (pk *PublicKeyV3) KeyIdShortString() string { | ||
267 | return fmt.Sprintf("%X", pk.KeyId&0xFFFFFFFF) | ||
268 | } | ||
269 | |||
270 | // BitLength returns the bit length for the given public key. | ||
271 | func (pk *PublicKeyV3) BitLength() (bitLength uint16, err error) { | ||
272 | switch pk.PubKeyAlgo { | ||
273 | case PubKeyAlgoRSA, PubKeyAlgoRSAEncryptOnly, PubKeyAlgoRSASignOnly: | ||
274 | bitLength = pk.n.bitLength | ||
275 | default: | ||
276 | err = errors.InvalidArgumentError("bad public-key algorithm") | ||
277 | } | ||
278 | return | ||
279 | } | ||
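As a point of reference, the V3 fingerprint and key ID computed by setFingerPrintAndKeyId above come straight from the raw RSA modulus and exponent: MD5 over both gives the fingerprint, and the low 64 bits of the modulus give the key ID (RFC 4880, section 12.2). The sketch below reproduces that derivation outside the package; the toy 8-byte modulus is an assumption made only to keep the example self-contained.

```go
package main

import (
	"crypto/md5"
	"crypto/rsa"
	"encoding/binary"
	"fmt"
	"math/big"
)

// v3IDs mirrors setFingerPrintAndKeyId: MD5 over the modulus and exponent
// bytes gives the fingerprint, and the low 64 bits of the modulus give the
// key ID. Illustration only; real code should use packet.PublicKeyV3.
func v3IDs(pub *rsa.PublicKey) (fingerprint [16]byte, keyID uint64) {
	n := pub.N.Bytes()
	e := big.NewInt(int64(pub.E)).Bytes()
	h := md5.New()
	h.Write(n)
	h.Write(e)
	h.Sum(fingerprint[:0])
	keyID = binary.BigEndian.Uint64(n[len(n)-8:])
	return
}

func main() {
	// A toy 8-byte modulus; real V3 keys are at least 1024 bits.
	pub := &rsa.PublicKey{N: new(big.Int).SetUint64(0x0123456789abcdef), E: 65537}
	fp, id := v3IDs(pub)
	fmt.Printf("fingerprint: %X\nkey ID: %X\n", fp, id)
}
```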
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/reader.go b/vendor/golang.org/x/crypto/openpgp/packet/reader.go new file mode 100644 index 0000000..34bc7c6 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/reader.go | |||
@@ -0,0 +1,76 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "golang.org/x/crypto/openpgp/errors" | ||
9 | "io" | ||
10 | ) | ||
11 | |||
12 | // Reader reads packets from an io.Reader and allows packets to be 'unread' so | ||
13 | // that they result from the next call to Next. | ||
14 | type Reader struct { | ||
15 | q []Packet | ||
16 | readers []io.Reader | ||
17 | } | ||
18 | |||
19 | // New io.Readers are pushed when a compressed or encrypted packet is processed | ||
20 | // and recursively treated as a new source of packets. However, a carefully | ||
21 | // crafted packet can trigger an infinite recursive sequence of packets. See | ||
22 | // http://mumble.net/~campbell/misc/pgp-quine | ||
23 | // https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-4402 | ||
24 | // This constant limits the number of recursive packets that may be pushed. | ||
25 | const maxReaders = 32 | ||
26 | |||
27 | // Next returns the most recently unread Packet, or reads another packet from | ||
28 | // the top-most io.Reader. Unknown packet types are skipped. | ||
29 | func (r *Reader) Next() (p Packet, err error) { | ||
30 | if len(r.q) > 0 { | ||
31 | p = r.q[len(r.q)-1] | ||
32 | r.q = r.q[:len(r.q)-1] | ||
33 | return | ||
34 | } | ||
35 | |||
36 | for len(r.readers) > 0 { | ||
37 | p, err = Read(r.readers[len(r.readers)-1]) | ||
38 | if err == nil { | ||
39 | return | ||
40 | } | ||
41 | if err == io.EOF { | ||
42 | r.readers = r.readers[:len(r.readers)-1] | ||
43 | continue | ||
44 | } | ||
45 | if _, ok := err.(errors.UnknownPacketTypeError); !ok { | ||
46 | return nil, err | ||
47 | } | ||
48 | } | ||
49 | |||
50 | return nil, io.EOF | ||
51 | } | ||
52 | |||
53 | // Push causes the Reader to start reading from a new io.Reader. When an EOF | ||
54 | // error is seen from the new io.Reader, it is popped and the Reader continues | ||
55 | // to read from the next most recent io.Reader. Push returns a StructuralError | ||
56 | // if pushing the reader would exceed the maximum recursion level, otherwise it | ||
57 | // returns nil. | ||
58 | func (r *Reader) Push(reader io.Reader) (err error) { | ||
59 | if len(r.readers) >= maxReaders { | ||
60 | return errors.StructuralError("too many layers of packets") | ||
61 | } | ||
62 | r.readers = append(r.readers, reader) | ||
63 | return nil | ||
64 | } | ||
65 | |||
66 | // Unread causes the given Packet to be returned from the next call to Next. | ||
67 | func (r *Reader) Unread(p Packet) { | ||
68 | r.q = append(r.q, p) | ||
69 | } | ||
70 | |||
71 | func NewReader(r io.Reader) *Reader { | ||
72 | return &Reader{ | ||
73 | q: nil, | ||
74 | readers: []io.Reader{r}, | ||
75 | } | ||
76 | } | ||
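A minimal sketch of how this Reader is normally driven: wrap a source with NewReader, then call Next until io.EOF, letting unknown packet types be skipped as documented above. The file name message.gpg is a hypothetical input.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"golang.org/x/crypto/openpgp/packet"
)

func main() {
	// "message.gpg" is a hypothetical binary OpenPGP message.
	f, err := os.Open("message.gpg")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	r := packet.NewReader(f)
	for {
		p, err := r.Next()
		if err == io.EOF {
			break // every pushed reader has been exhausted
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("read packet of type %T\n", p)
	}
}
```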
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/signature.go b/vendor/golang.org/x/crypto/openpgp/packet/signature.go new file mode 100644 index 0000000..6ce0cbe --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/signature.go | |||
@@ -0,0 +1,731 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "bytes" | ||
9 | "crypto" | ||
10 | "crypto/dsa" | ||
11 | "crypto/ecdsa" | ||
12 | "encoding/asn1" | ||
13 | "encoding/binary" | ||
14 | "hash" | ||
15 | "io" | ||
16 | "math/big" | ||
17 | "strconv" | ||
18 | "time" | ||
19 | |||
20 | "golang.org/x/crypto/openpgp/errors" | ||
21 | "golang.org/x/crypto/openpgp/s2k" | ||
22 | ) | ||
23 | |||
24 | const ( | ||
25 | // See RFC 4880, section 5.2.3.21 for details. | ||
26 | KeyFlagCertify = 1 << iota | ||
27 | KeyFlagSign | ||
28 | KeyFlagEncryptCommunications | ||
29 | KeyFlagEncryptStorage | ||
30 | ) | ||
31 | |||
32 | // Signature represents a signature. See RFC 4880, section 5.2. | ||
33 | type Signature struct { | ||
34 | SigType SignatureType | ||
35 | PubKeyAlgo PublicKeyAlgorithm | ||
36 | Hash crypto.Hash | ||
37 | |||
38 | // HashSuffix is extra data that is hashed in after the signed data. | ||
39 | HashSuffix []byte | ||
40 | // HashTag contains the first two bytes of the hash for fast rejection | ||
41 | // of bad signed data. | ||
42 | HashTag [2]byte | ||
43 | CreationTime time.Time | ||
44 | |||
45 | RSASignature parsedMPI | ||
46 | DSASigR, DSASigS parsedMPI | ||
47 | ECDSASigR, ECDSASigS parsedMPI | ||
48 | |||
49 | // rawSubpackets contains the unparsed subpackets, in order. | ||
50 | rawSubpackets []outputSubpacket | ||
51 | |||
52 | // The following are optional so are nil when not included in the | ||
53 | // signature. | ||
54 | |||
55 | SigLifetimeSecs, KeyLifetimeSecs *uint32 | ||
56 | PreferredSymmetric, PreferredHash, PreferredCompression []uint8 | ||
57 | IssuerKeyId *uint64 | ||
58 | IsPrimaryId *bool | ||
59 | |||
60 | // FlagsValid is set if any flags were given. See RFC 4880, section | ||
61 | // 5.2.3.21 for details. | ||
62 | FlagsValid bool | ||
63 | FlagCertify, FlagSign, FlagEncryptCommunications, FlagEncryptStorage bool | ||
64 | |||
65 | // RevocationReason is set if this signature has been revoked. | ||
66 | // See RFC 4880, section 5.2.3.23 for details. | ||
67 | RevocationReason *uint8 | ||
68 | RevocationReasonText string | ||
69 | |||
70 | // MDC is set if this signature has a feature packet that indicates | ||
71 | // support for MDC subpackets. | ||
72 | MDC bool | ||
73 | |||
74 | // EmbeddedSignature, if non-nil, is a signature of the parent key, by | ||
75 | // this key. This prevents an attacker from claiming another's signing | ||
76 | // subkey as their own. | ||
77 | EmbeddedSignature *Signature | ||
78 | |||
79 | outSubpackets []outputSubpacket | ||
80 | } | ||
81 | |||
82 | func (sig *Signature) parse(r io.Reader) (err error) { | ||
83 | // RFC 4880, section 5.2.3 | ||
84 | var buf [5]byte | ||
85 | _, err = readFull(r, buf[:1]) | ||
86 | if err != nil { | ||
87 | return | ||
88 | } | ||
89 | if buf[0] != 4 { | ||
90 | err = errors.UnsupportedError("signature packet version " + strconv.Itoa(int(buf[0]))) | ||
91 | return | ||
92 | } | ||
93 | |||
94 | _, err = readFull(r, buf[:5]) | ||
95 | if err != nil { | ||
96 | return | ||
97 | } | ||
98 | sig.SigType = SignatureType(buf[0]) | ||
99 | sig.PubKeyAlgo = PublicKeyAlgorithm(buf[1]) | ||
100 | switch sig.PubKeyAlgo { | ||
101 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoDSA, PubKeyAlgoECDSA: | ||
102 | default: | ||
103 | err = errors.UnsupportedError("public key algorithm " + strconv.Itoa(int(sig.PubKeyAlgo))) | ||
104 | return | ||
105 | } | ||
106 | |||
107 | var ok bool | ||
108 | sig.Hash, ok = s2k.HashIdToHash(buf[2]) | ||
109 | if !ok { | ||
110 | return errors.UnsupportedError("hash function " + strconv.Itoa(int(buf[2]))) | ||
111 | } | ||
112 | |||
113 | hashedSubpacketsLength := int(buf[3])<<8 | int(buf[4]) | ||
114 | l := 6 + hashedSubpacketsLength | ||
115 | sig.HashSuffix = make([]byte, l+6) | ||
116 | sig.HashSuffix[0] = 4 | ||
117 | copy(sig.HashSuffix[1:], buf[:5]) | ||
118 | hashedSubpackets := sig.HashSuffix[6:l] | ||
119 | _, err = readFull(r, hashedSubpackets) | ||
120 | if err != nil { | ||
121 | return | ||
122 | } | ||
123 | // See RFC 4880, section 5.2.4 | ||
124 | trailer := sig.HashSuffix[l:] | ||
125 | trailer[0] = 4 | ||
126 | trailer[1] = 0xff | ||
127 | trailer[2] = uint8(l >> 24) | ||
128 | trailer[3] = uint8(l >> 16) | ||
129 | trailer[4] = uint8(l >> 8) | ||
130 | trailer[5] = uint8(l) | ||
131 | |||
132 | err = parseSignatureSubpackets(sig, hashedSubpackets, true) | ||
133 | if err != nil { | ||
134 | return | ||
135 | } | ||
136 | |||
137 | _, err = readFull(r, buf[:2]) | ||
138 | if err != nil { | ||
139 | return | ||
140 | } | ||
141 | unhashedSubpacketsLength := int(buf[0])<<8 | int(buf[1]) | ||
142 | unhashedSubpackets := make([]byte, unhashedSubpacketsLength) | ||
143 | _, err = readFull(r, unhashedSubpackets) | ||
144 | if err != nil { | ||
145 | return | ||
146 | } | ||
147 | err = parseSignatureSubpackets(sig, unhashedSubpackets, false) | ||
148 | if err != nil { | ||
149 | return | ||
150 | } | ||
151 | |||
152 | _, err = readFull(r, sig.HashTag[:2]) | ||
153 | if err != nil { | ||
154 | return | ||
155 | } | ||
156 | |||
157 | switch sig.PubKeyAlgo { | ||
158 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: | ||
159 | sig.RSASignature.bytes, sig.RSASignature.bitLength, err = readMPI(r) | ||
160 | case PubKeyAlgoDSA: | ||
161 | sig.DSASigR.bytes, sig.DSASigR.bitLength, err = readMPI(r) | ||
162 | if err == nil { | ||
163 | sig.DSASigS.bytes, sig.DSASigS.bitLength, err = readMPI(r) | ||
164 | } | ||
165 | case PubKeyAlgoECDSA: | ||
166 | sig.ECDSASigR.bytes, sig.ECDSASigR.bitLength, err = readMPI(r) | ||
167 | if err == nil { | ||
168 | sig.ECDSASigS.bytes, sig.ECDSASigS.bitLength, err = readMPI(r) | ||
169 | } | ||
170 | default: | ||
171 | panic("unreachable") | ||
172 | } | ||
173 | return | ||
174 | } | ||
175 | |||
176 | // parseSignatureSubpackets parses subpackets of the main signature packet. See | ||
177 | // RFC 4880, section 5.2.3.1. | ||
178 | func parseSignatureSubpackets(sig *Signature, subpackets []byte, isHashed bool) (err error) { | ||
179 | for len(subpackets) > 0 { | ||
180 | subpackets, err = parseSignatureSubpacket(sig, subpackets, isHashed) | ||
181 | if err != nil { | ||
182 | return | ||
183 | } | ||
184 | } | ||
185 | |||
186 | if sig.CreationTime.IsZero() { | ||
187 | err = errors.StructuralError("no creation time in signature") | ||
188 | } | ||
189 | |||
190 | return | ||
191 | } | ||
192 | |||
193 | type signatureSubpacketType uint8 | ||
194 | |||
195 | const ( | ||
196 | creationTimeSubpacket signatureSubpacketType = 2 | ||
197 | signatureExpirationSubpacket signatureSubpacketType = 3 | ||
198 | keyExpirationSubpacket signatureSubpacketType = 9 | ||
199 | prefSymmetricAlgosSubpacket signatureSubpacketType = 11 | ||
200 | issuerSubpacket signatureSubpacketType = 16 | ||
201 | prefHashAlgosSubpacket signatureSubpacketType = 21 | ||
202 | prefCompressionSubpacket signatureSubpacketType = 22 | ||
203 | primaryUserIdSubpacket signatureSubpacketType = 25 | ||
204 | keyFlagsSubpacket signatureSubpacketType = 27 | ||
205 | reasonForRevocationSubpacket signatureSubpacketType = 29 | ||
206 | featuresSubpacket signatureSubpacketType = 30 | ||
207 | embeddedSignatureSubpacket signatureSubpacketType = 32 | ||
208 | ) | ||
209 | |||
210 | // parseSignatureSubpacket parses a single subpacket. len(subpacket) is >= 1. | ||
211 | func parseSignatureSubpacket(sig *Signature, subpacket []byte, isHashed bool) (rest []byte, err error) { | ||
212 | // RFC 4880, section 5.2.3.1 | ||
213 | var ( | ||
214 | length uint32 | ||
215 | packetType signatureSubpacketType | ||
216 | isCritical bool | ||
217 | ) | ||
218 | switch { | ||
219 | case subpacket[0] < 192: | ||
220 | length = uint32(subpacket[0]) | ||
221 | subpacket = subpacket[1:] | ||
222 | case subpacket[0] < 255: | ||
223 | if len(subpacket) < 2 { | ||
224 | goto Truncated | ||
225 | } | ||
226 | length = uint32(subpacket[0]-192)<<8 + uint32(subpacket[1]) + 192 | ||
227 | subpacket = subpacket[2:] | ||
228 | default: | ||
229 | if len(subpacket) < 5 { | ||
230 | goto Truncated | ||
231 | } | ||
232 | length = uint32(subpacket[1])<<24 | | ||
233 | uint32(subpacket[2])<<16 | | ||
234 | uint32(subpacket[3])<<8 | | ||
235 | uint32(subpacket[4]) | ||
236 | subpacket = subpacket[5:] | ||
237 | } | ||
238 | if length > uint32(len(subpacket)) { | ||
239 | goto Truncated | ||
240 | } | ||
241 | rest = subpacket[length:] | ||
242 | subpacket = subpacket[:length] | ||
243 | if len(subpacket) == 0 { | ||
244 | err = errors.StructuralError("zero length signature subpacket") | ||
245 | return | ||
246 | } | ||
247 | packetType = signatureSubpacketType(subpacket[0] & 0x7f) | ||
248 | isCritical = subpacket[0]&0x80 == 0x80 | ||
249 | subpacket = subpacket[1:] | ||
250 | sig.rawSubpackets = append(sig.rawSubpackets, outputSubpacket{isHashed, packetType, isCritical, subpacket}) | ||
251 | switch packetType { | ||
252 | case creationTimeSubpacket: | ||
253 | if !isHashed { | ||
254 | err = errors.StructuralError("signature creation time in non-hashed area") | ||
255 | return | ||
256 | } | ||
257 | if len(subpacket) != 4 { | ||
258 | err = errors.StructuralError("signature creation time not four bytes") | ||
259 | return | ||
260 | } | ||
261 | t := binary.BigEndian.Uint32(subpacket) | ||
262 | sig.CreationTime = time.Unix(int64(t), 0) | ||
263 | case signatureExpirationSubpacket: | ||
264 | // Signature expiration time, section 5.2.3.10 | ||
265 | if !isHashed { | ||
266 | return | ||
267 | } | ||
268 | if len(subpacket) != 4 { | ||
269 | err = errors.StructuralError("expiration subpacket with bad length") | ||
270 | return | ||
271 | } | ||
272 | sig.SigLifetimeSecs = new(uint32) | ||
273 | *sig.SigLifetimeSecs = binary.BigEndian.Uint32(subpacket) | ||
274 | case keyExpirationSubpacket: | ||
275 | // Key expiration time, section 5.2.3.6 | ||
276 | if !isHashed { | ||
277 | return | ||
278 | } | ||
279 | if len(subpacket) != 4 { | ||
280 | err = errors.StructuralError("key expiration subpacket with bad length") | ||
281 | return | ||
282 | } | ||
283 | sig.KeyLifetimeSecs = new(uint32) | ||
284 | *sig.KeyLifetimeSecs = binary.BigEndian.Uint32(subpacket) | ||
285 | case prefSymmetricAlgosSubpacket: | ||
286 | // Preferred symmetric algorithms, section 5.2.3.7 | ||
287 | if !isHashed { | ||
288 | return | ||
289 | } | ||
290 | sig.PreferredSymmetric = make([]byte, len(subpacket)) | ||
291 | copy(sig.PreferredSymmetric, subpacket) | ||
292 | case issuerSubpacket: | ||
293 | // Issuer, section 5.2.3.5 | ||
294 | if len(subpacket) != 8 { | ||
295 | err = errors.StructuralError("issuer subpacket with bad length") | ||
296 | return | ||
297 | } | ||
298 | sig.IssuerKeyId = new(uint64) | ||
299 | *sig.IssuerKeyId = binary.BigEndian.Uint64(subpacket) | ||
300 | case prefHashAlgosSubpacket: | ||
301 | // Preferred hash algorithms, section 5.2.3.8 | ||
302 | if !isHashed { | ||
303 | return | ||
304 | } | ||
305 | sig.PreferredHash = make([]byte, len(subpacket)) | ||
306 | copy(sig.PreferredHash, subpacket) | ||
307 | case prefCompressionSubpacket: | ||
308 | // Preferred compression algorithms, section 5.2.3.9 | ||
309 | if !isHashed { | ||
310 | return | ||
311 | } | ||
312 | sig.PreferredCompression = make([]byte, len(subpacket)) | ||
313 | copy(sig.PreferredCompression, subpacket) | ||
314 | case primaryUserIdSubpacket: | ||
315 | // Primary User ID, section 5.2.3.19 | ||
316 | if !isHashed { | ||
317 | return | ||
318 | } | ||
319 | if len(subpacket) != 1 { | ||
320 | err = errors.StructuralError("primary user id subpacket with bad length") | ||
321 | return | ||
322 | } | ||
323 | sig.IsPrimaryId = new(bool) | ||
324 | if subpacket[0] > 0 { | ||
325 | *sig.IsPrimaryId = true | ||
326 | } | ||
327 | case keyFlagsSubpacket: | ||
328 | // Key flags, section 5.2.3.21 | ||
329 | if !isHashed { | ||
330 | return | ||
331 | } | ||
332 | if len(subpacket) == 0 { | ||
333 | err = errors.StructuralError("empty key flags subpacket") | ||
334 | return | ||
335 | } | ||
336 | sig.FlagsValid = true | ||
337 | if subpacket[0]&KeyFlagCertify != 0 { | ||
338 | sig.FlagCertify = true | ||
339 | } | ||
340 | if subpacket[0]&KeyFlagSign != 0 { | ||
341 | sig.FlagSign = true | ||
342 | } | ||
343 | if subpacket[0]&KeyFlagEncryptCommunications != 0 { | ||
344 | sig.FlagEncryptCommunications = true | ||
345 | } | ||
346 | if subpacket[0]&KeyFlagEncryptStorage != 0 { | ||
347 | sig.FlagEncryptStorage = true | ||
348 | } | ||
349 | case reasonForRevocationSubpacket: | ||
350 | // Reason For Revocation, section 5.2.3.23 | ||
351 | if !isHashed { | ||
352 | return | ||
353 | } | ||
354 | if len(subpacket) == 0 { | ||
355 | err = errors.StructuralError("empty revocation reason subpacket") | ||
356 | return | ||
357 | } | ||
358 | sig.RevocationReason = new(uint8) | ||
359 | *sig.RevocationReason = subpacket[0] | ||
360 | sig.RevocationReasonText = string(subpacket[1:]) | ||
361 | case featuresSubpacket: | ||
362 | // Features subpacket, section 5.2.3.24 specifies a very general | ||
363 | // mechanism for OpenPGP implementations to signal support for new | ||
364 | // features. In practice, the subpacket is used exclusively to | ||
365 | // indicate support for MDC-protected encryption. | ||
366 | sig.MDC = len(subpacket) >= 1 && subpacket[0]&1 == 1 | ||
367 | case embeddedSignatureSubpacket: | ||
368 | // Only usage is in signatures that cross-certify | ||
369 | // signing subkeys. section 5.2.3.26 describes the | ||
370 | // format, with its usage described in section 11.1 | ||
371 | if sig.EmbeddedSignature != nil { | ||
372 | err = errors.StructuralError("Cannot have multiple embedded signatures") | ||
373 | return | ||
374 | } | ||
375 | sig.EmbeddedSignature = new(Signature) | ||
376 | // Embedded signatures are required to be v4 signatures see | ||
377 | // section 12.1. However, we only parse v4 signatures in this | ||
378 | // file anyway. | ||
379 | if err := sig.EmbeddedSignature.parse(bytes.NewBuffer(subpacket)); err != nil { | ||
380 | return nil, err | ||
381 | } | ||
382 | if sigType := sig.EmbeddedSignature.SigType; sigType != SigTypePrimaryKeyBinding { | ||
383 | return nil, errors.StructuralError("cross-signature has unexpected type " + strconv.Itoa(int(sigType))) | ||
384 | } | ||
385 | default: | ||
386 | if isCritical { | ||
387 | err = errors.UnsupportedError("unknown critical signature subpacket type " + strconv.Itoa(int(packetType))) | ||
388 | return | ||
389 | } | ||
390 | } | ||
391 | return | ||
392 | |||
393 | Truncated: | ||
394 | err = errors.StructuralError("signature subpacket truncated") | ||
395 | return | ||
396 | } | ||
397 | |||
398 | // subpacketLengthLength returns the length, in bytes, of an encoded length value. | ||
399 | func subpacketLengthLength(length int) int { | ||
400 | if length < 192 { | ||
401 | return 1 | ||
402 | } | ||
403 | if length < 16320 { | ||
404 | return 2 | ||
405 | } | ||
406 | return 5 | ||
407 | } | ||
408 | |||
409 | // serializeSubpacketLength marshals the given length into to. | ||
410 | func serializeSubpacketLength(to []byte, length int) int { | ||
411 | // RFC 4880, Section 4.2.2. | ||
412 | if length < 192 { | ||
413 | to[0] = byte(length) | ||
414 | return 1 | ||
415 | } | ||
416 | if length < 16320 { | ||
417 | length -= 192 | ||
418 | to[0] = byte((length >> 8) + 192) | ||
419 | to[1] = byte(length) | ||
420 | return 2 | ||
421 | } | ||
422 | to[0] = 255 | ||
423 | to[1] = byte(length >> 24) | ||
424 | to[2] = byte(length >> 16) | ||
425 | to[3] = byte(length >> 8) | ||
426 | to[4] = byte(length) | ||
427 | return 5 | ||
428 | } | ||
429 | |||
430 | // subpacketsLength returns the serialized length, in bytes, of the given | ||
431 | // subpackets. | ||
432 | func subpacketsLength(subpackets []outputSubpacket, hashed bool) (length int) { | ||
433 | for _, subpacket := range subpackets { | ||
434 | if subpacket.hashed == hashed { | ||
435 | length += subpacketLengthLength(len(subpacket.contents) + 1) | ||
436 | length += 1 // type byte | ||
437 | length += len(subpacket.contents) | ||
438 | } | ||
439 | } | ||
440 | return | ||
441 | } | ||
442 | |||
443 | // serializeSubpackets marshals the given subpackets into to. | ||
444 | func serializeSubpackets(to []byte, subpackets []outputSubpacket, hashed bool) { | ||
445 | for _, subpacket := range subpackets { | ||
446 | if subpacket.hashed == hashed { | ||
447 | n := serializeSubpacketLength(to, len(subpacket.contents)+1) | ||
448 | to[n] = byte(subpacket.subpacketType) | ||
449 | to = to[1+n:] | ||
450 | n = copy(to, subpacket.contents) | ||
451 | to = to[n:] | ||
452 | } | ||
453 | } | ||
454 | return | ||
455 | } | ||
456 | |||
457 | // KeyExpired returns whether sig is a self-signature of a key that has | ||
458 | // expired. | ||
459 | func (sig *Signature) KeyExpired(currentTime time.Time) bool { | ||
460 | if sig.KeyLifetimeSecs == nil { | ||
461 | return false | ||
462 | } | ||
463 | expiry := sig.CreationTime.Add(time.Duration(*sig.KeyLifetimeSecs) * time.Second) | ||
464 | return currentTime.After(expiry) | ||
465 | } | ||
466 | |||
467 | // buildHashSuffix constructs the HashSuffix member of sig in preparation for signing. | ||
468 | func (sig *Signature) buildHashSuffix() (err error) { | ||
469 | hashedSubpacketsLen := subpacketsLength(sig.outSubpackets, true) | ||
470 | |||
471 | var ok bool | ||
472 | l := 6 + hashedSubpacketsLen | ||
473 | sig.HashSuffix = make([]byte, l+6) | ||
474 | sig.HashSuffix[0] = 4 | ||
475 | sig.HashSuffix[1] = uint8(sig.SigType) | ||
476 | sig.HashSuffix[2] = uint8(sig.PubKeyAlgo) | ||
477 | sig.HashSuffix[3], ok = s2k.HashToHashId(sig.Hash) | ||
478 | if !ok { | ||
479 | sig.HashSuffix = nil | ||
480 | return errors.InvalidArgumentError("hash cannot be represented in OpenPGP: " + strconv.Itoa(int(sig.Hash))) | ||
481 | } | ||
482 | sig.HashSuffix[4] = byte(hashedSubpacketsLen >> 8) | ||
483 | sig.HashSuffix[5] = byte(hashedSubpacketsLen) | ||
484 | serializeSubpackets(sig.HashSuffix[6:l], sig.outSubpackets, true) | ||
485 | trailer := sig.HashSuffix[l:] | ||
486 | trailer[0] = 4 | ||
487 | trailer[1] = 0xff | ||
488 | trailer[2] = byte(l >> 24) | ||
489 | trailer[3] = byte(l >> 16) | ||
490 | trailer[4] = byte(l >> 8) | ||
491 | trailer[5] = byte(l) | ||
492 | return | ||
493 | } | ||
494 | |||
495 | func (sig *Signature) signPrepareHash(h hash.Hash) (digest []byte, err error) { | ||
496 | err = sig.buildHashSuffix() | ||
497 | if err != nil { | ||
498 | return | ||
499 | } | ||
500 | |||
501 | h.Write(sig.HashSuffix) | ||
502 | digest = h.Sum(nil) | ||
503 | copy(sig.HashTag[:], digest) | ||
504 | return | ||
505 | } | ||
506 | |||
507 | // Sign signs a message with a private key. The hash, h, must contain | ||
508 | // the hash of the message to be signed and will be mutated by this function. | ||
509 | // On success, the signature is stored in sig. Call Serialize to write it out. | ||
510 | // If config is nil, sensible defaults will be used. | ||
511 | func (sig *Signature) Sign(h hash.Hash, priv *PrivateKey, config *Config) (err error) { | ||
512 | sig.outSubpackets = sig.buildSubpackets() | ||
513 | digest, err := sig.signPrepareHash(h) | ||
514 | if err != nil { | ||
515 | return | ||
516 | } | ||
517 | |||
518 | switch priv.PubKeyAlgo { | ||
519 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: | ||
520 | // supports both *rsa.PrivateKey and crypto.Signer | ||
521 | sig.RSASignature.bytes, err = priv.PrivateKey.(crypto.Signer).Sign(config.Random(), digest, sig.Hash) | ||
522 | sig.RSASignature.bitLength = uint16(8 * len(sig.RSASignature.bytes)) | ||
523 | case PubKeyAlgoDSA: | ||
524 | dsaPriv := priv.PrivateKey.(*dsa.PrivateKey) | ||
525 | |||
526 | // Need to truncate hashBytes to match FIPS 186-3 section 4.6. | ||
527 | subgroupSize := (dsaPriv.Q.BitLen() + 7) / 8 | ||
528 | if len(digest) > subgroupSize { | ||
529 | digest = digest[:subgroupSize] | ||
530 | } | ||
531 | r, s, err := dsa.Sign(config.Random(), dsaPriv, digest) | ||
532 | if err == nil { | ||
533 | sig.DSASigR.bytes = r.Bytes() | ||
534 | sig.DSASigR.bitLength = uint16(8 * len(sig.DSASigR.bytes)) | ||
535 | sig.DSASigS.bytes = s.Bytes() | ||
536 | sig.DSASigS.bitLength = uint16(8 * len(sig.DSASigS.bytes)) | ||
537 | } | ||
538 | case PubKeyAlgoECDSA: | ||
539 | var r, s *big.Int | ||
540 | if pk, ok := priv.PrivateKey.(*ecdsa.PrivateKey); ok { | ||
541 | // direct support, avoid asn1 wrapping/unwrapping | ||
542 | r, s, err = ecdsa.Sign(config.Random(), pk, digest) | ||
543 | } else { | ||
544 | var b []byte | ||
545 | b, err = priv.PrivateKey.(crypto.Signer).Sign(config.Random(), digest, nil) | ||
546 | if err == nil { | ||
547 | r, s, err = unwrapECDSASig(b) | ||
548 | } | ||
549 | } | ||
550 | if err == nil { | ||
551 | sig.ECDSASigR = fromBig(r) | ||
552 | sig.ECDSASigS = fromBig(s) | ||
553 | } | ||
554 | default: | ||
555 | err = errors.UnsupportedError("public key algorithm: " + strconv.Itoa(int(sig.PubKeyAlgo))) | ||
556 | } | ||
557 | |||
558 | return | ||
559 | } | ||
560 | |||
561 | // unwrapECDSASig parses the two integer components of an ASN.1-encoded ECDSA | ||
562 | // signature. | ||
563 | func unwrapECDSASig(b []byte) (r, s *big.Int, err error) { | ||
564 | var ecdsaSig struct { | ||
565 | R, S *big.Int | ||
566 | } | ||
567 | _, err = asn1.Unmarshal(b, &ecdsaSig) | ||
568 | if err != nil { | ||
569 | return | ||
570 | } | ||
571 | return ecdsaSig.R, ecdsaSig.S, nil | ||
572 | } | ||
573 | |||
574 | // SignUserId computes a signature from priv, asserting that pub is a valid | ||
575 | // key for the identity id. On success, the signature is stored in sig. Call | ||
576 | // Serialize to write it out. | ||
577 | // If config is nil, sensible defaults will be used. | ||
578 | func (sig *Signature) SignUserId(id string, pub *PublicKey, priv *PrivateKey, config *Config) error { | ||
579 | h, err := userIdSignatureHash(id, pub, sig.Hash) | ||
580 | if err != nil { | ||
581 | return err | ||
582 | } | ||
583 | return sig.Sign(h, priv, config) | ||
584 | } | ||
585 | |||
586 | // SignKey computes a signature from priv, asserting that pub is a subkey. On | ||
587 | // success, the signature is stored in sig. Call Serialize to write it out. | ||
588 | // If config is nil, sensible defaults will be used. | ||
589 | func (sig *Signature) SignKey(pub *PublicKey, priv *PrivateKey, config *Config) error { | ||
590 | h, err := keySignatureHash(&priv.PublicKey, pub, sig.Hash) | ||
591 | if err != nil { | ||
592 | return err | ||
593 | } | ||
594 | return sig.Sign(h, priv, config) | ||
595 | } | ||
596 | |||
597 | // Serialize marshals sig to w. Sign, SignUserId or SignKey must have been | ||
598 | // called first. | ||
599 | func (sig *Signature) Serialize(w io.Writer) (err error) { | ||
600 | if len(sig.outSubpackets) == 0 { | ||
601 | sig.outSubpackets = sig.rawSubpackets | ||
602 | } | ||
603 | if sig.RSASignature.bytes == nil && sig.DSASigR.bytes == nil && sig.ECDSASigR.bytes == nil { | ||
604 | return errors.InvalidArgumentError("Signature: need to call Sign, SignUserId or SignKey before Serialize") | ||
605 | } | ||
606 | |||
607 | sigLength := 0 | ||
608 | switch sig.PubKeyAlgo { | ||
609 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: | ||
610 | sigLength = 2 + len(sig.RSASignature.bytes) | ||
611 | case PubKeyAlgoDSA: | ||
612 | sigLength = 2 + len(sig.DSASigR.bytes) | ||
613 | sigLength += 2 + len(sig.DSASigS.bytes) | ||
614 | case PubKeyAlgoECDSA: | ||
615 | sigLength = 2 + len(sig.ECDSASigR.bytes) | ||
616 | sigLength += 2 + len(sig.ECDSASigS.bytes) | ||
617 | default: | ||
618 | panic("impossible") | ||
619 | } | ||
620 | |||
621 | unhashedSubpacketsLen := subpacketsLength(sig.outSubpackets, false) | ||
622 | length := len(sig.HashSuffix) - 6 /* trailer not included */ + | ||
623 | 2 /* length of unhashed subpackets */ + unhashedSubpacketsLen + | ||
624 | 2 /* hash tag */ + sigLength | ||
625 | err = serializeHeader(w, packetTypeSignature, length) | ||
626 | if err != nil { | ||
627 | return | ||
628 | } | ||
629 | |||
630 | _, err = w.Write(sig.HashSuffix[:len(sig.HashSuffix)-6]) | ||
631 | if err != nil { | ||
632 | return | ||
633 | } | ||
634 | |||
635 | unhashedSubpackets := make([]byte, 2+unhashedSubpacketsLen) | ||
636 | unhashedSubpackets[0] = byte(unhashedSubpacketsLen >> 8) | ||
637 | unhashedSubpackets[1] = byte(unhashedSubpacketsLen) | ||
638 | serializeSubpackets(unhashedSubpackets[2:], sig.outSubpackets, false) | ||
639 | |||
640 | _, err = w.Write(unhashedSubpackets) | ||
641 | if err != nil { | ||
642 | return | ||
643 | } | ||
644 | _, err = w.Write(sig.HashTag[:]) | ||
645 | if err != nil { | ||
646 | return | ||
647 | } | ||
648 | |||
649 | switch sig.PubKeyAlgo { | ||
650 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: | ||
651 | err = writeMPIs(w, sig.RSASignature) | ||
652 | case PubKeyAlgoDSA: | ||
653 | err = writeMPIs(w, sig.DSASigR, sig.DSASigS) | ||
654 | case PubKeyAlgoECDSA: | ||
655 | err = writeMPIs(w, sig.ECDSASigR, sig.ECDSASigS) | ||
656 | default: | ||
657 | panic("impossible") | ||
658 | } | ||
659 | return | ||
660 | } | ||
661 | |||
662 | // outputSubpacket represents a subpacket to be marshaled. | ||
663 | type outputSubpacket struct { | ||
664 | hashed bool // true if this subpacket is in the hashed area. | ||
665 | subpacketType signatureSubpacketType | ||
666 | isCritical bool | ||
667 | contents []byte | ||
668 | } | ||
669 | |||
670 | func (sig *Signature) buildSubpackets() (subpackets []outputSubpacket) { | ||
671 | creationTime := make([]byte, 4) | ||
672 | binary.BigEndian.PutUint32(creationTime, uint32(sig.CreationTime.Unix())) | ||
673 | subpackets = append(subpackets, outputSubpacket{true, creationTimeSubpacket, false, creationTime}) | ||
674 | |||
675 | if sig.IssuerKeyId != nil { | ||
676 | keyId := make([]byte, 8) | ||
677 | binary.BigEndian.PutUint64(keyId, *sig.IssuerKeyId) | ||
678 | subpackets = append(subpackets, outputSubpacket{true, issuerSubpacket, false, keyId}) | ||
679 | } | ||
680 | |||
681 | if sig.SigLifetimeSecs != nil && *sig.SigLifetimeSecs != 0 { | ||
682 | sigLifetime := make([]byte, 4) | ||
683 | binary.BigEndian.PutUint32(sigLifetime, *sig.SigLifetimeSecs) | ||
684 | subpackets = append(subpackets, outputSubpacket{true, signatureExpirationSubpacket, true, sigLifetime}) | ||
685 | } | ||
686 | |||
687 | // Key flags may only appear in self-signatures or certification signatures. | ||
688 | |||
689 | if sig.FlagsValid { | ||
690 | var flags byte | ||
691 | if sig.FlagCertify { | ||
692 | flags |= KeyFlagCertify | ||
693 | } | ||
694 | if sig.FlagSign { | ||
695 | flags |= KeyFlagSign | ||
696 | } | ||
697 | if sig.FlagEncryptCommunications { | ||
698 | flags |= KeyFlagEncryptCommunications | ||
699 | } | ||
700 | if sig.FlagEncryptStorage { | ||
701 | flags |= KeyFlagEncryptStorage | ||
702 | } | ||
703 | subpackets = append(subpackets, outputSubpacket{true, keyFlagsSubpacket, false, []byte{flags}}) | ||
704 | } | ||
705 | |||
706 | // The following subpackets may only appear in self-signatures | ||
707 | |||
708 | if sig.KeyLifetimeSecs != nil && *sig.KeyLifetimeSecs != 0 { | ||
709 | keyLifetime := make([]byte, 4) | ||
710 | binary.BigEndian.PutUint32(keyLifetime, *sig.KeyLifetimeSecs) | ||
711 | subpackets = append(subpackets, outputSubpacket{true, keyExpirationSubpacket, true, keyLifetime}) | ||
712 | } | ||
713 | |||
714 | if sig.IsPrimaryId != nil && *sig.IsPrimaryId { | ||
715 | subpackets = append(subpackets, outputSubpacket{true, primaryUserIdSubpacket, false, []byte{1}}) | ||
716 | } | ||
717 | |||
718 | if len(sig.PreferredSymmetric) > 0 { | ||
719 | subpackets = append(subpackets, outputSubpacket{true, prefSymmetricAlgosSubpacket, false, sig.PreferredSymmetric}) | ||
720 | } | ||
721 | |||
722 | if len(sig.PreferredHash) > 0 { | ||
723 | subpackets = append(subpackets, outputSubpacket{true, prefHashAlgosSubpacket, false, sig.PreferredHash}) | ||
724 | } | ||
725 | |||
726 | if len(sig.PreferredCompression) > 0 { | ||
727 | subpackets = append(subpackets, outputSubpacket{true, prefCompressionSubpacket, false, sig.PreferredCompression}) | ||
728 | } | ||
729 | |||
730 | return | ||
731 | } | ||
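The Sign and Serialize methods above are usually invoked on the caller's behalf by the top-level openpgp package. Below is a hedged sketch of producing an armored detached signature with openpgp.ArmoredDetachSign, which hashes the message and then calls Signature.Sign and Signature.Serialize internally; the keyring path secring.gpg and the assumption that its first entity carries an unencrypted signing key are illustrative only.

```go
package main

import (
	"bytes"
	"log"
	"os"

	"golang.org/x/crypto/openpgp"
)

func main() {
	// "secring.gpg" is a hypothetical binary secret keyring whose first
	// entity has an unencrypted signing key.
	f, err := os.Open("secring.gpg")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	entities, err := openpgp.ReadKeyRing(f)
	if err != nil {
		log.Fatal(err)
	}
	signer := entities[0]

	// ArmoredDetachSign hashes the message, then drives Signature.Sign and
	// Signature.Serialize with the signer's private key. A nil config uses
	// the package defaults.
	message := bytes.NewReader([]byte("hello, world\n"))
	var sig bytes.Buffer
	if err := openpgp.ArmoredDetachSign(&sig, signer, message, nil); err != nil {
		log.Fatal(err)
	}
	os.Stdout.Write(sig.Bytes())
}
```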
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/signature_v3.go b/vendor/golang.org/x/crypto/openpgp/packet/signature_v3.go new file mode 100644 index 0000000..6edff88 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/signature_v3.go | |||
@@ -0,0 +1,146 @@ | |||
1 | // Copyright 2013 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "crypto" | ||
9 | "encoding/binary" | ||
10 | "fmt" | ||
11 | "io" | ||
12 | "strconv" | ||
13 | "time" | ||
14 | |||
15 | "golang.org/x/crypto/openpgp/errors" | ||
16 | "golang.org/x/crypto/openpgp/s2k" | ||
17 | ) | ||
18 | |||
19 | // SignatureV3 represents older version 3 signatures. These signatures are less secure | ||
20 | // than version 4 and should not be used to create new signatures. They are included | ||
21 | // here for backwards compatibility to read and validate with older key material. | ||
22 | // See RFC 4880, section 5.2.2. | ||
23 | type SignatureV3 struct { | ||
24 | SigType SignatureType | ||
25 | CreationTime time.Time | ||
26 | IssuerKeyId uint64 | ||
27 | PubKeyAlgo PublicKeyAlgorithm | ||
28 | Hash crypto.Hash | ||
29 | HashTag [2]byte | ||
30 | |||
31 | RSASignature parsedMPI | ||
32 | DSASigR, DSASigS parsedMPI | ||
33 | } | ||
34 | |||
35 | func (sig *SignatureV3) parse(r io.Reader) (err error) { | ||
36 | // RFC 4880, section 5.2.2 | ||
37 | var buf [8]byte | ||
38 | if _, err = readFull(r, buf[:1]); err != nil { | ||
39 | return | ||
40 | } | ||
41 | if buf[0] < 2 || buf[0] > 3 { | ||
42 | err = errors.UnsupportedError("signature packet version " + strconv.Itoa(int(buf[0]))) | ||
43 | return | ||
44 | } | ||
45 | if _, err = readFull(r, buf[:1]); err != nil { | ||
46 | return | ||
47 | } | ||
48 | if buf[0] != 5 { | ||
49 | err = errors.UnsupportedError( | ||
50 | "invalid hashed material length " + strconv.Itoa(int(buf[0]))) | ||
51 | return | ||
52 | } | ||
53 | |||
54 | // Read hashed material: signature type + creation time | ||
55 | if _, err = readFull(r, buf[:5]); err != nil { | ||
56 | return | ||
57 | } | ||
58 | sig.SigType = SignatureType(buf[0]) | ||
59 | t := binary.BigEndian.Uint32(buf[1:5]) | ||
60 | sig.CreationTime = time.Unix(int64(t), 0) | ||
61 | |||
62 | // Eight-octet Key ID of signer. | ||
63 | if _, err = readFull(r, buf[:8]); err != nil { | ||
64 | return | ||
65 | } | ||
66 | sig.IssuerKeyId = binary.BigEndian.Uint64(buf[:]) | ||
67 | |||
68 | // Public-key and hash algorithm | ||
69 | if _, err = readFull(r, buf[:2]); err != nil { | ||
70 | return | ||
71 | } | ||
72 | sig.PubKeyAlgo = PublicKeyAlgorithm(buf[0]) | ||
73 | switch sig.PubKeyAlgo { | ||
74 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly, PubKeyAlgoDSA: | ||
75 | default: | ||
76 | err = errors.UnsupportedError("public key algorithm " + strconv.Itoa(int(sig.PubKeyAlgo))) | ||
77 | return | ||
78 | } | ||
79 | var ok bool | ||
80 | if sig.Hash, ok = s2k.HashIdToHash(buf[1]); !ok { | ||
81 | return errors.UnsupportedError("hash function " + strconv.Itoa(int(buf[1]))) | ||
82 | } | ||
83 | |||
84 | // Two-octet field holding left 16 bits of signed hash value. | ||
85 | if _, err = readFull(r, sig.HashTag[:2]); err != nil { | ||
86 | return | ||
87 | } | ||
88 | |||
89 | switch sig.PubKeyAlgo { | ||
90 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: | ||
91 | sig.RSASignature.bytes, sig.RSASignature.bitLength, err = readMPI(r) | ||
92 | case PubKeyAlgoDSA: | ||
93 | if sig.DSASigR.bytes, sig.DSASigR.bitLength, err = readMPI(r); err != nil { | ||
94 | return | ||
95 | } | ||
96 | sig.DSASigS.bytes, sig.DSASigS.bitLength, err = readMPI(r) | ||
97 | default: | ||
98 | panic("unreachable") | ||
99 | } | ||
100 | return | ||
101 | } | ||
102 | |||
103 | // Serialize marshals sig to w. Sign, SignUserId or SignKey must have been | ||
104 | // called first. | ||
105 | func (sig *SignatureV3) Serialize(w io.Writer) (err error) { | ||
106 | buf := make([]byte, 8) | ||
107 | |||
108 | // Write the sig type and creation time | ||
109 | buf[0] = byte(sig.SigType) | ||
110 | binary.BigEndian.PutUint32(buf[1:5], uint32(sig.CreationTime.Unix())) | ||
111 | if _, err = w.Write(buf[:5]); err != nil { | ||
112 | return | ||
113 | } | ||
114 | |||
115 | // Write the issuer long key ID | ||
116 | binary.BigEndian.PutUint64(buf[:8], sig.IssuerKeyId) | ||
117 | if _, err = w.Write(buf[:8]); err != nil { | ||
118 | return | ||
119 | } | ||
120 | |||
121 | // Write public key algorithm, hash ID, and hash value | ||
122 | buf[0] = byte(sig.PubKeyAlgo) | ||
123 | hashId, ok := s2k.HashToHashId(sig.Hash) | ||
124 | if !ok { | ||
125 | return errors.UnsupportedError(fmt.Sprintf("hash function %v", sig.Hash)) | ||
126 | } | ||
127 | buf[1] = hashId | ||
128 | copy(buf[2:4], sig.HashTag[:]) | ||
129 | if _, err = w.Write(buf[:4]); err != nil { | ||
130 | return | ||
131 | } | ||
132 | |||
133 | if sig.RSASignature.bytes == nil && sig.DSASigR.bytes == nil { | ||
134 | return errors.InvalidArgumentError("Signature: need to call Sign, SignUserId or SignKey before Serialize") | ||
135 | } | ||
136 | |||
137 | switch sig.PubKeyAlgo { | ||
138 | case PubKeyAlgoRSA, PubKeyAlgoRSASignOnly: | ||
139 | err = writeMPIs(w, sig.RSASignature) | ||
140 | case PubKeyAlgoDSA: | ||
141 | err = writeMPIs(w, sig.DSASigR, sig.DSASigS) | ||
142 | default: | ||
143 | panic("impossible") | ||
144 | } | ||
145 | return | ||
146 | } | ||
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/symmetric_key_encrypted.go b/vendor/golang.org/x/crypto/openpgp/packet/symmetric_key_encrypted.go new file mode 100644 index 0000000..744c2d2 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/symmetric_key_encrypted.go | |||
@@ -0,0 +1,155 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "bytes" | ||
9 | "crypto/cipher" | ||
10 | "io" | ||
11 | "strconv" | ||
12 | |||
13 | "golang.org/x/crypto/openpgp/errors" | ||
14 | "golang.org/x/crypto/openpgp/s2k" | ||
15 | ) | ||
16 | |||
17 | // This is the largest session key that we'll support. Since no 512-bit cipher | ||
18 | // has even been seriously used, this is comfortably large. | ||
19 | const maxSessionKeySizeInBytes = 64 | ||
20 | |||
21 | // SymmetricKeyEncrypted represents a passphrase protected session key. See RFC | ||
22 | // 4880, section 5.3. | ||
23 | type SymmetricKeyEncrypted struct { | ||
24 | CipherFunc CipherFunction | ||
25 | s2k func(out, in []byte) | ||
26 | encryptedKey []byte | ||
27 | } | ||
28 | |||
29 | const symmetricKeyEncryptedVersion = 4 | ||
30 | |||
31 | func (ske *SymmetricKeyEncrypted) parse(r io.Reader) error { | ||
32 | // RFC 4880, section 5.3. | ||
33 | var buf [2]byte | ||
34 | if _, err := readFull(r, buf[:]); err != nil { | ||
35 | return err | ||
36 | } | ||
37 | if buf[0] != symmetricKeyEncryptedVersion { | ||
38 | return errors.UnsupportedError("SymmetricKeyEncrypted version") | ||
39 | } | ||
40 | ske.CipherFunc = CipherFunction(buf[1]) | ||
41 | |||
42 | if ske.CipherFunc.KeySize() == 0 { | ||
43 | return errors.UnsupportedError("unknown cipher: " + strconv.Itoa(int(buf[1]))) | ||
44 | } | ||
45 | |||
46 | var err error | ||
47 | ske.s2k, err = s2k.Parse(r) | ||
48 | if err != nil { | ||
49 | return err | ||
50 | } | ||
51 | |||
52 | encryptedKey := make([]byte, maxSessionKeySizeInBytes) | ||
53 | // The session key may follow. We just have to try and read to find | ||
54 | // out. If it exists then we limit it to maxSessionKeySizeInBytes. | ||
55 | n, err := readFull(r, encryptedKey) | ||
56 | if err != nil && err != io.ErrUnexpectedEOF { | ||
57 | return err | ||
58 | } | ||
59 | |||
60 | if n != 0 { | ||
61 | if n == maxSessionKeySizeInBytes { | ||
62 | return errors.UnsupportedError("oversized encrypted session key") | ||
63 | } | ||
64 | ske.encryptedKey = encryptedKey[:n] | ||
65 | } | ||
66 | |||
67 | return nil | ||
68 | } | ||
69 | |||
70 | // Decrypt attempts to decrypt an encrypted session key and returns the key and | ||
71 | // the cipher to use when decrypting a subsequent Symmetrically Encrypted Data | ||
72 | // packet. | ||
73 | func (ske *SymmetricKeyEncrypted) Decrypt(passphrase []byte) ([]byte, CipherFunction, error) { | ||
74 | key := make([]byte, ske.CipherFunc.KeySize()) | ||
75 | ske.s2k(key, passphrase) | ||
76 | |||
77 | if len(ske.encryptedKey) == 0 { | ||
78 | return key, ske.CipherFunc, nil | ||
79 | } | ||
80 | |||
81 | // the IV is all zeros | ||
82 | iv := make([]byte, ske.CipherFunc.blockSize()) | ||
83 | c := cipher.NewCFBDecrypter(ske.CipherFunc.new(key), iv) | ||
84 | plaintextKey := make([]byte, len(ske.encryptedKey)) | ||
85 | c.XORKeyStream(plaintextKey, ske.encryptedKey) | ||
86 | cipherFunc := CipherFunction(plaintextKey[0]) | ||
87 | if cipherFunc.blockSize() == 0 { | ||
88 | return nil, ske.CipherFunc, errors.UnsupportedError("unknown cipher: " + strconv.Itoa(int(cipherFunc))) | ||
89 | } | ||
90 | plaintextKey = plaintextKey[1:] | ||
91 | if l, cipherKeySize := len(plaintextKey), cipherFunc.KeySize(); l != cipherKeySize { | ||
92 | return nil, cipherFunc, errors.StructuralError("length of decrypted key (" + strconv.Itoa(l) + ") " + | ||
93 | "not equal to cipher keysize (" + strconv.Itoa(cipherKeySize) + ")") | ||
94 | } | ||
95 | return plaintextKey, cipherFunc, nil | ||
96 | } | ||
97 | |||
98 | // SerializeSymmetricKeyEncrypted serializes a symmetric key packet to w. The | ||
99 | // packet contains a random session key, encrypted by a key derived from the | ||
100 | // given passphrase. The session key is returned and must be passed to | ||
101 | // SerializeSymmetricallyEncrypted. | ||
102 | // If config is nil, sensible defaults will be used. | ||
103 | func SerializeSymmetricKeyEncrypted(w io.Writer, passphrase []byte, config *Config) (key []byte, err error) { | ||
104 | cipherFunc := config.Cipher() | ||
105 | keySize := cipherFunc.KeySize() | ||
106 | if keySize == 0 { | ||
107 | return nil, errors.UnsupportedError("unknown cipher: " + strconv.Itoa(int(cipherFunc))) | ||
108 | } | ||
109 | |||
110 | s2kBuf := new(bytes.Buffer) | ||
111 | keyEncryptingKey := make([]byte, keySize) | ||
112 | // s2k.Serialize salts and stretches the passphrase, and writes the | ||
113 | // resulting key to keyEncryptingKey and the s2k descriptor to s2kBuf. | ||
114 | err = s2k.Serialize(s2kBuf, keyEncryptingKey, config.Random(), passphrase, &s2k.Config{Hash: config.Hash(), S2KCount: config.PasswordHashIterations()}) | ||
115 | if err != nil { | ||
116 | return | ||
117 | } | ||
118 | s2kBytes := s2kBuf.Bytes() | ||
119 | |||
120 | packetLength := 2 /* header */ + len(s2kBytes) + 1 /* cipher type */ + keySize | ||
121 | err = serializeHeader(w, packetTypeSymmetricKeyEncrypted, packetLength) | ||
122 | if err != nil { | ||
123 | return | ||
124 | } | ||
125 | |||
126 | var buf [2]byte | ||
127 | buf[0] = symmetricKeyEncryptedVersion | ||
128 | buf[1] = byte(cipherFunc) | ||
129 | _, err = w.Write(buf[:]) | ||
130 | if err != nil { | ||
131 | return | ||
132 | } | ||
133 | _, err = w.Write(s2kBytes) | ||
134 | if err != nil { | ||
135 | return | ||
136 | } | ||
137 | |||
138 | sessionKey := make([]byte, keySize) | ||
139 | _, err = io.ReadFull(config.Random(), sessionKey) | ||
140 | if err != nil { | ||
141 | return | ||
142 | } | ||
143 | iv := make([]byte, cipherFunc.blockSize()) | ||
144 | c := cipher.NewCFBEncrypter(cipherFunc.new(keyEncryptingKey), iv) | ||
145 | encryptedCipherAndKey := make([]byte, keySize+1) | ||
146 | c.XORKeyStream(encryptedCipherAndKey, buf[1:]) | ||
147 | c.XORKeyStream(encryptedCipherAndKey[1:], sessionKey) | ||
148 | _, err = w.Write(encryptedCipherAndKey) | ||
149 | if err != nil { | ||
150 | return | ||
151 | } | ||
152 | |||
153 | key = sessionKey | ||
154 | return | ||
155 | } | ||
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/symmetrically_encrypted.go b/vendor/golang.org/x/crypto/openpgp/packet/symmetrically_encrypted.go new file mode 100644 index 0000000..6126030 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/symmetrically_encrypted.go | |||
@@ -0,0 +1,290 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "crypto/cipher" | ||
9 | "crypto/sha1" | ||
10 | "crypto/subtle" | ||
11 | "golang.org/x/crypto/openpgp/errors" | ||
12 | "hash" | ||
13 | "io" | ||
14 | "strconv" | ||
15 | ) | ||
16 | |||
17 | // SymmetricallyEncrypted represents a symmetrically encrypted byte string. The | ||
18 | // encrypted contents will consist of more OpenPGP packets. See RFC 4880, | ||
19 | // sections 5.7 and 5.13. | ||
20 | type SymmetricallyEncrypted struct { | ||
21 | MDC bool // true iff this is a type 18 packet and thus has an embedded MAC. | ||
22 | contents io.Reader | ||
23 | prefix []byte | ||
24 | } | ||
25 | |||
26 | const symmetricallyEncryptedVersion = 1 | ||
27 | |||
28 | func (se *SymmetricallyEncrypted) parse(r io.Reader) error { | ||
29 | if se.MDC { | ||
30 | // See RFC 4880, section 5.13. | ||
31 | var buf [1]byte | ||
32 | _, err := readFull(r, buf[:]) | ||
33 | if err != nil { | ||
34 | return err | ||
35 | } | ||
36 | if buf[0] != symmetricallyEncryptedVersion { | ||
37 | return errors.UnsupportedError("unknown SymmetricallyEncrypted version") | ||
38 | } | ||
39 | } | ||
40 | se.contents = r | ||
41 | return nil | ||
42 | } | ||
43 | |||
44 | // Decrypt returns a ReadCloser, from which the decrypted contents of the | ||
45 | // packet can be read. An incorrect key can, with high probability, be detected | ||
46 | // immediately and this will result in a KeyIncorrect error being returned. | ||
47 | func (se *SymmetricallyEncrypted) Decrypt(c CipherFunction, key []byte) (io.ReadCloser, error) { | ||
48 | keySize := c.KeySize() | ||
49 | if keySize == 0 { | ||
50 | return nil, errors.UnsupportedError("unknown cipher: " + strconv.Itoa(int(c))) | ||
51 | } | ||
52 | if len(key) != keySize { | ||
53 | return nil, errors.InvalidArgumentError("SymmetricallyEncrypted: incorrect key length") | ||
54 | } | ||
55 | |||
56 | if se.prefix == nil { | ||
57 | se.prefix = make([]byte, c.blockSize()+2) | ||
58 | _, err := readFull(se.contents, se.prefix) | ||
59 | if err != nil { | ||
60 | return nil, err | ||
61 | } | ||
62 | } else if len(se.prefix) != c.blockSize()+2 { | ||
63 | return nil, errors.InvalidArgumentError("can't try ciphers with different block lengths") | ||
64 | } | ||
65 | |||
66 | ocfbResync := OCFBResync | ||
67 | if se.MDC { | ||
68 | // MDC packets use a different form of OCFB mode. | ||
69 | ocfbResync = OCFBNoResync | ||
70 | } | ||
71 | |||
72 | s := NewOCFBDecrypter(c.new(key), se.prefix, ocfbResync) | ||
73 | if s == nil { | ||
74 | return nil, errors.ErrKeyIncorrect | ||
75 | } | ||
76 | |||
77 | plaintext := cipher.StreamReader{S: s, R: se.contents} | ||
78 | |||
79 | if se.MDC { | ||
80 | // MDC packets have an embedded hash that we need to check. | ||
81 | h := sha1.New() | ||
82 | h.Write(se.prefix) | ||
83 | return &seMDCReader{in: plaintext, h: h}, nil | ||
84 | } | ||
85 | |||
86 | // Otherwise, we just need to wrap plaintext so that it's a valid ReadCloser. | ||
87 | return seReader{plaintext}, nil | ||
88 | } | ||
89 | |||
90 | // seReader wraps an io.Reader with a no-op Close method. | ||
91 | type seReader struct { | ||
92 | in io.Reader | ||
93 | } | ||
94 | |||
95 | func (ser seReader) Read(buf []byte) (int, error) { | ||
96 | return ser.in.Read(buf) | ||
97 | } | ||
98 | |||
99 | func (ser seReader) Close() error { | ||
100 | return nil | ||
101 | } | ||
102 | |||
103 | const mdcTrailerSize = 1 /* tag byte */ + 1 /* length byte */ + sha1.Size | ||
104 | |||
105 | // An seMDCReader wraps an io.Reader, maintains a running hash and keeps hold | ||
106 | // of the most recent 22 bytes (mdcTrailerSize). Upon EOF, those bytes form an | ||
107 | // MDC packet containing a hash of the previous contents which is checked | ||
108 | // against the running hash. See RFC 4880, section 5.13. | ||
109 | type seMDCReader struct { | ||
110 | in io.Reader | ||
111 | h hash.Hash | ||
112 | trailer [mdcTrailerSize]byte | ||
113 | scratch [mdcTrailerSize]byte | ||
114 | trailerUsed int | ||
115 | error bool | ||
116 | eof bool | ||
117 | } | ||
118 | |||
119 | func (ser *seMDCReader) Read(buf []byte) (n int, err error) { | ||
120 | if ser.error { | ||
121 | err = io.ErrUnexpectedEOF | ||
122 | return | ||
123 | } | ||
124 | if ser.eof { | ||
125 | err = io.EOF | ||
126 | return | ||
127 | } | ||
128 | |||
129 | // If we haven't yet filled the trailer buffer then we must do that | ||
130 | // first. | ||
131 | for ser.trailerUsed < mdcTrailerSize { | ||
132 | n, err = ser.in.Read(ser.trailer[ser.trailerUsed:]) | ||
133 | ser.trailerUsed += n | ||
134 | if err == io.EOF { | ||
135 | if ser.trailerUsed != mdcTrailerSize { | ||
136 | n = 0 | ||
137 | err = io.ErrUnexpectedEOF | ||
138 | ser.error = true | ||
139 | return | ||
140 | } | ||
141 | ser.eof = true | ||
142 | n = 0 | ||
143 | return | ||
144 | } | ||
145 | |||
146 | if err != nil { | ||
147 | n = 0 | ||
148 | return | ||
149 | } | ||
150 | } | ||
151 | |||
152 | // If it's a short read then we read into a temporary buffer and shift | ||
153 | // the data into the caller's buffer. | ||
154 | if len(buf) <= mdcTrailerSize { | ||
155 | n, err = readFull(ser.in, ser.scratch[:len(buf)]) | ||
156 | copy(buf, ser.trailer[:n]) | ||
157 | ser.h.Write(buf[:n]) | ||
158 | copy(ser.trailer[:], ser.trailer[n:]) | ||
159 | copy(ser.trailer[mdcTrailerSize-n:], ser.scratch[:]) | ||
160 | if n < len(buf) { | ||
161 | ser.eof = true | ||
162 | err = io.EOF | ||
163 | } | ||
164 | return | ||
165 | } | ||
166 | |||
167 | n, err = ser.in.Read(buf[mdcTrailerSize:]) | ||
168 | copy(buf, ser.trailer[:]) | ||
169 | ser.h.Write(buf[:n]) | ||
170 | copy(ser.trailer[:], buf[n:]) | ||
171 | |||
172 | if err == io.EOF { | ||
173 | ser.eof = true | ||
174 | } | ||
175 | return | ||
176 | } | ||
177 | |||
178 | // This is a new-format packet tag byte for a type 19 (MDC) packet. | ||
179 | const mdcPacketTagByte = byte(0x80) | 0x40 | 19 | ||
180 | |||
181 | func (ser *seMDCReader) Close() error { | ||
182 | if ser.error { | ||
183 | return errors.SignatureError("error during reading") | ||
184 | } | ||
185 | |||
186 | for !ser.eof { | ||
187 | // We haven't seen EOF so we need to read to the end | ||
188 | var buf [1024]byte | ||
189 | _, err := ser.Read(buf[:]) | ||
190 | if err == io.EOF { | ||
191 | break | ||
192 | } | ||
193 | if err != nil { | ||
194 | return errors.SignatureError("error during reading") | ||
195 | } | ||
196 | } | ||
197 | |||
198 | if ser.trailer[0] != mdcPacketTagByte || ser.trailer[1] != sha1.Size { | ||
199 | return errors.SignatureError("MDC packet not found") | ||
200 | } | ||
201 | ser.h.Write(ser.trailer[:2]) | ||
202 | |||
203 | final := ser.h.Sum(nil) | ||
204 | if subtle.ConstantTimeCompare(final, ser.trailer[2:]) != 1 { | ||
205 | return errors.SignatureError("hash mismatch") | ||
206 | } | ||
207 | return nil | ||
208 | } | ||
209 | |||
210 | // An seMDCWriter writes through to an io.WriteCloser while maintaining a running | ||
211 | // hash of the data written. On close, it emits an MDC packet containing the | ||
212 | // running hash. | ||
213 | type seMDCWriter struct { | ||
214 | w io.WriteCloser | ||
215 | h hash.Hash | ||
216 | } | ||
217 | |||
218 | func (w *seMDCWriter) Write(buf []byte) (n int, err error) { | ||
219 | w.h.Write(buf) | ||
220 | return w.w.Write(buf) | ||
221 | } | ||
222 | |||
223 | func (w *seMDCWriter) Close() (err error) { | ||
224 | var buf [mdcTrailerSize]byte | ||
225 | |||
226 | buf[0] = mdcPacketTagByte | ||
227 | buf[1] = sha1.Size | ||
228 | w.h.Write(buf[:2]) | ||
229 | digest := w.h.Sum(nil) | ||
230 | copy(buf[2:], digest) | ||
231 | |||
232 | _, err = w.w.Write(buf[:]) | ||
233 | if err != nil { | ||
234 | return | ||
235 | } | ||
236 | return w.w.Close() | ||
237 | } | ||
238 | |||
239 | // noOpCloser is like an ioutil.NopCloser, but for an io.Writer. | ||
240 | type noOpCloser struct { | ||
241 | w io.Writer | ||
242 | } | ||
243 | |||
244 | func (c noOpCloser) Write(data []byte) (n int, err error) { | ||
245 | return c.w.Write(data) | ||
246 | } | ||
247 | |||
248 | func (c noOpCloser) Close() error { | ||
249 | return nil | ||
250 | } | ||
251 | |||
252 | // SerializeSymmetricallyEncrypted serializes a symmetrically encrypted packet | ||
253 | // to w and returns a WriteCloser to which the to-be-encrypted packets can be | ||
254 | // written. | ||
255 | // If config is nil, sensible defaults will be used. | ||
256 | func SerializeSymmetricallyEncrypted(w io.Writer, c CipherFunction, key []byte, config *Config) (contents io.WriteCloser, err error) { | ||
257 | if c.KeySize() != len(key) { | ||
258 | return nil, errors.InvalidArgumentError("SymmetricallyEncrypted.Serialize: bad key length") | ||
259 | } | ||
260 | writeCloser := noOpCloser{w} | ||
261 | ciphertext, err := serializeStreamHeader(writeCloser, packetTypeSymmetricallyEncryptedMDC) | ||
262 | if err != nil { | ||
263 | return | ||
264 | } | ||
265 | |||
266 | _, err = ciphertext.Write([]byte{symmetricallyEncryptedVersion}) | ||
267 | if err != nil { | ||
268 | return | ||
269 | } | ||
270 | |||
271 | block := c.new(key) | ||
272 | blockSize := block.BlockSize() | ||
273 | iv := make([]byte, blockSize) | ||
274 | _, err = config.Random().Read(iv) | ||
275 | if err != nil { | ||
276 | return | ||
277 | } | ||
278 | s, prefix := NewOCFBEncrypter(block, iv, OCFBNoResync) | ||
279 | _, err = ciphertext.Write(prefix) | ||
280 | if err != nil { | ||
281 | return | ||
282 | } | ||
283 | plaintext := cipher.StreamWriter{S: s, W: ciphertext} | ||
284 | |||
285 | h := sha1.New() | ||
286 | h.Write(iv) | ||
287 | h.Write(iv[blockSize-2:]) | ||
288 | contents = &seMDCWriter{w: plaintext, h: h} | ||
289 | return | ||
290 | } | ||
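A corresponding read-side sketch, assuming the *SymmetricallyEncrypted packet has already been parsed (for example by the packet reader used in read.go below) and that the cipher and session key came from SymmetricKeyEncrypted.Decrypt; the function name is illustrative only.

package main

import (
	"fmt"
	"io/ioutil"

	"golang.org/x/crypto/openpgp/packet"
)

// decryptSEPacket shows the read path documented above: given a parsed
// SymmetricallyEncrypted packet plus the cipher and session key recovered by
// SymmetricKeyEncrypted.Decrypt, it returns the decrypted inner packet bytes.
func decryptSEPacket(p packet.Packet, c packet.CipherFunction, key []byte) ([]byte, error) {
	se, ok := p.(*packet.SymmetricallyEncrypted)
	if !ok {
		return nil, fmt.Errorf("not a symmetrically encrypted packet: %T", p)
	}
	rc, err := se.Decrypt(c, key)
	if err != nil {
		return nil, err
	}
	contents, err := ioutil.ReadAll(rc)
	if err != nil {
		return nil, err
	}
	// For MDC-protected packets, Close recomputes the SHA-1 hash over the
	// plaintext and reports a SignatureError on mismatch.
	if err := rc.Close(); err != nil {
		return nil, err
	}
	return contents, nil
}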
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/userattribute.go b/vendor/golang.org/x/crypto/openpgp/packet/userattribute.go new file mode 100644 index 0000000..96a2b38 --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/userattribute.go | |||
@@ -0,0 +1,91 @@ | |||
1 | // Copyright 2013 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "bytes" | ||
9 | "image" | ||
10 | "image/jpeg" | ||
11 | "io" | ||
12 | "io/ioutil" | ||
13 | ) | ||
14 | |||
15 | const UserAttrImageSubpacket = 1 | ||
16 | |||
17 | // UserAttribute is capable of storing other types of data about a user | ||
18 | // beyond name, email and a text comment. In practice, user attributes are typically used | ||
19 | // to store a signed thumbnail photo JPEG image of the user. | ||
20 | // See RFC 4880, section 5.12. | ||
21 | type UserAttribute struct { | ||
22 | Contents []*OpaqueSubpacket | ||
23 | } | ||
24 | |||
25 | // NewUserAttributePhoto creates a user attribute packet | ||
26 | // containing the given images. | ||
27 | func NewUserAttributePhoto(photos ...image.Image) (uat *UserAttribute, err error) { | ||
28 | uat = new(UserAttribute) | ||
29 | for _, photo := range photos { | ||
30 | var buf bytes.Buffer | ||
31 | // RFC 4880, Section 5.12.1. | ||
32 | data := []byte{ | ||
33 | 0x10, 0x00, // Little-endian image header length (16 bytes) | ||
34 | 0x01, // Image header version 1 | ||
35 | 0x01, // JPEG | ||
36 | 0, 0, 0, 0, // 12 reserved octets, must be all zero. | ||
37 | 0, 0, 0, 0, | ||
38 | 0, 0, 0, 0} | ||
39 | if _, err = buf.Write(data); err != nil { | ||
40 | return | ||
41 | } | ||
42 | if err = jpeg.Encode(&buf, photo, nil); err != nil { | ||
43 | return | ||
44 | } | ||
45 | uat.Contents = append(uat.Contents, &OpaqueSubpacket{ | ||
46 | SubType: UserAttrImageSubpacket, | ||
47 | Contents: buf.Bytes()}) | ||
48 | } | ||
49 | return | ||
50 | } | ||
51 | |||
52 | // NewUserAttribute creates a new user attribute packet containing the given subpackets. | ||
53 | func NewUserAttribute(contents ...*OpaqueSubpacket) *UserAttribute { | ||
54 | return &UserAttribute{Contents: contents} | ||
55 | } | ||
56 | |||
57 | func (uat *UserAttribute) parse(r io.Reader) (err error) { | ||
58 | // RFC 4880, section 5.12 | ||
59 | b, err := ioutil.ReadAll(r) | ||
60 | if err != nil { | ||
61 | return | ||
62 | } | ||
63 | uat.Contents, err = OpaqueSubpackets(b) | ||
64 | return | ||
65 | } | ||
66 | |||
67 | // Serialize marshals the user attribute to w in the form of an OpenPGP packet, including | ||
68 | // header. | ||
69 | func (uat *UserAttribute) Serialize(w io.Writer) (err error) { | ||
70 | var buf bytes.Buffer | ||
71 | for _, sp := range uat.Contents { | ||
72 | sp.Serialize(&buf) | ||
73 | } | ||
74 | if err = serializeHeader(w, packetTypeUserAttribute, buf.Len()); err != nil { | ||
75 | return err | ||
76 | } | ||
77 | _, err = w.Write(buf.Bytes()) | ||
78 | return | ||
79 | } | ||
80 | |||
81 | // ImageData returns zero or more byte slices, each containing | ||
82 | // JPEG File Interchange Format (JFIF), for each photo in the | ||
83 | // user attribute packet. | ||
84 | func (uat *UserAttribute) ImageData() (imageData [][]byte) { | ||
85 | for _, sp := range uat.Contents { | ||
86 | if sp.SubType == UserAttrImageSubpacket && len(sp.Contents) > 16 { | ||
87 | imageData = append(imageData, sp.Contents[16:]) | ||
88 | } | ||
89 | } | ||
90 | return | ||
91 | } | ||
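A small hedged round trip of the photo-attribute API above; the 1x1 placeholder image and the function name are illustrative only.

package main

import (
	"bytes"
	"image"
	"image/color"

	"golang.org/x/crypto/openpgp/packet"
)

// photoAttributeRoundTrip builds a photo user attribute from an in-memory
// image, serializes it as an OpenPGP packet, and extracts the embedded JPEG
// bytes again via ImageData.
func photoAttributeRoundTrip() (packetBytes []byte, jpegs [][]byte, err error) {
	img := image.NewRGBA(image.Rect(0, 0, 1, 1))
	img.Set(0, 0, color.RGBA{R: 255, A: 255})

	uat, err := packet.NewUserAttributePhoto(img)
	if err != nil {
		return nil, nil, err
	}
	var buf bytes.Buffer
	if err := uat.Serialize(&buf); err != nil {
		return nil, nil, err
	}
	// ImageData strips the 16-byte image header from each photo subpacket,
	// leaving the raw JPEG (JFIF) data.
	return buf.Bytes(), uat.ImageData(), nil
}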
diff --git a/vendor/golang.org/x/crypto/openpgp/packet/userid.go b/vendor/golang.org/x/crypto/openpgp/packet/userid.go new file mode 100644 index 0000000..d6bea7d --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/packet/userid.go | |||
@@ -0,0 +1,160 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package packet | ||
6 | |||
7 | import ( | ||
8 | "io" | ||
9 | "io/ioutil" | ||
10 | "strings" | ||
11 | ) | ||
12 | |||
13 | // UserId contains text that is intended to represent the name and email | ||
14 | // address of the key holder. See RFC 4880, section 5.11. By convention, this | ||
15 | // takes the form "Full Name (Comment) <email@example.com>" | ||
16 | type UserId struct { | ||
17 | Id string // By convention, this takes the form "Full Name (Comment) <email@example.com>" which is split out in the fields below. | ||
18 | |||
19 | Name, Comment, Email string | ||
20 | } | ||
21 | |||
22 | func hasInvalidCharacters(s string) bool { | ||
23 | for _, c := range s { | ||
24 | switch c { | ||
25 | case '(', ')', '<', '>', 0: | ||
26 | return true | ||
27 | } | ||
28 | } | ||
29 | return false | ||
30 | } | ||
31 | |||
32 | // NewUserId returns a UserId or nil if any of the arguments contain invalid | ||
33 | // characters. The invalid characters are '\x00', '(', ')', '<' and '>' | ||
34 | func NewUserId(name, comment, email string) *UserId { | ||
35 | // RFC 4880 doesn't deal with the structure of userid strings; the | ||
36 | // name, comment and email form is just a convention. However, there's | ||
37 | // no convention about escaping the metacharacters and GPG just refuses | ||
38 | // to create user ids where, say, the name contains a '('. We mirror | ||
39 | // this behaviour. | ||
40 | |||
41 | if hasInvalidCharacters(name) || hasInvalidCharacters(comment) || hasInvalidCharacters(email) { | ||
42 | return nil | ||
43 | } | ||
44 | |||
45 | uid := new(UserId) | ||
46 | uid.Name, uid.Comment, uid.Email = name, comment, email | ||
47 | uid.Id = name | ||
48 | if len(comment) > 0 { | ||
49 | if len(uid.Id) > 0 { | ||
50 | uid.Id += " " | ||
51 | } | ||
52 | uid.Id += "(" | ||
53 | uid.Id += comment | ||
54 | uid.Id += ")" | ||
55 | } | ||
56 | if len(email) > 0 { | ||
57 | if len(uid.Id) > 0 { | ||
58 | uid.Id += " " | ||
59 | } | ||
60 | uid.Id += "<" | ||
61 | uid.Id += email | ||
62 | uid.Id += ">" | ||
63 | } | ||
64 | return uid | ||
65 | } | ||
66 | |||
67 | func (uid *UserId) parse(r io.Reader) (err error) { | ||
68 | // RFC 4880, section 5.11 | ||
69 | b, err := ioutil.ReadAll(r) | ||
70 | if err != nil { | ||
71 | return | ||
72 | } | ||
73 | uid.Id = string(b) | ||
74 | uid.Name, uid.Comment, uid.Email = parseUserId(uid.Id) | ||
75 | return | ||
76 | } | ||
77 | |||
78 | // Serialize marshals uid to w in the form of an OpenPGP packet, including | ||
79 | // header. | ||
80 | func (uid *UserId) Serialize(w io.Writer) error { | ||
81 | err := serializeHeader(w, packetTypeUserId, len(uid.Id)) | ||
82 | if err != nil { | ||
83 | return err | ||
84 | } | ||
85 | _, err = w.Write([]byte(uid.Id)) | ||
86 | return err | ||
87 | } | ||
88 | |||
89 | // parseUserId extracts the name, comment and email from a user id string that | ||
90 | // is formatted as "Full Name (Comment) <email@example.com>". | ||
91 | func parseUserId(id string) (name, comment, email string) { | ||
92 | var n, c, e struct { | ||
93 | start, end int | ||
94 | } | ||
95 | var state int | ||
96 | |||
97 | for offset, rune := range id { | ||
98 | switch state { | ||
99 | case 0: | ||
100 | // Entering name | ||
101 | n.start = offset | ||
102 | state = 1 | ||
103 | fallthrough | ||
104 | case 1: | ||
105 | // In name | ||
106 | if rune == '(' { | ||
107 | state = 2 | ||
108 | n.end = offset | ||
109 | } else if rune == '<' { | ||
110 | state = 5 | ||
111 | n.end = offset | ||
112 | } | ||
113 | case 2: | ||
114 | // Entering comment | ||
115 | c.start = offset | ||
116 | state = 3 | ||
117 | fallthrough | ||
118 | case 3: | ||
119 | // In comment | ||
120 | if rune == ')' { | ||
121 | state = 4 | ||
122 | c.end = offset | ||
123 | } | ||
124 | case 4: | ||
125 | // Between comment and email | ||
126 | if rune == '<' { | ||
127 | state = 5 | ||
128 | } | ||
129 | case 5: | ||
130 | // Entering email | ||
131 | e.start = offset | ||
132 | state = 6 | ||
133 | fallthrough | ||
134 | case 6: | ||
135 | // In email | ||
136 | if rune == '>' { | ||
137 | state = 7 | ||
138 | e.end = offset | ||
139 | } | ||
140 | default: | ||
141 | // After email | ||
142 | } | ||
143 | } | ||
144 | switch state { | ||
145 | case 1: | ||
146 | // ended in the name | ||
147 | n.end = len(id) | ||
148 | case 3: | ||
149 | // ended in comment | ||
150 | c.end = len(id) | ||
151 | case 6: | ||
152 | // ended in email | ||
153 | e.end = len(id) | ||
154 | } | ||
155 | |||
156 | name = strings.TrimSpace(id[n.start:n.end]) | ||
157 | comment = strings.TrimSpace(id[c.start:c.end]) | ||
158 | email = strings.TrimSpace(id[e.start:e.end]) | ||
159 | return | ||
160 | } | ||
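A short sketch of the user id convention the comments above describe; the name, comment, and address are placeholders.

package main

import (
	"fmt"

	"golang.org/x/crypto/openpgp/packet"
)

func main() {
	// NewUserId assembles the conventional "Full Name (Comment) <email>" form
	// and returns nil if any field contains '(', ')', '<', '>' or a NUL byte.
	uid := packet.NewUserId("Alice Example", "work key", "alice@example.com")
	if uid == nil {
		fmt.Println("invalid characters in user id fields")
		return
	}
	fmt.Println(uid.Id)                           // Alice Example (work key) <alice@example.com>
	fmt.Println(uid.Name, uid.Comment, uid.Email) // the parsed-out components
}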
diff --git a/vendor/golang.org/x/crypto/openpgp/read.go b/vendor/golang.org/x/crypto/openpgp/read.go new file mode 100644 index 0000000..6ec664f --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/read.go | |||
@@ -0,0 +1,442 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | // Package openpgp implements high level operations on OpenPGP messages. | ||
6 | package openpgp // import "golang.org/x/crypto/openpgp" | ||
7 | |||
8 | import ( | ||
9 | "crypto" | ||
10 | _ "crypto/sha256" | ||
11 | "hash" | ||
12 | "io" | ||
13 | "strconv" | ||
14 | |||
15 | "golang.org/x/crypto/openpgp/armor" | ||
16 | "golang.org/x/crypto/openpgp/errors" | ||
17 | "golang.org/x/crypto/openpgp/packet" | ||
18 | ) | ||
19 | |||
20 | // SignatureType is the armor type for a PGP signature. | ||
21 | var SignatureType = "PGP SIGNATURE" | ||
22 | |||
23 | // readArmored reads an armored block with the given type. | ||
24 | func readArmored(r io.Reader, expectedType string) (body io.Reader, err error) { | ||
25 | block, err := armor.Decode(r) | ||
26 | if err != nil { | ||
27 | return | ||
28 | } | ||
29 | |||
30 | if block.Type != expectedType { | ||
31 | return nil, errors.InvalidArgumentError("expected '" + expectedType + "', got: " + block.Type) | ||
32 | } | ||
33 | |||
34 | return block.Body, nil | ||
35 | } | ||
36 | |||
37 | // MessageDetails contains the result of parsing an OpenPGP encrypted and/or | ||
38 | // signed message. | ||
39 | type MessageDetails struct { | ||
40 | IsEncrypted bool // true if the message was encrypted. | ||
41 | EncryptedToKeyIds []uint64 // the list of recipient key ids. | ||
42 | IsSymmetricallyEncrypted bool // true if a passphrase could have decrypted the message. | ||
43 | DecryptedWith Key // the private key used to decrypt the message, if any. | ||
44 | IsSigned bool // true if the message is signed. | ||
45 | SignedByKeyId uint64 // the key id of the signer, if any. | ||
46 | SignedBy *Key // the key of the signer, if available. | ||
47 | LiteralData *packet.LiteralData // the metadata of the contents | ||
48 | UnverifiedBody io.Reader // the contents of the message. | ||
49 | |||
50 | // If IsSigned is true and SignedBy is non-zero then the signature will | ||
51 | // be verified as UnverifiedBody is read. The signature cannot be | ||
52 | // checked until the whole of UnverifiedBody is read so UnverifiedBody | ||
53 | // must be consumed until EOF before the data can be trusted. Even if a | ||
54 | // message isn't signed (or the signer is unknown) the data may contain | ||
55 | // an authentication code that is only checked once UnverifiedBody has | ||
56 | // been consumed. Once EOF has been seen, the following fields are | ||
57 | // valid. (An authentication code failure is reported as a | ||
58 | // SignatureError error when reading from UnverifiedBody.) | ||
59 | SignatureError error // nil if the signature is good. | ||
60 | Signature *packet.Signature // the signature packet itself, if v4 (default) | ||
61 | SignatureV3 *packet.SignatureV3 // the signature packet if it is a v2 or v3 signature | ||
62 | |||
63 | decrypted io.ReadCloser | ||
64 | } | ||
65 | |||
66 | // A PromptFunction is used as a callback by functions that may need to decrypt | ||
67 | // a private key, or prompt for a passphrase. It is called with a list of | ||
68 | // acceptable, encrypted private keys and a boolean that indicates whether a | ||
69 | // passphrase is usable. It should either decrypt a private key or return a | ||
70 | // passphrase to try. If the decrypted private key or given passphrase isn't | ||
71 | // correct, the function will be called again, forever. Any error returned will | ||
72 | // be passed up. | ||
73 | type PromptFunction func(keys []Key, symmetric bool) ([]byte, error) | ||
74 | |||
75 | // A keyEnvelopePair is used to store a private key with the envelope that | ||
76 | // contains a symmetric key, encrypted with that key. | ||
77 | type keyEnvelopePair struct { | ||
78 | key Key | ||
79 | encryptedKey *packet.EncryptedKey | ||
80 | } | ||
81 | |||
82 | // ReadMessage parses an OpenPGP message that may be signed and/or encrypted. | ||
83 | // The given KeyRing should contain both public keys (for signature | ||
84 | // verification) and, possibly encrypted, private keys for decrypting. | ||
85 | // If config is nil, sensible defaults will be used. | ||
86 | func ReadMessage(r io.Reader, keyring KeyRing, prompt PromptFunction, config *packet.Config) (md *MessageDetails, err error) { | ||
87 | var p packet.Packet | ||
88 | |||
89 | var symKeys []*packet.SymmetricKeyEncrypted | ||
90 | var pubKeys []keyEnvelopePair | ||
91 | var se *packet.SymmetricallyEncrypted | ||
92 | |||
93 | packets := packet.NewReader(r) | ||
94 | md = new(MessageDetails) | ||
95 | md.IsEncrypted = true | ||
96 | |||
97 | // The message, if encrypted, starts with a number of packets | ||
98 | // containing an encrypted decryption key. The decryption key is either | ||
99 | // encrypted to a public key, or with a passphrase. This loop | ||
100 | // collects these packets. | ||
101 | ParsePackets: | ||
102 | for { | ||
103 | p, err = packets.Next() | ||
104 | if err != nil { | ||
105 | return nil, err | ||
106 | } | ||
107 | switch p := p.(type) { | ||
108 | case *packet.SymmetricKeyEncrypted: | ||
109 | // This packet contains the decryption key encrypted with a passphrase. | ||
110 | md.IsSymmetricallyEncrypted = true | ||
111 | symKeys = append(symKeys, p) | ||
112 | case *packet.EncryptedKey: | ||
113 | // This packet contains the decryption key encrypted to a public key. | ||
114 | md.EncryptedToKeyIds = append(md.EncryptedToKeyIds, p.KeyId) | ||
115 | switch p.Algo { | ||
116 | case packet.PubKeyAlgoRSA, packet.PubKeyAlgoRSAEncryptOnly, packet.PubKeyAlgoElGamal: | ||
117 | break | ||
118 | default: | ||
119 | continue | ||
120 | } | ||
121 | var keys []Key | ||
122 | if p.KeyId == 0 { | ||
123 | keys = keyring.DecryptionKeys() | ||
124 | } else { | ||
125 | keys = keyring.KeysById(p.KeyId) | ||
126 | } | ||
127 | for _, k := range keys { | ||
128 | pubKeys = append(pubKeys, keyEnvelopePair{k, p}) | ||
129 | } | ||
130 | case *packet.SymmetricallyEncrypted: | ||
131 | se = p | ||
132 | break ParsePackets | ||
133 | case *packet.Compressed, *packet.LiteralData, *packet.OnePassSignature: | ||
134 | // This message isn't encrypted. | ||
135 | if len(symKeys) != 0 || len(pubKeys) != 0 { | ||
136 | return nil, errors.StructuralError("key material not followed by encrypted message") | ||
137 | } | ||
138 | packets.Unread(p) | ||
139 | return readSignedMessage(packets, nil, keyring) | ||
140 | } | ||
141 | } | ||
142 | |||
143 | var candidates []Key | ||
144 | var decrypted io.ReadCloser | ||
145 | |||
146 | // Now that we have the list of encrypted keys we need to decrypt at | ||
147 | // least one of them or, if we cannot, we need to call the prompt | ||
148 | // function so that it can decrypt a key or give us a passphrase. | ||
149 | FindKey: | ||
150 | for { | ||
151 | // See if any of the keys already have a private key available | ||
152 | candidates = candidates[:0] | ||
153 | candidateFingerprints := make(map[string]bool) | ||
154 | |||
155 | for _, pk := range pubKeys { | ||
156 | if pk.key.PrivateKey == nil { | ||
157 | continue | ||
158 | } | ||
159 | if !pk.key.PrivateKey.Encrypted { | ||
160 | if len(pk.encryptedKey.Key) == 0 { | ||
161 | pk.encryptedKey.Decrypt(pk.key.PrivateKey, config) | ||
162 | } | ||
163 | if len(pk.encryptedKey.Key) == 0 { | ||
164 | continue | ||
165 | } | ||
166 | decrypted, err = se.Decrypt(pk.encryptedKey.CipherFunc, pk.encryptedKey.Key) | ||
167 | if err != nil && err != errors.ErrKeyIncorrect { | ||
168 | return nil, err | ||
169 | } | ||
170 | if decrypted != nil { | ||
171 | md.DecryptedWith = pk.key | ||
172 | break FindKey | ||
173 | } | ||
174 | } else { | ||
175 | fpr := string(pk.key.PublicKey.Fingerprint[:]) | ||
176 | if v := candidateFingerprints[fpr]; v { | ||
177 | continue | ||
178 | } | ||
179 | candidates = append(candidates, pk.key) | ||
180 | candidateFingerprints[fpr] = true | ||
181 | } | ||
182 | } | ||
183 | |||
184 | if len(candidates) == 0 && len(symKeys) == 0 { | ||
185 | return nil, errors.ErrKeyIncorrect | ||
186 | } | ||
187 | |||
188 | if prompt == nil { | ||
189 | return nil, errors.ErrKeyIncorrect | ||
190 | } | ||
191 | |||
192 | passphrase, err := prompt(candidates, len(symKeys) != 0) | ||
193 | if err != nil { | ||
194 | return nil, err | ||
195 | } | ||
196 | |||
197 | // Try the symmetric passphrase first | ||
198 | if len(symKeys) != 0 && passphrase != nil { | ||
199 | for _, s := range symKeys { | ||
200 | key, cipherFunc, err := s.Decrypt(passphrase) | ||
201 | if err == nil { | ||
202 | decrypted, err = se.Decrypt(cipherFunc, key) | ||
203 | if err != nil && err != errors.ErrKeyIncorrect { | ||
204 | return nil, err | ||
205 | } | ||
206 | if decrypted != nil { | ||
207 | break FindKey | ||
208 | } | ||
209 | } | ||
210 | |||
211 | } | ||
212 | } | ||
213 | } | ||
214 | |||
215 | md.decrypted = decrypted | ||
216 | if err := packets.Push(decrypted); err != nil { | ||
217 | return nil, err | ||
218 | } | ||
219 | return readSignedMessage(packets, md, keyring) | ||
220 | } | ||
221 | |||
222 | // readSignedMessage reads a possibly signed message. If mdin is non-nil then | ||
223 | // that structure is updated and returned. Otherwise a fresh MessageDetails is | ||
224 | // used. | ||
225 | func readSignedMessage(packets *packet.Reader, mdin *MessageDetails, keyring KeyRing) (md *MessageDetails, err error) { | ||
226 | if mdin == nil { | ||
227 | mdin = new(MessageDetails) | ||
228 | } | ||
229 | md = mdin | ||
230 | |||
231 | var p packet.Packet | ||
232 | var h hash.Hash | ||
233 | var wrappedHash hash.Hash | ||
234 | FindLiteralData: | ||
235 | for { | ||
236 | p, err = packets.Next() | ||
237 | if err != nil { | ||
238 | return nil, err | ||
239 | } | ||
240 | switch p := p.(type) { | ||
241 | case *packet.Compressed: | ||
242 | if err := packets.Push(p.Body); err != nil { | ||
243 | return nil, err | ||
244 | } | ||
245 | case *packet.OnePassSignature: | ||
246 | if !p.IsLast { | ||
247 | return nil, errors.UnsupportedError("nested signatures") | ||
248 | } | ||
249 | |||
250 | h, wrappedHash, err = hashForSignature(p.Hash, p.SigType) | ||
251 | if err != nil { | ||
252 | md = nil | ||
253 | return | ||
254 | } | ||
255 | |||
256 | md.IsSigned = true | ||
257 | md.SignedByKeyId = p.KeyId | ||
258 | keys := keyring.KeysByIdUsage(p.KeyId, packet.KeyFlagSign) | ||
259 | if len(keys) > 0 { | ||
260 | md.SignedBy = &keys[0] | ||
261 | } | ||
262 | case *packet.LiteralData: | ||
263 | md.LiteralData = p | ||
264 | break FindLiteralData | ||
265 | } | ||
266 | } | ||
267 | |||
268 | if md.SignedBy != nil { | ||
269 | md.UnverifiedBody = &signatureCheckReader{packets, h, wrappedHash, md} | ||
270 | } else if md.decrypted != nil { | ||
271 | md.UnverifiedBody = checkReader{md} | ||
272 | } else { | ||
273 | md.UnverifiedBody = md.LiteralData.Body | ||
274 | } | ||
275 | |||
276 | return md, nil | ||
277 | } | ||
278 | |||
279 | // hashForSignature returns a pair of hashes that can be used to verify a | ||
280 | // signature. The signature may specify that the contents of the signed message | ||
281 | // should be preprocessed (i.e. to normalize line endings). Thus this function | ||
282 | // returns two hashes. The second should be used to hash the message itself and | ||
283 | // performs any needed preprocessing. | ||
284 | func hashForSignature(hashId crypto.Hash, sigType packet.SignatureType) (hash.Hash, hash.Hash, error) { | ||
285 | if !hashId.Available() { | ||
286 | return nil, nil, errors.UnsupportedError("hash not available: " + strconv.Itoa(int(hashId))) | ||
287 | } | ||
288 | h := hashId.New() | ||
289 | |||
290 | switch sigType { | ||
291 | case packet.SigTypeBinary: | ||
292 | return h, h, nil | ||
293 | case packet.SigTypeText: | ||
294 | return h, NewCanonicalTextHash(h), nil | ||
295 | } | ||
296 | |||
297 | return nil, nil, errors.UnsupportedError("unsupported signature type: " + strconv.Itoa(int(sigType))) | ||
298 | } | ||
299 | |||
300 | // checkReader wraps an io.Reader from a LiteralData packet. When it sees EOF | ||
301 | // it closes the ReadCloser from any SymmetricallyEncrypted packet to trigger | ||
302 | // MDC checks. | ||
303 | type checkReader struct { | ||
304 | md *MessageDetails | ||
305 | } | ||
306 | |||
307 | func (cr checkReader) Read(buf []byte) (n int, err error) { | ||
308 | n, err = cr.md.LiteralData.Body.Read(buf) | ||
309 | if err == io.EOF { | ||
310 | mdcErr := cr.md.decrypted.Close() | ||
311 | if mdcErr != nil { | ||
312 | err = mdcErr | ||
313 | } | ||
314 | } | ||
315 | return | ||
316 | } | ||
317 | |||
318 | // signatureCheckReader wraps an io.Reader from a LiteralData packet and hashes | ||
319 | // the data as it is read. When it sees an EOF from the underlying io.Reader | ||
320 | // it parses and checks a trailing Signature packet and triggers any MDC checks. | ||
321 | type signatureCheckReader struct { | ||
322 | packets *packet.Reader | ||
323 | h, wrappedHash hash.Hash | ||
324 | md *MessageDetails | ||
325 | } | ||
326 | |||
327 | func (scr *signatureCheckReader) Read(buf []byte) (n int, err error) { | ||
328 | n, err = scr.md.LiteralData.Body.Read(buf) | ||
329 | scr.wrappedHash.Write(buf[:n]) | ||
330 | if err == io.EOF { | ||
331 | var p packet.Packet | ||
332 | p, scr.md.SignatureError = scr.packets.Next() | ||
333 | if scr.md.SignatureError != nil { | ||
334 | return | ||
335 | } | ||
336 | |||
337 | var ok bool | ||
338 | if scr.md.Signature, ok = p.(*packet.Signature); ok { | ||
339 | scr.md.SignatureError = scr.md.SignedBy.PublicKey.VerifySignature(scr.h, scr.md.Signature) | ||
340 | } else if scr.md.SignatureV3, ok = p.(*packet.SignatureV3); ok { | ||
341 | scr.md.SignatureError = scr.md.SignedBy.PublicKey.VerifySignatureV3(scr.h, scr.md.SignatureV3) | ||
342 | } else { | ||
343 | scr.md.SignatureError = errors.StructuralError("LiteralData not followed by Signature") | ||
344 | return | ||
345 | } | ||
346 | |||
347 | // The SymmetricallyEncrypted packet, if any, might have an | ||
348 | // unsigned hash of its own. In order to check this we need to | ||
349 | // close that Reader. | ||
350 | if scr.md.decrypted != nil { | ||
351 | mdcErr := scr.md.decrypted.Close() | ||
352 | if mdcErr != nil { | ||
353 | err = mdcErr | ||
354 | } | ||
355 | } | ||
356 | } | ||
357 | return | ||
358 | } | ||
359 | |||
360 | // CheckDetachedSignature takes a signed file and a detached signature and | ||
361 | // returns the signer if the signature is valid. If the signer isn't known, | ||
362 | // ErrUnknownIssuer is returned. | ||
363 | func CheckDetachedSignature(keyring KeyRing, signed, signature io.Reader) (signer *Entity, err error) { | ||
364 | var issuerKeyId uint64 | ||
365 | var hashFunc crypto.Hash | ||
366 | var sigType packet.SignatureType | ||
367 | var keys []Key | ||
368 | var p packet.Packet | ||
369 | |||
370 | packets := packet.NewReader(signature) | ||
371 | for { | ||
372 | p, err = packets.Next() | ||
373 | if err == io.EOF { | ||
374 | return nil, errors.ErrUnknownIssuer | ||
375 | } | ||
376 | if err != nil { | ||
377 | return nil, err | ||
378 | } | ||
379 | |||
380 | switch sig := p.(type) { | ||
381 | case *packet.Signature: | ||
382 | if sig.IssuerKeyId == nil { | ||
383 | return nil, errors.StructuralError("signature doesn't have an issuer") | ||
384 | } | ||
385 | issuerKeyId = *sig.IssuerKeyId | ||
386 | hashFunc = sig.Hash | ||
387 | sigType = sig.SigType | ||
388 | case *packet.SignatureV3: | ||
389 | issuerKeyId = sig.IssuerKeyId | ||
390 | hashFunc = sig.Hash | ||
391 | sigType = sig.SigType | ||
392 | default: | ||
393 | return nil, errors.StructuralError("non signature packet found") | ||
394 | } | ||
395 | |||
396 | keys = keyring.KeysByIdUsage(issuerKeyId, packet.KeyFlagSign) | ||
397 | if len(keys) > 0 { | ||
398 | break | ||
399 | } | ||
400 | } | ||
401 | |||
402 | if len(keys) == 0 { | ||
403 | panic("unreachable") | ||
404 | } | ||
405 | |||
406 | h, wrappedHash, err := hashForSignature(hashFunc, sigType) | ||
407 | if err != nil { | ||
408 | return nil, err | ||
409 | } | ||
410 | |||
411 | if _, err := io.Copy(wrappedHash, signed); err != nil && err != io.EOF { | ||
412 | return nil, err | ||
413 | } | ||
414 | |||
415 | for _, key := range keys { | ||
416 | switch sig := p.(type) { | ||
417 | case *packet.Signature: | ||
418 | err = key.PublicKey.VerifySignature(h, sig) | ||
419 | case *packet.SignatureV3: | ||
420 | err = key.PublicKey.VerifySignatureV3(h, sig) | ||
421 | default: | ||
422 | panic("unreachable") | ||
423 | } | ||
424 | |||
425 | if err == nil { | ||
426 | return key.Entity, nil | ||
427 | } | ||
428 | } | ||
429 | |||
430 | return nil, err | ||
431 | } | ||
432 | |||
433 | // CheckArmoredDetachedSignature performs the same actions as | ||
434 | // CheckDetachedSignature but expects the signature to be armored. | ||
435 | func CheckArmoredDetachedSignature(keyring KeyRing, signed, signature io.Reader) (signer *Entity, err error) { | ||
436 | body, err := readArmored(signature, SignatureType) | ||
437 | if err != nil { | ||
438 | return | ||
439 | } | ||
440 | |||
441 | return CheckDetachedSignature(keyring, signed, body) | ||
442 | } | ||
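A hedged sketch of detached-signature verification with CheckArmoredDetachedSignature above; the keyring is assumed to come from the key-parsing code in keys.go (outside this diff, e.g. openpgp.ReadArmoredKeyRing), and the paths are placeholders.

package main

import (
	"os"

	"golang.org/x/crypto/openpgp"
)

// verifyDetached checks an armored detached signature over a file.
func verifyDetached(keyring openpgp.KeyRing, dataPath, sigPath string) (*openpgp.Entity, error) {
	signed, err := os.Open(dataPath)
	if err != nil {
		return nil, err
	}
	defer signed.Close()

	sig, err := os.Open(sigPath)
	if err != nil {
		return nil, err
	}
	defer sig.Close()

	// Returns the signing Entity on success, or ErrUnknownIssuer if no key in
	// the keyring matches the signature's issuer.
	return openpgp.CheckArmoredDetachedSignature(keyring, signed, sig)
}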
diff --git a/vendor/golang.org/x/crypto/openpgp/s2k/s2k.go b/vendor/golang.org/x/crypto/openpgp/s2k/s2k.go new file mode 100644 index 0000000..4b9a44c --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/s2k/s2k.go | |||
@@ -0,0 +1,273 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | // Package s2k implements the various OpenPGP string-to-key transforms as | ||
6 | // specified in RFC 4880, section 3.7.1. | ||
7 | package s2k // import "golang.org/x/crypto/openpgp/s2k" | ||
8 | |||
9 | import ( | ||
10 | "crypto" | ||
11 | "hash" | ||
12 | "io" | ||
13 | "strconv" | ||
14 | |||
15 | "golang.org/x/crypto/openpgp/errors" | ||
16 | ) | ||
17 | |||
18 | // Config collects configuration parameters for s2k key-stretching | ||
19 | // transformations. A nil *Config is valid and results in all default | ||
20 | // values. Currently, Config is used only by the Serialize function in | ||
21 | // this package. | ||
22 | type Config struct { | ||
23 | // Hash is the default hash function to be used. If | ||
24 | // nil, SHA1 is used. | ||
25 | Hash crypto.Hash | ||
26 | // S2KCount is only used for symmetric encryption. It | ||
27 | // determines the strength of the passphrase stretching when | ||
28 | // that passphrase is hashed to produce a key. S2KCount | ||
29 | // should be between 1024 and 65011712, inclusive. If Config | ||
30 | // is nil or S2KCount is 0, the value 65536 is used. Not all | ||
31 | // values in the above range can be represented. S2KCount will | ||
32 | // be rounded up to the next representable value if it cannot | ||
33 | // be encoded exactly. When set, it is strongly encouraged to | ||
34 | // use a value that is at least 65536. See RFC 4880 Section | ||
35 | // 3.7.1.3. | ||
36 | S2KCount int | ||
37 | } | ||
38 | |||
39 | func (c *Config) hash() crypto.Hash { | ||
40 | if c == nil || uint(c.Hash) == 0 { | ||
41 | // SHA1 is the historical default in this package. | ||
42 | return crypto.SHA1 | ||
43 | } | ||
44 | |||
45 | return c.Hash | ||
46 | } | ||
47 | |||
48 | func (c *Config) encodedCount() uint8 { | ||
49 | if c == nil || c.S2KCount == 0 { | ||
50 | return 96 // The common case, corresponding to 65536. | ||
51 | } | ||
52 | |||
53 | i := c.S2KCount | ||
54 | switch { | ||
55 | // Behave like GPG. Should we make 65536 the lowest value used? | ||
56 | case i < 1024: | ||
57 | i = 1024 | ||
58 | case i > 65011712: | ||
59 | i = 65011712 | ||
60 | } | ||
61 | |||
62 | return encodeCount(i) | ||
63 | } | ||
64 | |||
65 | // encodeCount converts an iterative "count" in the range 1024 to | ||
66 | // 65011712, inclusive, to an encoded count. The return value is the | ||
67 | // octet that is actually stored in the GPG file. encodeCount panics | ||
68 | // if i is not in the above range (encodedCount above takes care to | ||
69 | // pass i in the correct range). See RFC 4880, section 3.7.1.3. | ||
70 | func encodeCount(i int) uint8 { | ||
71 | if i < 1024 || i > 65011712 { | ||
72 | panic("count arg i outside the required range") | ||
73 | } | ||
74 | |||
75 | for encoded := 0; encoded < 256; encoded++ { | ||
76 | count := decodeCount(uint8(encoded)) | ||
77 | if count >= i { | ||
78 | return uint8(encoded) | ||
79 | } | ||
80 | } | ||
81 | |||
82 | return 255 | ||
83 | } | ||
84 | |||
85 | // decodeCount returns the s2k mode 3 iterative "count" corresponding to | ||
86 | // the encoded octet c. | ||
87 | func decodeCount(c uint8) int { | ||
88 | return (16 + int(c&15)) << (uint32(c>>4) + 6) | ||
89 | } | ||
90 | |||
91 | // Simple writes to out the result of computing the Simple S2K function (RFC | ||
92 | // 4880, section 3.7.1.1) using the given hash and input passphrase. | ||
93 | func Simple(out []byte, h hash.Hash, in []byte) { | ||
94 | Salted(out, h, in, nil) | ||
95 | } | ||
96 | |||
97 | var zero [1]byte | ||
98 | |||
99 | // Salted writes to out the result of computing the Salted S2K function (RFC | ||
100 | // 4880, section 3.7.1.2) using the given hash, input passphrase and salt. | ||
101 | func Salted(out []byte, h hash.Hash, in []byte, salt []byte) { | ||
102 | done := 0 | ||
103 | var digest []byte | ||
104 | |||
105 | for i := 0; done < len(out); i++ { | ||
106 | h.Reset() | ||
107 | for j := 0; j < i; j++ { | ||
108 | h.Write(zero[:]) | ||
109 | } | ||
110 | h.Write(salt) | ||
111 | h.Write(in) | ||
112 | digest = h.Sum(digest[:0]) | ||
113 | n := copy(out[done:], digest) | ||
114 | done += n | ||
115 | } | ||
116 | } | ||
117 | |||
118 | // Iterated writes to out the result of computing the Iterated and Salted S2K | ||
119 | // function (RFC 4880, section 3.7.1.3) using the given hash, input passphrase, | ||
120 | // salt and iteration count. | ||
121 | func Iterated(out []byte, h hash.Hash, in []byte, salt []byte, count int) { | ||
122 | combined := make([]byte, len(in)+len(salt)) | ||
123 | copy(combined, salt) | ||
124 | copy(combined[len(salt):], in) | ||
125 | |||
126 | if count < len(combined) { | ||
127 | count = len(combined) | ||
128 | } | ||
129 | |||
130 | done := 0 | ||
131 | var digest []byte | ||
132 | for i := 0; done < len(out); i++ { | ||
133 | h.Reset() | ||
134 | for j := 0; j < i; j++ { | ||
135 | h.Write(zero[:]) | ||
136 | } | ||
137 | written := 0 | ||
138 | for written < count { | ||
139 | if written+len(combined) > count { | ||
140 | todo := count - written | ||
141 | h.Write(combined[:todo]) | ||
142 | written = count | ||
143 | } else { | ||
144 | h.Write(combined) | ||
145 | written += len(combined) | ||
146 | } | ||
147 | } | ||
148 | digest = h.Sum(digest[:0]) | ||
149 | n := copy(out[done:], digest) | ||
150 | done += n | ||
151 | } | ||
152 | } | ||
153 | |||
154 | // Parse reads a binary specification for a string-to-key transformation from r | ||
155 | // and returns a function which performs that transform. | ||
156 | func Parse(r io.Reader) (f func(out, in []byte), err error) { | ||
157 | var buf [9]byte | ||
158 | |||
159 | _, err = io.ReadFull(r, buf[:2]) | ||
160 | if err != nil { | ||
161 | return | ||
162 | } | ||
163 | |||
164 | hash, ok := HashIdToHash(buf[1]) | ||
165 | if !ok { | ||
166 | return nil, errors.UnsupportedError("hash for S2K function: " + strconv.Itoa(int(buf[1]))) | ||
167 | } | ||
168 | if !hash.Available() { | ||
169 | return nil, errors.UnsupportedError("hash not available: " + strconv.Itoa(int(hash))) | ||
170 | } | ||
171 | h := hash.New() | ||
172 | |||
173 | switch buf[0] { | ||
174 | case 0: | ||
175 | f := func(out, in []byte) { | ||
176 | Simple(out, h, in) | ||
177 | } | ||
178 | return f, nil | ||
179 | case 1: | ||
180 | _, err = io.ReadFull(r, buf[:8]) | ||
181 | if err != nil { | ||
182 | return | ||
183 | } | ||
184 | f := func(out, in []byte) { | ||
185 | Salted(out, h, in, buf[:8]) | ||
186 | } | ||
187 | return f, nil | ||
188 | case 3: | ||
189 | _, err = io.ReadFull(r, buf[:9]) | ||
190 | if err != nil { | ||
191 | return | ||
192 | } | ||
193 | count := decodeCount(buf[8]) | ||
194 | f := func(out, in []byte) { | ||
195 | Iterated(out, h, in, buf[:8], count) | ||
196 | } | ||
197 | return f, nil | ||
198 | } | ||
199 | |||
200 | return nil, errors.UnsupportedError("S2K function") | ||
201 | } | ||
202 | |||
203 | // Serialize salts and stretches the given passphrase and writes the | ||
204 | // resulting key into key. It also serializes an S2K descriptor to | ||
205 | // w. The key stretching can be configured with c, which may be | ||
206 | // nil. In that case, sensible defaults will be used. | ||
207 | func Serialize(w io.Writer, key []byte, rand io.Reader, passphrase []byte, c *Config) error { | ||
208 | var buf [11]byte | ||
209 | buf[0] = 3 /* iterated and salted */ | ||
210 | buf[1], _ = HashToHashId(c.hash()) | ||
211 | salt := buf[2:10] | ||
212 | if _, err := io.ReadFull(rand, salt); err != nil { | ||
213 | return err | ||
214 | } | ||
215 | encodedCount := c.encodedCount() | ||
216 | count := decodeCount(encodedCount) | ||
217 | buf[10] = encodedCount | ||
218 | if _, err := w.Write(buf[:]); err != nil { | ||
219 | return err | ||
220 | } | ||
221 | |||
222 | Iterated(key, c.hash().New(), passphrase, salt, count) | ||
223 | return nil | ||
224 | } | ||
225 | |||
226 | // hashToHashIdMapping contains pairs relating OpenPGP's hash identifier with | ||
227 | // Go's crypto.Hash type. See RFC 4880, section 9.4. | ||
228 | var hashToHashIdMapping = []struct { | ||
229 | id byte | ||
230 | hash crypto.Hash | ||
231 | name string | ||
232 | }{ | ||
233 | {1, crypto.MD5, "MD5"}, | ||
234 | {2, crypto.SHA1, "SHA1"}, | ||
235 | {3, crypto.RIPEMD160, "RIPEMD160"}, | ||
236 | {8, crypto.SHA256, "SHA256"}, | ||
237 | {9, crypto.SHA384, "SHA384"}, | ||
238 | {10, crypto.SHA512, "SHA512"}, | ||
239 | {11, crypto.SHA224, "SHA224"}, | ||
240 | } | ||
241 | |||
242 | // HashIdToHash returns a crypto.Hash which corresponds to the given OpenPGP | ||
243 | // hash id. | ||
244 | func HashIdToHash(id byte) (h crypto.Hash, ok bool) { | ||
245 | for _, m := range hashToHashIdMapping { | ||
246 | if m.id == id { | ||
247 | return m.hash, true | ||
248 | } | ||
249 | } | ||
250 | return 0, false | ||
251 | } | ||
252 | |||
253 | // HashIdToString returns the name of the hash function corresponding to the | ||
254 | // given OpenPGP hash id. | ||
255 | func HashIdToString(id byte) (name string, ok bool) { | ||
256 | for _, m := range hashToHashIdMapping { | ||
257 | if m.id == id { | ||
258 | return m.name, true | ||
259 | } | ||
260 | } | ||
261 | |||
262 | return "", false | ||
263 | } | ||
264 | |||
265 | // HashToHashId returns an OpenPGP hash id which corresponds to the given Hash. | ||
266 | func HashToHashId(h crypto.Hash) (id byte, ok bool) { | ||
267 | for _, m := range hashToHashIdMapping { | ||
268 | if m.hash == h { | ||
269 | return m.id, true | ||
270 | } | ||
271 | } | ||
272 | return 0, false | ||
273 | } | ||
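To make the count encoding above concrete, a worked sketch that copies the decodeCount formula so the arithmetic is visible: the default encoded octet 96 (0x60) decodes to (16 + 0) << (6 + 6) = 65536, and 255 decodes to the maximum 65011712.

package main

import "fmt"

// decodeCountSketch mirrors decodeCount above: (16 + (c & 15)) << ((c >> 4) + 6).
func decodeCountSketch(c uint8) int {
	return (16 + int(c&15)) << (uint32(c>>4) + 6)
}

func main() {
	fmt.Println(decodeCountSketch(96))  // 65536: (16+0) << (6+6), the package default
	fmt.Println(decodeCountSketch(255)) // 65011712: (16+15) << (15+6), the maximum
}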
diff --git a/vendor/golang.org/x/crypto/openpgp/write.go b/vendor/golang.org/x/crypto/openpgp/write.go new file mode 100644 index 0000000..65a304c --- /dev/null +++ b/vendor/golang.org/x/crypto/openpgp/write.go | |||
@@ -0,0 +1,378 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package openpgp | ||
6 | |||
7 | import ( | ||
8 | "crypto" | ||
9 | "hash" | ||
10 | "io" | ||
11 | "strconv" | ||
12 | "time" | ||
13 | |||
14 | "golang.org/x/crypto/openpgp/armor" | ||
15 | "golang.org/x/crypto/openpgp/errors" | ||
16 | "golang.org/x/crypto/openpgp/packet" | ||
17 | "golang.org/x/crypto/openpgp/s2k" | ||
18 | ) | ||
19 | |||
20 | // DetachSign signs message with the private key from signer (which must | ||
21 | // already have been decrypted) and writes the signature to w. | ||
22 | // If config is nil, sensible defaults will be used. | ||
23 | func DetachSign(w io.Writer, signer *Entity, message io.Reader, config *packet.Config) error { | ||
24 | return detachSign(w, signer, message, packet.SigTypeBinary, config) | ||
25 | } | ||
26 | |||
27 | // ArmoredDetachSign signs message with the private key from signer (which | ||
28 | // must already have been decrypted) and writes an armored signature to w. | ||
29 | // If config is nil, sensible defaults will be used. | ||
30 | func ArmoredDetachSign(w io.Writer, signer *Entity, message io.Reader, config *packet.Config) (err error) { | ||
31 | return armoredDetachSign(w, signer, message, packet.SigTypeBinary, config) | ||
32 | } | ||
33 | |||
34 | // DetachSignText signs message (after canonicalising the line endings) with | ||
35 | // the private key from signer (which must already have been decrypted) and | ||
36 | // writes the signature to w. | ||
37 | // If config is nil, sensible defaults will be used. | ||
38 | func DetachSignText(w io.Writer, signer *Entity, message io.Reader, config *packet.Config) error { | ||
39 | return detachSign(w, signer, message, packet.SigTypeText, config) | ||
40 | } | ||
41 | |||
42 | // ArmoredDetachSignText signs message (after canonicalising the line endings) | ||
43 | // with the private key from signer (which must already have been decrypted) | ||
44 | // and writes an armored signature to w. | ||
45 | // If config is nil, sensible defaults will be used. | ||
46 | func ArmoredDetachSignText(w io.Writer, signer *Entity, message io.Reader, config *packet.Config) error { | ||
47 | return armoredDetachSign(w, signer, message, packet.SigTypeText, config) | ||
48 | } | ||
49 | |||
50 | func armoredDetachSign(w io.Writer, signer *Entity, message io.Reader, sigType packet.SignatureType, config *packet.Config) (err error) { | ||
51 | out, err := armor.Encode(w, SignatureType, nil) | ||
52 | if err != nil { | ||
53 | return | ||
54 | } | ||
55 | err = detachSign(out, signer, message, sigType, config) | ||
56 | if err != nil { | ||
57 | return | ||
58 | } | ||
59 | return out.Close() | ||
60 | } | ||
61 | |||
62 | func detachSign(w io.Writer, signer *Entity, message io.Reader, sigType packet.SignatureType, config *packet.Config) (err error) { | ||
63 | if signer.PrivateKey == nil { | ||
64 | return errors.InvalidArgumentError("signing key doesn't have a private key") | ||
65 | } | ||
66 | if signer.PrivateKey.Encrypted { | ||
67 | return errors.InvalidArgumentError("signing key is encrypted") | ||
68 | } | ||
69 | |||
70 | sig := new(packet.Signature) | ||
71 | sig.SigType = sigType | ||
72 | sig.PubKeyAlgo = signer.PrivateKey.PubKeyAlgo | ||
73 | sig.Hash = config.Hash() | ||
74 | sig.CreationTime = config.Now() | ||
75 | sig.IssuerKeyId = &signer.PrivateKey.KeyId | ||
76 | |||
77 | h, wrappedHash, err := hashForSignature(sig.Hash, sig.SigType) | ||
78 | if err != nil { | ||
79 | return | ||
80 | } | ||
81 | io.Copy(wrappedHash, message) | ||
82 | |||
83 | err = sig.Sign(h, signer.PrivateKey, config) | ||
84 | if err != nil { | ||
85 | return | ||
86 | } | ||
87 | |||
88 | return sig.Serialize(w) | ||
89 | } | ||
90 | |||
91 | // FileHints contains metadata about encrypted files. This metadata is, itself, | ||
92 | // encrypted. | ||
93 | type FileHints struct { | ||
94 | // IsBinary can be set to hint that the contents are binary data. | ||
95 | IsBinary bool | ||
96 | // FileName hints at the name of the file that should be written. It's | ||
97 | // truncated to 255 bytes if longer. It may be empty to suggest that the | ||
98 | // file should not be written to disk. It may be equal to "_CONSOLE" to | ||
99 | // suggest the data should not be written to disk. | ||
100 | FileName string | ||
101 | // ModTime contains the modification time of the file, or the zero time if not applicable. | ||
102 | ModTime time.Time | ||
103 | } | ||
104 | |||
105 | // SymmetricallyEncrypt acts like gpg -c: it encrypts a file with a passphrase. | ||
106 | // The resulting WriteCloser must be closed after the contents of the file have | ||
107 | // been written. | ||
108 | // If config is nil, sensible defaults will be used. | ||
109 | func SymmetricallyEncrypt(ciphertext io.Writer, passphrase []byte, hints *FileHints, config *packet.Config) (plaintext io.WriteCloser, err error) { | ||
110 | if hints == nil { | ||
111 | hints = &FileHints{} | ||
112 | } | ||
113 | |||
114 | key, err := packet.SerializeSymmetricKeyEncrypted(ciphertext, passphrase, config) | ||
115 | if err != nil { | ||
116 | return | ||
117 | } | ||
118 | w, err := packet.SerializeSymmetricallyEncrypted(ciphertext, config.Cipher(), key, config) | ||
119 | if err != nil { | ||
120 | return | ||
121 | } | ||
122 | |||
123 | literaldata := w | ||
124 | if algo := config.Compression(); algo != packet.CompressionNone { | ||
125 | var compConfig *packet.CompressionConfig | ||
126 | if config != nil { | ||
127 | compConfig = config.CompressionConfig | ||
128 | } | ||
129 | literaldata, err = packet.SerializeCompressed(w, algo, compConfig) | ||
130 | if err != nil { | ||
131 | return | ||
132 | } | ||
133 | } | ||
134 | |||
135 | var epochSeconds uint32 | ||
136 | if !hints.ModTime.IsZero() { | ||
137 | epochSeconds = uint32(hints.ModTime.Unix()) | ||
138 | } | ||
139 | return packet.SerializeLiteral(literaldata, hints.IsBinary, hints.FileName, epochSeconds) | ||
140 | } | ||
141 | |||
142 | // intersectPreferences mutates and returns a prefix of a that contains only | ||
143 | // the values in the intersection of a and b. The order of a is preserved. | ||
144 | func intersectPreferences(a []uint8, b []uint8) (intersection []uint8) { | ||
145 | var j int | ||
146 | for _, v := range a { | ||
147 | for _, v2 := range b { | ||
148 | if v == v2 { | ||
149 | a[j] = v | ||
150 | j++ | ||
151 | break | ||
152 | } | ||
153 | } | ||
154 | } | ||
155 | |||
156 | return a[:j] | ||
157 | } | ||
158 | |||
159 | func hashToHashId(h crypto.Hash) uint8 { | ||
160 | v, ok := s2k.HashToHashId(h) | ||
161 | if !ok { | ||
162 | panic("tried to convert unknown hash") | ||
163 | } | ||
164 | return v | ||
165 | } | ||
166 | |||
167 | // Encrypt encrypts a message to a number of recipients and, optionally, signs | ||
168 | // it. hints contains optional information, that is also encrypted, that aids | ||
169 | // the recipients in processing the message. The resulting WriteCloser must | ||
170 | // be closed after the contents of the file have been written. | ||
171 | // If config is nil, sensible defaults will be used. | ||
172 | func Encrypt(ciphertext io.Writer, to []*Entity, signed *Entity, hints *FileHints, config *packet.Config) (plaintext io.WriteCloser, err error) { | ||
173 | var signer *packet.PrivateKey | ||
174 | if signed != nil { | ||
175 | signKey, ok := signed.signingKey(config.Now()) | ||
176 | if !ok { | ||
177 | return nil, errors.InvalidArgumentError("no valid signing keys") | ||
178 | } | ||
179 | signer = signKey.PrivateKey | ||
180 | if signer == nil { | ||
181 | return nil, errors.InvalidArgumentError("no private key in signing key") | ||
182 | } | ||
183 | if signer.Encrypted { | ||
184 | return nil, errors.InvalidArgumentError("signing key must be decrypted") | ||
185 | } | ||
186 | } | ||
187 | |||
188 | // These are the possible ciphers that we'll use for the message. | ||
189 | candidateCiphers := []uint8{ | ||
190 | uint8(packet.CipherAES128), | ||
191 | uint8(packet.CipherAES256), | ||
192 | uint8(packet.CipherCAST5), | ||
193 | } | ||
194 | // These are the possible hash functions that we'll use for the signature. | ||
195 | candidateHashes := []uint8{ | ||
196 | hashToHashId(crypto.SHA256), | ||
197 | hashToHashId(crypto.SHA512), | ||
198 | hashToHashId(crypto.SHA1), | ||
199 | hashToHashId(crypto.RIPEMD160), | ||
200 | } | ||
201 | // In the event that a recipient doesn't specify any supported ciphers | ||
202 | // or hash functions, these are the ones that we assume that every | ||
203 | // implementation supports. | ||
204 | defaultCiphers := candidateCiphers[len(candidateCiphers)-1:] | ||
205 | defaultHashes := candidateHashes[len(candidateHashes)-1:] | ||
206 | |||
207 | encryptKeys := make([]Key, len(to)) | ||
208 | for i := range to { | ||
209 | var ok bool | ||
210 | encryptKeys[i], ok = to[i].encryptionKey(config.Now()) | ||
211 | if !ok { | ||
212 | return nil, errors.InvalidArgumentError("cannot encrypt a message to key id " + strconv.FormatUint(to[i].PrimaryKey.KeyId, 16) + " because it has no encryption keys") | ||
213 | } | ||
214 | |||
215 | sig := to[i].primaryIdentity().SelfSignature | ||
216 | |||
217 | preferredSymmetric := sig.PreferredSymmetric | ||
218 | if len(preferredSymmetric) == 0 { | ||
219 | preferredSymmetric = defaultCiphers | ||
220 | } | ||
221 | preferredHashes := sig.PreferredHash | ||
222 | if len(preferredHashes) == 0 { | ||
223 | preferredHashes = defaultHashes | ||
224 | } | ||
225 | candidateCiphers = intersectPreferences(candidateCiphers, preferredSymmetric) | ||
226 | candidateHashes = intersectPreferences(candidateHashes, preferredHashes) | ||
227 | } | ||
228 | |||
229 | if len(candidateCiphers) == 0 || len(candidateHashes) == 0 { | ||
230 | return nil, errors.InvalidArgumentError("cannot encrypt because recipient set shares no common algorithms") | ||
231 | } | ||
232 | |||
233 | cipher := packet.CipherFunction(candidateCiphers[0]) | ||
234 | // If the cipher specified by config is a candidate, we'll use that. | ||
235 | configuredCipher := config.Cipher() | ||
236 | for _, c := range candidateCiphers { | ||
237 | cipherFunc := packet.CipherFunction(c) | ||
238 | if cipherFunc == configuredCipher { | ||
239 | cipher = cipherFunc | ||
240 | break | ||
241 | } | ||
242 | } | ||
243 | |||
244 | var hash crypto.Hash | ||
245 | for _, hashId := range candidateHashes { | ||
246 | if h, ok := s2k.HashIdToHash(hashId); ok && h.Available() { | ||
247 | hash = h | ||
248 | break | ||
249 | } | ||
250 | } | ||
251 | |||
252 | // If the hash specified by config is a candidate, we'll use that. | ||
253 | if configuredHash := config.Hash(); configuredHash.Available() { | ||
254 | for _, hashId := range candidateHashes { | ||
255 | if h, ok := s2k.HashIdToHash(hashId); ok && h == configuredHash { | ||
256 | hash = h | ||
257 | break | ||
258 | } | ||
259 | } | ||
260 | } | ||
261 | |||
262 | if hash == 0 { | ||
263 | hashId := candidateHashes[0] | ||
264 | name, ok := s2k.HashIdToString(hashId) | ||
265 | if !ok { | ||
266 | name = "#" + strconv.Itoa(int(hashId)) | ||
267 | } | ||
268 | return nil, errors.InvalidArgumentError("cannot encrypt because no candidate hash functions are compiled in. (Wanted " + name + " in this case.)") | ||
269 | } | ||
270 | |||
271 | symKey := make([]byte, cipher.KeySize()) | ||
272 | if _, err := io.ReadFull(config.Random(), symKey); err != nil { | ||
273 | return nil, err | ||
274 | } | ||
275 | |||
276 | for _, key := range encryptKeys { | ||
277 | if err := packet.SerializeEncryptedKey(ciphertext, key.PublicKey, cipher, symKey, config); err != nil { | ||
278 | return nil, err | ||
279 | } | ||
280 | } | ||
281 | |||
282 | encryptedData, err := packet.SerializeSymmetricallyEncrypted(ciphertext, cipher, symKey, config) | ||
283 | if err != nil { | ||
284 | return | ||
285 | } | ||
286 | |||
287 | if signer != nil { | ||
288 | ops := &packet.OnePassSignature{ | ||
289 | SigType: packet.SigTypeBinary, | ||
290 | Hash: hash, | ||
291 | PubKeyAlgo: signer.PubKeyAlgo, | ||
292 | KeyId: signer.KeyId, | ||
293 | IsLast: true, | ||
294 | } | ||
295 | if err := ops.Serialize(encryptedData); err != nil { | ||
296 | return nil, err | ||
297 | } | ||
298 | } | ||
299 | |||
300 | if hints == nil { | ||
301 | hints = &FileHints{} | ||
302 | } | ||
303 | |||
304 | w := encryptedData | ||
305 | if signer != nil { | ||
306 | // If we need to write a signature packet after the literal | ||
307 | // data then we need to stop literalData from closing | ||
308 | // encryptedData. | ||
309 | w = noOpCloser{encryptedData} | ||
310 | |||
311 | } | ||
312 | var epochSeconds uint32 | ||
313 | if !hints.ModTime.IsZero() { | ||
314 | epochSeconds = uint32(hints.ModTime.Unix()) | ||
315 | } | ||
316 | literalData, err := packet.SerializeLiteral(w, hints.IsBinary, hints.FileName, epochSeconds) | ||
317 | if err != nil { | ||
318 | return nil, err | ||
319 | } | ||
320 | |||
321 | if signer != nil { | ||
322 | return signatureWriter{encryptedData, literalData, hash, hash.New(), signer, config}, nil | ||
323 | } | ||
324 | return literalData, nil | ||
325 | } | ||
326 | |||
327 | // signatureWriter hashes the contents of a message while passing it along to | ||
328 | // literalData. When closed, it closes literalData, writes a signature packet | ||
329 | // to encryptedData and then also closes encryptedData. | ||
330 | type signatureWriter struct { | ||
331 | encryptedData io.WriteCloser | ||
332 | literalData io.WriteCloser | ||
333 | hashType crypto.Hash | ||
334 | h hash.Hash | ||
335 | signer *packet.PrivateKey | ||
336 | config *packet.Config | ||
337 | } | ||
338 | |||
339 | func (s signatureWriter) Write(data []byte) (int, error) { | ||
340 | s.h.Write(data) | ||
341 | return s.literalData.Write(data) | ||
342 | } | ||
343 | |||
344 | func (s signatureWriter) Close() error { | ||
345 | sig := &packet.Signature{ | ||
346 | SigType: packet.SigTypeBinary, | ||
347 | PubKeyAlgo: s.signer.PubKeyAlgo, | ||
348 | Hash: s.hashType, | ||
349 | CreationTime: s.config.Now(), | ||
350 | IssuerKeyId: &s.signer.KeyId, | ||
351 | } | ||
352 | |||
353 | if err := sig.Sign(s.h, s.signer, s.config); err != nil { | ||
354 | return err | ||
355 | } | ||
356 | if err := s.literalData.Close(); err != nil { | ||
357 | return err | ||
358 | } | ||
359 | if err := sig.Serialize(s.encryptedData); err != nil { | ||
360 | return err | ||
361 | } | ||
362 | return s.encryptedData.Close() | ||
363 | } | ||
364 | |||
365 | // noOpCloser is like an ioutil.NopCloser, but for an io.Writer. | ||
366 | // TODO: we have two of these in OpenPGP packages alone. This probably needs | ||
367 | // to be promoted somewhere more common. | ||
368 | type noOpCloser struct { | ||
369 | w io.Writer | ||
370 | } | ||
371 | |||
372 | func (c noOpCloser) Write(data []byte) (n int, err error) { | ||
373 | return c.w.Write(data) | ||
374 | } | ||
375 | |||
376 | func (c noOpCloser) Close() error { | ||
377 | return nil | ||
378 | } | ||
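SymmetricallyEncrypt and Encrypt both hand back an io.WriteCloser: the caller streams the plaintext into it and must call Close so the trailing packets are flushed. A minimal sketch of the symmetric path, assuming this vendored package is imported as golang.org/x/crypto/openpgp:

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"golang.org/x/crypto/openpgp"
)

func main() {
	// Equivalent of `gpg -c`: protect a message with a passphrase only.
	var buf bytes.Buffer
	plaintext, err := openpgp.SymmetricallyEncrypt(&buf, []byte("correct horse"), nil, nil)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := plaintext.Write([]byte("hello, world\n")); err != nil {
		log.Fatal(err)
	}
	// Close flushes the literal-data packet into the ciphertext stream.
	if err := plaintext.Close(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("wrote %d bytes of ciphertext\n", buf.Len())
}
```

Encrypt follows the same write-then-Close pattern; when a signer is supplied, the trailing signature packet is only written during Close, which is why errors from Close must not be ignored.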
diff --git a/vendor/golang.org/x/net/LICENSE b/vendor/golang.org/x/net/LICENSE new file mode 100644 index 0000000..6a66aea --- /dev/null +++ b/vendor/golang.org/x/net/LICENSE | |||
@@ -0,0 +1,27 @@ | |||
1 | Copyright (c) 2009 The Go Authors. All rights reserved. | ||
2 | |||
3 | Redistribution and use in source and binary forms, with or without | ||
4 | modification, are permitted provided that the following conditions are | ||
5 | met: | ||
6 | |||
7 | * Redistributions of source code must retain the above copyright | ||
8 | notice, this list of conditions and the following disclaimer. | ||
9 | * Redistributions in binary form must reproduce the above | ||
10 | copyright notice, this list of conditions and the following disclaimer | ||
11 | in the documentation and/or other materials provided with the | ||
12 | distribution. | ||
13 | * Neither the name of Google Inc. nor the names of its | ||
14 | contributors may be used to endorse or promote products derived from | ||
15 | this software without specific prior written permission. | ||
16 | |||
17 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS | ||
18 | "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT | ||
19 | LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR | ||
20 | A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT | ||
21 | OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, | ||
22 | SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT | ||
23 | LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, | ||
24 | DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY | ||
25 | THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT | ||
26 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE | ||
27 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | ||
diff --git a/vendor/golang.org/x/net/PATENTS b/vendor/golang.org/x/net/PATENTS new file mode 100644 index 0000000..7330990 --- /dev/null +++ b/vendor/golang.org/x/net/PATENTS | |||
@@ -0,0 +1,22 @@ | |||
1 | Additional IP Rights Grant (Patents) | ||
2 | |||
3 | "This implementation" means the copyrightable works distributed by | ||
4 | Google as part of the Go project. | ||
5 | |||
6 | Google hereby grants to You a perpetual, worldwide, non-exclusive, | ||
7 | no-charge, royalty-free, irrevocable (except as stated in this section) | ||
8 | patent license to make, have made, use, offer to sell, sell, import, | ||
9 | transfer and otherwise run, modify and propagate the contents of this | ||
10 | implementation of Go, where such license applies only to those patent | ||
11 | claims, both currently owned or controlled by Google and acquired in | ||
12 | the future, licensable by Google that are necessarily infringed by this | ||
13 | implementation of Go. This grant does not include claims that would be | ||
14 | infringed only as a consequence of further modification of this | ||
15 | implementation. If you or your agent or exclusive licensee institute or | ||
16 | order or agree to the institution of patent litigation against any | ||
17 | entity (including a cross-claim or counterclaim in a lawsuit) alleging | ||
18 | that this implementation of Go or any code incorporated within this | ||
19 | implementation of Go constitutes direct or contributory patent | ||
20 | infringement, or inducement of patent infringement, then any patent | ||
21 | rights granted to you under this License for this implementation of Go | ||
22 | shall terminate as of the date such litigation is filed. | ||
diff --git a/vendor/golang.org/x/net/html/atom/atom.go b/vendor/golang.org/x/net/html/atom/atom.go new file mode 100644 index 0000000..cd0a8ac --- /dev/null +++ b/vendor/golang.org/x/net/html/atom/atom.go | |||
@@ -0,0 +1,78 @@ | |||
1 | // Copyright 2012 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | // Package atom provides integer codes (also known as atoms) for a fixed set of | ||
6 | // frequently occurring HTML strings: tag names and attribute keys such as "p" | ||
7 | // and "id". | ||
8 | // | ||
9 | // Sharing an atom's name between all elements with the same tag can result in | ||
10 | // fewer string allocations when tokenizing and parsing HTML. Integer | ||
11 | // comparisons are also generally faster than string comparisons. | ||
12 | // | ||
13 | // The value of an atom's particular code is not guaranteed to stay the same | ||
14 | // between versions of this package. Neither is any ordering guaranteed: | ||
15 | // whether atom.H1 < atom.H2 may also change. The codes are not guaranteed to | ||
16 | // be dense. The only guarantees are that e.g. looking up "div" will yield | ||
17 | // atom.Div, calling atom.Div.String will return "div", and atom.Div != 0. | ||
18 | package atom // import "golang.org/x/net/html/atom" | ||
19 | |||
20 | // Atom is an integer code for a string. The zero value maps to "". | ||
21 | type Atom uint32 | ||
22 | |||
23 | // String returns the atom's name. | ||
24 | func (a Atom) String() string { | ||
25 | start := uint32(a >> 8) | ||
26 | n := uint32(a & 0xff) | ||
27 | if start+n > uint32(len(atomText)) { | ||
28 | return "" | ||
29 | } | ||
30 | return atomText[start : start+n] | ||
31 | } | ||
32 | |||
33 | func (a Atom) string() string { | ||
34 | return atomText[a>>8 : a>>8+a&0xff] | ||
35 | } | ||
36 | |||
37 | // fnv computes the FNV hash with an arbitrary starting value h. | ||
38 | func fnv(h uint32, s []byte) uint32 { | ||
39 | for i := range s { | ||
40 | h ^= uint32(s[i]) | ||
41 | h *= 16777619 | ||
42 | } | ||
43 | return h | ||
44 | } | ||
45 | |||
46 | func match(s string, t []byte) bool { | ||
47 | for i, c := range t { | ||
48 | if s[i] != c { | ||
49 | return false | ||
50 | } | ||
51 | } | ||
52 | return true | ||
53 | } | ||
54 | |||
55 | // Lookup returns the atom whose name is s. It returns zero if there is no | ||
56 | // such atom. The lookup is case sensitive. | ||
57 | func Lookup(s []byte) Atom { | ||
58 | if len(s) == 0 || len(s) > maxAtomLen { | ||
59 | return 0 | ||
60 | } | ||
61 | h := fnv(hash0, s) | ||
62 | if a := table[h&uint32(len(table)-1)]; int(a&0xff) == len(s) && match(a.string(), s) { | ||
63 | return a | ||
64 | } | ||
65 | if a := table[(h>>16)&uint32(len(table)-1)]; int(a&0xff) == len(s) && match(a.string(), s) { | ||
66 | return a | ||
67 | } | ||
68 | return 0 | ||
69 | } | ||
70 | |||
71 | // String returns a string whose contents are equal to s. In that sense, it is | ||
72 | // equivalent to string(s) but may be more efficient. | ||
73 | func String(s []byte) string { | ||
74 | if a := Lookup(s); a != 0 { | ||
75 | return a.String() | ||
76 | } | ||
77 | return string(s) | ||
78 | } | ||
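A short illustration of the guarantees described in the package comment: Lookup is case-sensitive and returns the zero Atom for unknown names, Atom.String recovers the name, and String avoids an allocation when the bytes spell a known atom. A minimal sketch, assuming the golang.org/x/net/html/atom import path:

```go
package main

import (
	"fmt"

	"golang.org/x/net/html/atom"
)

func main() {
	// Lookup is case sensitive: "div" is a known atom, "DIV" is not.
	fmt.Println(atom.Lookup([]byte("div")) == atom.Div) // true
	fmt.Println(atom.Lookup([]byte("DIV")) == 0)        // true
	fmt.Println(atom.Div.String())                      // "div"
	// String avoids an allocation when the bytes name a known atom.
	fmt.Println(atom.String([]byte("span"))) // "span"
}
```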
diff --git a/vendor/golang.org/x/net/html/atom/table.go b/vendor/golang.org/x/net/html/atom/table.go new file mode 100644 index 0000000..2605ba3 --- /dev/null +++ b/vendor/golang.org/x/net/html/atom/table.go | |||
@@ -0,0 +1,713 @@ | |||
1 | // generated by go run gen.go; DO NOT EDIT | ||
2 | |||
3 | package atom | ||
4 | |||
5 | const ( | ||
6 | A Atom = 0x1 | ||
7 | Abbr Atom = 0x4 | ||
8 | Accept Atom = 0x2106 | ||
9 | AcceptCharset Atom = 0x210e | ||
10 | Accesskey Atom = 0x3309 | ||
11 | Action Atom = 0x1f606 | ||
12 | Address Atom = 0x4f307 | ||
13 | Align Atom = 0x1105 | ||
14 | Alt Atom = 0x4503 | ||
15 | Annotation Atom = 0x1670a | ||
16 | AnnotationXml Atom = 0x1670e | ||
17 | Applet Atom = 0x2b306 | ||
18 | Area Atom = 0x2fa04 | ||
19 | Article Atom = 0x38807 | ||
20 | Aside Atom = 0x8305 | ||
21 | Async Atom = 0x7b05 | ||
22 | Audio Atom = 0xa605 | ||
23 | Autocomplete Atom = 0x1fc0c | ||
24 | Autofocus Atom = 0xb309 | ||
25 | Autoplay Atom = 0xce08 | ||
26 | B Atom = 0x101 | ||
27 | Base Atom = 0xd604 | ||
28 | Basefont Atom = 0xd608 | ||
29 | Bdi Atom = 0x1a03 | ||
30 | Bdo Atom = 0xe703 | ||
31 | Bgsound Atom = 0x11807 | ||
32 | Big Atom = 0x12403 | ||
33 | Blink Atom = 0x12705 | ||
34 | Blockquote Atom = 0x12c0a | ||
35 | Body Atom = 0x2f04 | ||
36 | Br Atom = 0x202 | ||
37 | Button Atom = 0x13606 | ||
38 | Canvas Atom = 0x7f06 | ||
39 | Caption Atom = 0x1bb07 | ||
40 | Center Atom = 0x5b506 | ||
41 | Challenge Atom = 0x21f09 | ||
42 | Charset Atom = 0x2807 | ||
43 | Checked Atom = 0x32807 | ||
44 | Cite Atom = 0x3c804 | ||
45 | Class Atom = 0x4de05 | ||
46 | Code Atom = 0x14904 | ||
47 | Col Atom = 0x15003 | ||
48 | Colgroup Atom = 0x15008 | ||
49 | Color Atom = 0x15d05 | ||
50 | Cols Atom = 0x16204 | ||
51 | Colspan Atom = 0x16207 | ||
52 | Command Atom = 0x17507 | ||
53 | Content Atom = 0x42307 | ||
54 | Contenteditable Atom = 0x4230f | ||
55 | Contextmenu Atom = 0x3310b | ||
56 | Controls Atom = 0x18808 | ||
57 | Coords Atom = 0x19406 | ||
58 | Crossorigin Atom = 0x19f0b | ||
59 | Data Atom = 0x44a04 | ||
60 | Datalist Atom = 0x44a08 | ||
61 | Datetime Atom = 0x23c08 | ||
62 | Dd Atom = 0x26702 | ||
63 | Default Atom = 0x8607 | ||
64 | Defer Atom = 0x14b05 | ||
65 | Del Atom = 0x3ef03 | ||
66 | Desc Atom = 0x4db04 | ||
67 | Details Atom = 0x4807 | ||
68 | Dfn Atom = 0x6103 | ||
69 | Dialog Atom = 0x1b06 | ||
70 | Dir Atom = 0x6903 | ||
71 | Dirname Atom = 0x6907 | ||
72 | Disabled Atom = 0x10c08 | ||
73 | Div Atom = 0x11303 | ||
74 | Dl Atom = 0x11e02 | ||
75 | Download Atom = 0x40008 | ||
76 | Draggable Atom = 0x17b09 | ||
77 | Dropzone Atom = 0x39108 | ||
78 | Dt Atom = 0x50902 | ||
79 | Em Atom = 0x6502 | ||
80 | Embed Atom = 0x6505 | ||
81 | Enctype Atom = 0x21107 | ||
82 | Face Atom = 0x5b304 | ||
83 | Fieldset Atom = 0x1b008 | ||
84 | Figcaption Atom = 0x1b80a | ||
85 | Figure Atom = 0x1cc06 | ||
86 | Font Atom = 0xda04 | ||
87 | Footer Atom = 0x8d06 | ||
88 | For Atom = 0x1d803 | ||
89 | ForeignObject Atom = 0x1d80d | ||
90 | Foreignobject Atom = 0x1e50d | ||
91 | Form Atom = 0x1f204 | ||
92 | Formaction Atom = 0x1f20a | ||
93 | Formenctype Atom = 0x20d0b | ||
94 | Formmethod Atom = 0x2280a | ||
95 | Formnovalidate Atom = 0x2320e | ||
96 | Formtarget Atom = 0x2470a | ||
97 | Frame Atom = 0x9a05 | ||
98 | Frameset Atom = 0x9a08 | ||
99 | H1 Atom = 0x26e02 | ||
100 | H2 Atom = 0x29402 | ||
101 | H3 Atom = 0x2a702 | ||
102 | H4 Atom = 0x2e902 | ||
103 | H5 Atom = 0x2f302 | ||
104 | H6 Atom = 0x50b02 | ||
105 | Head Atom = 0x2d504 | ||
106 | Header Atom = 0x2d506 | ||
107 | Headers Atom = 0x2d507 | ||
108 | Height Atom = 0x25106 | ||
109 | Hgroup Atom = 0x25906 | ||
110 | Hidden Atom = 0x26506 | ||
111 | High Atom = 0x26b04 | ||
112 | Hr Atom = 0x27002 | ||
113 | Href Atom = 0x27004 | ||
114 | Hreflang Atom = 0x27008 | ||
115 | Html Atom = 0x25504 | ||
116 | HttpEquiv Atom = 0x2780a | ||
117 | I Atom = 0x601 | ||
118 | Icon Atom = 0x42204 | ||
119 | Id Atom = 0x8502 | ||
120 | Iframe Atom = 0x29606 | ||
121 | Image Atom = 0x29c05 | ||
122 | Img Atom = 0x2a103 | ||
123 | Input Atom = 0x3e805 | ||
124 | Inputmode Atom = 0x3e809 | ||
125 | Ins Atom = 0x1a803 | ||
126 | Isindex Atom = 0x2a907 | ||
127 | Ismap Atom = 0x2b005 | ||
128 | Itemid Atom = 0x33c06 | ||
129 | Itemprop Atom = 0x3c908 | ||
130 | Itemref Atom = 0x5ad07 | ||
131 | Itemscope Atom = 0x2b909 | ||
132 | Itemtype Atom = 0x2c308 | ||
133 | Kbd Atom = 0x1903 | ||
134 | Keygen Atom = 0x3906 | ||
135 | Keytype Atom = 0x53707 | ||
136 | Kind Atom = 0x10904 | ||
137 | Label Atom = 0xf005 | ||
138 | Lang Atom = 0x27404 | ||
139 | Legend Atom = 0x18206 | ||
140 | Li Atom = 0x1202 | ||
141 | Link Atom = 0x12804 | ||
142 | List Atom = 0x44e04 | ||
143 | Listing Atom = 0x44e07 | ||
144 | Loop Atom = 0xf404 | ||
145 | Low Atom = 0x11f03 | ||
146 | Malignmark Atom = 0x100a | ||
147 | Manifest Atom = 0x5f108 | ||
148 | Map Atom = 0x2b203 | ||
149 | Mark Atom = 0x1604 | ||
150 | Marquee Atom = 0x2cb07 | ||
151 | Math Atom = 0x2d204 | ||
152 | Max Atom = 0x2e103 | ||
153 | Maxlength Atom = 0x2e109 | ||
154 | Media Atom = 0x6e05 | ||
155 | Mediagroup Atom = 0x6e0a | ||
156 | Menu Atom = 0x33804 | ||
157 | Menuitem Atom = 0x33808 | ||
158 | Meta Atom = 0x45d04 | ||
159 | Meter Atom = 0x24205 | ||
160 | Method Atom = 0x22c06 | ||
161 | Mglyph Atom = 0x2a206 | ||
162 | Mi Atom = 0x2eb02 | ||
163 | Min Atom = 0x2eb03 | ||
164 | Minlength Atom = 0x2eb09 | ||
165 | Mn Atom = 0x23502 | ||
166 | Mo Atom = 0x3ed02 | ||
167 | Ms Atom = 0x2bc02 | ||
168 | Mtext Atom = 0x2f505 | ||
169 | Multiple Atom = 0x30308 | ||
170 | Muted Atom = 0x30b05 | ||
171 | Name Atom = 0x6c04 | ||
172 | Nav Atom = 0x3e03 | ||
173 | Nobr Atom = 0x5704 | ||
174 | Noembed Atom = 0x6307 | ||
175 | Noframes Atom = 0x9808 | ||
176 | Noscript Atom = 0x3d208 | ||
177 | Novalidate Atom = 0x2360a | ||
178 | Object Atom = 0x1ec06 | ||
179 | Ol Atom = 0xc902 | ||
180 | Onabort Atom = 0x13a07 | ||
181 | Onafterprint Atom = 0x1c00c | ||
182 | Onautocomplete Atom = 0x1fa0e | ||
183 | Onautocompleteerror Atom = 0x1fa13 | ||
184 | Onbeforeprint Atom = 0x6040d | ||
185 | Onbeforeunload Atom = 0x4e70e | ||
186 | Onblur Atom = 0xaa06 | ||
187 | Oncancel Atom = 0xe908 | ||
188 | Oncanplay Atom = 0x28509 | ||
189 | Oncanplaythrough Atom = 0x28510 | ||
190 | Onchange Atom = 0x3a708 | ||
191 | Onclick Atom = 0x31007 | ||
192 | Onclose Atom = 0x31707 | ||
193 | Oncontextmenu Atom = 0x32f0d | ||
194 | Oncuechange Atom = 0x3420b | ||
195 | Ondblclick Atom = 0x34d0a | ||
196 | Ondrag Atom = 0x35706 | ||
197 | Ondragend Atom = 0x35709 | ||
198 | Ondragenter Atom = 0x3600b | ||
199 | Ondragleave Atom = 0x36b0b | ||
200 | Ondragover Atom = 0x3760a | ||
201 | Ondragstart Atom = 0x3800b | ||
202 | Ondrop Atom = 0x38f06 | ||
203 | Ondurationchange Atom = 0x39f10 | ||
204 | Onemptied Atom = 0x39609 | ||
205 | Onended Atom = 0x3af07 | ||
206 | Onerror Atom = 0x3b607 | ||
207 | Onfocus Atom = 0x3bd07 | ||
208 | Onhashchange Atom = 0x3da0c | ||
209 | Oninput Atom = 0x3e607 | ||
210 | Oninvalid Atom = 0x3f209 | ||
211 | Onkeydown Atom = 0x3fb09 | ||
212 | Onkeypress Atom = 0x4080a | ||
213 | Onkeyup Atom = 0x41807 | ||
214 | Onlanguagechange Atom = 0x43210 | ||
215 | Onload Atom = 0x44206 | ||
216 | Onloadeddata Atom = 0x4420c | ||
217 | Onloadedmetadata Atom = 0x45510 | ||
218 | Onloadstart Atom = 0x46b0b | ||
219 | Onmessage Atom = 0x47609 | ||
220 | Onmousedown Atom = 0x47f0b | ||
221 | Onmousemove Atom = 0x48a0b | ||
222 | Onmouseout Atom = 0x4950a | ||
223 | Onmouseover Atom = 0x4a20b | ||
224 | Onmouseup Atom = 0x4ad09 | ||
225 | Onmousewheel Atom = 0x4b60c | ||
226 | Onoffline Atom = 0x4c209 | ||
227 | Ononline Atom = 0x4cb08 | ||
228 | Onpagehide Atom = 0x4d30a | ||
229 | Onpageshow Atom = 0x4fe0a | ||
230 | Onpause Atom = 0x50d07 | ||
231 | Onplay Atom = 0x51706 | ||
232 | Onplaying Atom = 0x51709 | ||
233 | Onpopstate Atom = 0x5200a | ||
234 | Onprogress Atom = 0x52a0a | ||
235 | Onratechange Atom = 0x53e0c | ||
236 | Onreset Atom = 0x54a07 | ||
237 | Onresize Atom = 0x55108 | ||
238 | Onscroll Atom = 0x55f08 | ||
239 | Onseeked Atom = 0x56708 | ||
240 | Onseeking Atom = 0x56f09 | ||
241 | Onselect Atom = 0x57808 | ||
242 | Onshow Atom = 0x58206 | ||
243 | Onsort Atom = 0x58b06 | ||
244 | Onstalled Atom = 0x59509 | ||
245 | Onstorage Atom = 0x59e09 | ||
246 | Onsubmit Atom = 0x5a708 | ||
247 | Onsuspend Atom = 0x5bb09 | ||
248 | Ontimeupdate Atom = 0xdb0c | ||
249 | Ontoggle Atom = 0x5c408 | ||
250 | Onunload Atom = 0x5cc08 | ||
251 | Onvolumechange Atom = 0x5d40e | ||
252 | Onwaiting Atom = 0x5e209 | ||
253 | Open Atom = 0x3cf04 | ||
254 | Optgroup Atom = 0xf608 | ||
255 | Optimum Atom = 0x5eb07 | ||
256 | Option Atom = 0x60006 | ||
257 | Output Atom = 0x49c06 | ||
258 | P Atom = 0xc01 | ||
259 | Param Atom = 0xc05 | ||
260 | Pattern Atom = 0x5107 | ||
261 | Ping Atom = 0x7704 | ||
262 | Placeholder Atom = 0xc30b | ||
263 | Plaintext Atom = 0xfd09 | ||
264 | Poster Atom = 0x15706 | ||
265 | Pre Atom = 0x25e03 | ||
266 | Preload Atom = 0x25e07 | ||
267 | Progress Atom = 0x52c08 | ||
268 | Prompt Atom = 0x5fa06 | ||
269 | Public Atom = 0x41e06 | ||
270 | Q Atom = 0x13101 | ||
271 | Radiogroup Atom = 0x30a | ||
272 | Readonly Atom = 0x2fb08 | ||
273 | Rel Atom = 0x25f03 | ||
274 | Required Atom = 0x1d008 | ||
275 | Reversed Atom = 0x5a08 | ||
276 | Rows Atom = 0x9204 | ||
277 | Rowspan Atom = 0x9207 | ||
278 | Rp Atom = 0x1c602 | ||
279 | Rt Atom = 0x13f02 | ||
280 | Ruby Atom = 0xaf04 | ||
281 | S Atom = 0x2c01 | ||
282 | Samp Atom = 0x4e04 | ||
283 | Sandbox Atom = 0xbb07 | ||
284 | Scope Atom = 0x2bd05 | ||
285 | Scoped Atom = 0x2bd06 | ||
286 | Script Atom = 0x3d406 | ||
287 | Seamless Atom = 0x31c08 | ||
288 | Section Atom = 0x4e207 | ||
289 | Select Atom = 0x57a06 | ||
290 | Selected Atom = 0x57a08 | ||
291 | Shape Atom = 0x4f905 | ||
292 | Size Atom = 0x55504 | ||
293 | Sizes Atom = 0x55505 | ||
294 | Small Atom = 0x18f05 | ||
295 | Sortable Atom = 0x58d08 | ||
296 | Sorted Atom = 0x19906 | ||
297 | Source Atom = 0x1aa06 | ||
298 | Spacer Atom = 0x2db06 | ||
299 | Span Atom = 0x9504 | ||
300 | Spellcheck Atom = 0x3230a | ||
301 | Src Atom = 0x3c303 | ||
302 | Srcdoc Atom = 0x3c306 | ||
303 | Srclang Atom = 0x41107 | ||
304 | Start Atom = 0x38605 | ||
305 | Step Atom = 0x5f704 | ||
306 | Strike Atom = 0x53306 | ||
307 | Strong Atom = 0x55906 | ||
308 | Style Atom = 0x61105 | ||
309 | Sub Atom = 0x5a903 | ||
310 | Summary Atom = 0x61607 | ||
311 | Sup Atom = 0x61d03 | ||
312 | Svg Atom = 0x62003 | ||
313 | System Atom = 0x62306 | ||
314 | Tabindex Atom = 0x46308 | ||
315 | Table Atom = 0x42d05 | ||
316 | Target Atom = 0x24b06 | ||
317 | Tbody Atom = 0x2e05 | ||
318 | Td Atom = 0x4702 | ||
319 | Template Atom = 0x62608 | ||
320 | Textarea Atom = 0x2f608 | ||
321 | Tfoot Atom = 0x8c05 | ||
322 | Th Atom = 0x22e02 | ||
323 | Thead Atom = 0x2d405 | ||
324 | Time Atom = 0xdd04 | ||
325 | Title Atom = 0xa105 | ||
326 | Tr Atom = 0x10502 | ||
327 | Track Atom = 0x10505 | ||
328 | Translate Atom = 0x14009 | ||
329 | Tt Atom = 0x5302 | ||
330 | Type Atom = 0x21404 | ||
331 | Typemustmatch Atom = 0x2140d | ||
332 | U Atom = 0xb01 | ||
333 | Ul Atom = 0x8a02 | ||
334 | Usemap Atom = 0x51106 | ||
335 | Value Atom = 0x4005 | ||
336 | Var Atom = 0x11503 | ||
337 | Video Atom = 0x28105 | ||
338 | Wbr Atom = 0x12103 | ||
339 | Width Atom = 0x50705 | ||
340 | Wrap Atom = 0x58704 | ||
341 | Xmp Atom = 0xc103 | ||
342 | ) | ||
343 | |||
344 | const hash0 = 0xc17da63e | ||
345 | |||
346 | const maxAtomLen = 19 | ||
347 | |||
348 | var table = [1 << 9]Atom{ | ||
349 | 0x1: 0x48a0b, // onmousemove | ||
350 | 0x2: 0x5e209, // onwaiting | ||
351 | 0x3: 0x1fa13, // onautocompleteerror | ||
352 | 0x4: 0x5fa06, // prompt | ||
353 | 0x7: 0x5eb07, // optimum | ||
354 | 0x8: 0x1604, // mark | ||
355 | 0xa: 0x5ad07, // itemref | ||
356 | 0xb: 0x4fe0a, // onpageshow | ||
357 | 0xc: 0x57a06, // select | ||
358 | 0xd: 0x17b09, // draggable | ||
359 | 0xe: 0x3e03, // nav | ||
360 | 0xf: 0x17507, // command | ||
361 | 0x11: 0xb01, // u | ||
362 | 0x14: 0x2d507, // headers | ||
363 | 0x15: 0x44a08, // datalist | ||
364 | 0x17: 0x4e04, // samp | ||
365 | 0x1a: 0x3fb09, // onkeydown | ||
366 | 0x1b: 0x55f08, // onscroll | ||
367 | 0x1c: 0x15003, // col | ||
368 | 0x20: 0x3c908, // itemprop | ||
369 | 0x21: 0x2780a, // http-equiv | ||
370 | 0x22: 0x61d03, // sup | ||
371 | 0x24: 0x1d008, // required | ||
372 | 0x2b: 0x25e07, // preload | ||
373 | 0x2c: 0x6040d, // onbeforeprint | ||
374 | 0x2d: 0x3600b, // ondragenter | ||
375 | 0x2e: 0x50902, // dt | ||
376 | 0x2f: 0x5a708, // onsubmit | ||
377 | 0x30: 0x27002, // hr | ||
378 | 0x31: 0x32f0d, // oncontextmenu | ||
379 | 0x33: 0x29c05, // image | ||
380 | 0x34: 0x50d07, // onpause | ||
381 | 0x35: 0x25906, // hgroup | ||
382 | 0x36: 0x7704, // ping | ||
383 | 0x37: 0x57808, // onselect | ||
384 | 0x3a: 0x11303, // div | ||
385 | 0x3b: 0x1fa0e, // onautocomplete | ||
386 | 0x40: 0x2eb02, // mi | ||
387 | 0x41: 0x31c08, // seamless | ||
388 | 0x42: 0x2807, // charset | ||
389 | 0x43: 0x8502, // id | ||
390 | 0x44: 0x5200a, // onpopstate | ||
391 | 0x45: 0x3ef03, // del | ||
392 | 0x46: 0x2cb07, // marquee | ||
393 | 0x47: 0x3309, // accesskey | ||
394 | 0x49: 0x8d06, // footer | ||
395 | 0x4a: 0x44e04, // list | ||
396 | 0x4b: 0x2b005, // ismap | ||
397 | 0x51: 0x33804, // menu | ||
398 | 0x52: 0x2f04, // body | ||
399 | 0x55: 0x9a08, // frameset | ||
400 | 0x56: 0x54a07, // onreset | ||
401 | 0x57: 0x12705, // blink | ||
402 | 0x58: 0xa105, // title | ||
403 | 0x59: 0x38807, // article | ||
404 | 0x5b: 0x22e02, // th | ||
405 | 0x5d: 0x13101, // q | ||
406 | 0x5e: 0x3cf04, // open | ||
407 | 0x5f: 0x2fa04, // area | ||
408 | 0x61: 0x44206, // onload | ||
409 | 0x62: 0xda04, // font | ||
410 | 0x63: 0xd604, // base | ||
411 | 0x64: 0x16207, // colspan | ||
412 | 0x65: 0x53707, // keytype | ||
413 | 0x66: 0x11e02, // dl | ||
414 | 0x68: 0x1b008, // fieldset | ||
415 | 0x6a: 0x2eb03, // min | ||
416 | 0x6b: 0x11503, // var | ||
417 | 0x6f: 0x2d506, // header | ||
418 | 0x70: 0x13f02, // rt | ||
419 | 0x71: 0x15008, // colgroup | ||
420 | 0x72: 0x23502, // mn | ||
421 | 0x74: 0x13a07, // onabort | ||
422 | 0x75: 0x3906, // keygen | ||
423 | 0x76: 0x4c209, // onoffline | ||
424 | 0x77: 0x21f09, // challenge | ||
425 | 0x78: 0x2b203, // map | ||
426 | 0x7a: 0x2e902, // h4 | ||
427 | 0x7b: 0x3b607, // onerror | ||
428 | 0x7c: 0x2e109, // maxlength | ||
429 | 0x7d: 0x2f505, // mtext | ||
430 | 0x7e: 0xbb07, // sandbox | ||
431 | 0x7f: 0x58b06, // onsort | ||
432 | 0x80: 0x100a, // malignmark | ||
433 | 0x81: 0x45d04, // meta | ||
434 | 0x82: 0x7b05, // async | ||
435 | 0x83: 0x2a702, // h3 | ||
436 | 0x84: 0x26702, // dd | ||
437 | 0x85: 0x27004, // href | ||
438 | 0x86: 0x6e0a, // mediagroup | ||
439 | 0x87: 0x19406, // coords | ||
440 | 0x88: 0x41107, // srclang | ||
441 | 0x89: 0x34d0a, // ondblclick | ||
442 | 0x8a: 0x4005, // value | ||
443 | 0x8c: 0xe908, // oncancel | ||
444 | 0x8e: 0x3230a, // spellcheck | ||
445 | 0x8f: 0x9a05, // frame | ||
446 | 0x91: 0x12403, // big | ||
447 | 0x94: 0x1f606, // action | ||
448 | 0x95: 0x6903, // dir | ||
449 | 0x97: 0x2fb08, // readonly | ||
450 | 0x99: 0x42d05, // table | ||
451 | 0x9a: 0x61607, // summary | ||
452 | 0x9b: 0x12103, // wbr | ||
453 | 0x9c: 0x30a, // radiogroup | ||
454 | 0x9d: 0x6c04, // name | ||
455 | 0x9f: 0x62306, // system | ||
456 | 0xa1: 0x15d05, // color | ||
457 | 0xa2: 0x7f06, // canvas | ||
458 | 0xa3: 0x25504, // html | ||
459 | 0xa5: 0x56f09, // onseeking | ||
460 | 0xac: 0x4f905, // shape | ||
461 | 0xad: 0x25f03, // rel | ||
462 | 0xae: 0x28510, // oncanplaythrough | ||
463 | 0xaf: 0x3760a, // ondragover | ||
464 | 0xb0: 0x62608, // template | ||
465 | 0xb1: 0x1d80d, // foreignObject | ||
466 | 0xb3: 0x9204, // rows | ||
467 | 0xb6: 0x44e07, // listing | ||
468 | 0xb7: 0x49c06, // output | ||
469 | 0xb9: 0x3310b, // contextmenu | ||
470 | 0xbb: 0x11f03, // low | ||
471 | 0xbc: 0x1c602, // rp | ||
472 | 0xbd: 0x5bb09, // onsuspend | ||
473 | 0xbe: 0x13606, // button | ||
474 | 0xbf: 0x4db04, // desc | ||
475 | 0xc1: 0x4e207, // section | ||
476 | 0xc2: 0x52a0a, // onprogress | ||
477 | 0xc3: 0x59e09, // onstorage | ||
478 | 0xc4: 0x2d204, // math | ||
479 | 0xc5: 0x4503, // alt | ||
480 | 0xc7: 0x8a02, // ul | ||
481 | 0xc8: 0x5107, // pattern | ||
482 | 0xc9: 0x4b60c, // onmousewheel | ||
483 | 0xca: 0x35709, // ondragend | ||
484 | 0xcb: 0xaf04, // ruby | ||
485 | 0xcc: 0xc01, // p | ||
486 | 0xcd: 0x31707, // onclose | ||
487 | 0xce: 0x24205, // meter | ||
488 | 0xcf: 0x11807, // bgsound | ||
489 | 0xd2: 0x25106, // height | ||
490 | 0xd4: 0x101, // b | ||
491 | 0xd5: 0x2c308, // itemtype | ||
492 | 0xd8: 0x1bb07, // caption | ||
493 | 0xd9: 0x10c08, // disabled | ||
494 | 0xdb: 0x33808, // menuitem | ||
495 | 0xdc: 0x62003, // svg | ||
496 | 0xdd: 0x18f05, // small | ||
497 | 0xde: 0x44a04, // data | ||
498 | 0xe0: 0x4cb08, // ononline | ||
499 | 0xe1: 0x2a206, // mglyph | ||
500 | 0xe3: 0x6505, // embed | ||
501 | 0xe4: 0x10502, // tr | ||
502 | 0xe5: 0x46b0b, // onloadstart | ||
503 | 0xe7: 0x3c306, // srcdoc | ||
504 | 0xeb: 0x5c408, // ontoggle | ||
505 | 0xed: 0xe703, // bdo | ||
506 | 0xee: 0x4702, // td | ||
507 | 0xef: 0x8305, // aside | ||
508 | 0xf0: 0x29402, // h2 | ||
509 | 0xf1: 0x52c08, // progress | ||
510 | 0xf2: 0x12c0a, // blockquote | ||
511 | 0xf4: 0xf005, // label | ||
512 | 0xf5: 0x601, // i | ||
513 | 0xf7: 0x9207, // rowspan | ||
514 | 0xfb: 0x51709, // onplaying | ||
515 | 0xfd: 0x2a103, // img | ||
516 | 0xfe: 0xf608, // optgroup | ||
517 | 0xff: 0x42307, // content | ||
518 | 0x101: 0x53e0c, // onratechange | ||
519 | 0x103: 0x3da0c, // onhashchange | ||
520 | 0x104: 0x4807, // details | ||
521 | 0x106: 0x40008, // download | ||
522 | 0x109: 0x14009, // translate | ||
523 | 0x10b: 0x4230f, // contenteditable | ||
524 | 0x10d: 0x36b0b, // ondragleave | ||
525 | 0x10e: 0x2106, // accept | ||
526 | 0x10f: 0x57a08, // selected | ||
527 | 0x112: 0x1f20a, // formaction | ||
528 | 0x113: 0x5b506, // center | ||
529 | 0x115: 0x45510, // onloadedmetadata | ||
530 | 0x116: 0x12804, // link | ||
531 | 0x117: 0xdd04, // time | ||
532 | 0x118: 0x19f0b, // crossorigin | ||
533 | 0x119: 0x3bd07, // onfocus | ||
534 | 0x11a: 0x58704, // wrap | ||
535 | 0x11b: 0x42204, // icon | ||
536 | 0x11d: 0x28105, // video | ||
537 | 0x11e: 0x4de05, // class | ||
538 | 0x121: 0x5d40e, // onvolumechange | ||
539 | 0x122: 0xaa06, // onblur | ||
540 | 0x123: 0x2b909, // itemscope | ||
541 | 0x124: 0x61105, // style | ||
542 | 0x127: 0x41e06, // public | ||
543 | 0x129: 0x2320e, // formnovalidate | ||
544 | 0x12a: 0x58206, // onshow | ||
545 | 0x12c: 0x51706, // onplay | ||
546 | 0x12d: 0x3c804, // cite | ||
547 | 0x12e: 0x2bc02, // ms | ||
548 | 0x12f: 0xdb0c, // ontimeupdate | ||
549 | 0x130: 0x10904, // kind | ||
550 | 0x131: 0x2470a, // formtarget | ||
551 | 0x135: 0x3af07, // onended | ||
552 | 0x136: 0x26506, // hidden | ||
553 | 0x137: 0x2c01, // s | ||
554 | 0x139: 0x2280a, // formmethod | ||
555 | 0x13a: 0x3e805, // input | ||
556 | 0x13c: 0x50b02, // h6 | ||
557 | 0x13d: 0xc902, // ol | ||
558 | 0x13e: 0x3420b, // oncuechange | ||
559 | 0x13f: 0x1e50d, // foreignobject | ||
560 | 0x143: 0x4e70e, // onbeforeunload | ||
561 | 0x144: 0x2bd05, // scope | ||
562 | 0x145: 0x39609, // onemptied | ||
563 | 0x146: 0x14b05, // defer | ||
564 | 0x147: 0xc103, // xmp | ||
565 | 0x148: 0x39f10, // ondurationchange | ||
566 | 0x149: 0x1903, // kbd | ||
567 | 0x14c: 0x47609, // onmessage | ||
568 | 0x14d: 0x60006, // option | ||
569 | 0x14e: 0x2eb09, // minlength | ||
570 | 0x14f: 0x32807, // checked | ||
571 | 0x150: 0xce08, // autoplay | ||
572 | 0x152: 0x202, // br | ||
573 | 0x153: 0x2360a, // novalidate | ||
574 | 0x156: 0x6307, // noembed | ||
575 | 0x159: 0x31007, // onclick | ||
576 | 0x15a: 0x47f0b, // onmousedown | ||
577 | 0x15b: 0x3a708, // onchange | ||
578 | 0x15e: 0x3f209, // oninvalid | ||
579 | 0x15f: 0x2bd06, // scoped | ||
580 | 0x160: 0x18808, // controls | ||
581 | 0x161: 0x30b05, // muted | ||
582 | 0x162: 0x58d08, // sortable | ||
583 | 0x163: 0x51106, // usemap | ||
584 | 0x164: 0x1b80a, // figcaption | ||
585 | 0x165: 0x35706, // ondrag | ||
586 | 0x166: 0x26b04, // high | ||
587 | 0x168: 0x3c303, // src | ||
588 | 0x169: 0x15706, // poster | ||
589 | 0x16b: 0x1670e, // annotation-xml | ||
590 | 0x16c: 0x5f704, // step | ||
591 | 0x16d: 0x4, // abbr | ||
592 | 0x16e: 0x1b06, // dialog | ||
593 | 0x170: 0x1202, // li | ||
594 | 0x172: 0x3ed02, // mo | ||
595 | 0x175: 0x1d803, // for | ||
596 | 0x176: 0x1a803, // ins | ||
597 | 0x178: 0x55504, // size | ||
598 | 0x179: 0x43210, // onlanguagechange | ||
599 | 0x17a: 0x8607, // default | ||
600 | 0x17b: 0x1a03, // bdi | ||
601 | 0x17c: 0x4d30a, // onpagehide | ||
602 | 0x17d: 0x6907, // dirname | ||
603 | 0x17e: 0x21404, // type | ||
604 | 0x17f: 0x1f204, // form | ||
605 | 0x181: 0x28509, // oncanplay | ||
606 | 0x182: 0x6103, // dfn | ||
607 | 0x183: 0x46308, // tabindex | ||
608 | 0x186: 0x6502, // em | ||
609 | 0x187: 0x27404, // lang | ||
610 | 0x189: 0x39108, // dropzone | ||
611 | 0x18a: 0x4080a, // onkeypress | ||
612 | 0x18b: 0x23c08, // datetime | ||
613 | 0x18c: 0x16204, // cols | ||
614 | 0x18d: 0x1, // a | ||
615 | 0x18e: 0x4420c, // onloadeddata | ||
616 | 0x190: 0xa605, // audio | ||
617 | 0x192: 0x2e05, // tbody | ||
618 | 0x193: 0x22c06, // method | ||
619 | 0x195: 0xf404, // loop | ||
620 | 0x196: 0x29606, // iframe | ||
621 | 0x198: 0x2d504, // head | ||
622 | 0x19e: 0x5f108, // manifest | ||
623 | 0x19f: 0xb309, // autofocus | ||
624 | 0x1a0: 0x14904, // code | ||
625 | 0x1a1: 0x55906, // strong | ||
626 | 0x1a2: 0x30308, // multiple | ||
627 | 0x1a3: 0xc05, // param | ||
628 | 0x1a6: 0x21107, // enctype | ||
629 | 0x1a7: 0x5b304, // face | ||
630 | 0x1a8: 0xfd09, // plaintext | ||
631 | 0x1a9: 0x26e02, // h1 | ||
632 | 0x1aa: 0x59509, // onstalled | ||
633 | 0x1ad: 0x3d406, // script | ||
634 | 0x1ae: 0x2db06, // spacer | ||
635 | 0x1af: 0x55108, // onresize | ||
636 | 0x1b0: 0x4a20b, // onmouseover | ||
637 | 0x1b1: 0x5cc08, // onunload | ||
638 | 0x1b2: 0x56708, // onseeked | ||
639 | 0x1b4: 0x2140d, // typemustmatch | ||
640 | 0x1b5: 0x1cc06, // figure | ||
641 | 0x1b6: 0x4950a, // onmouseout | ||
642 | 0x1b7: 0x25e03, // pre | ||
643 | 0x1b8: 0x50705, // width | ||
644 | 0x1b9: 0x19906, // sorted | ||
645 | 0x1bb: 0x5704, // nobr | ||
646 | 0x1be: 0x5302, // tt | ||
647 | 0x1bf: 0x1105, // align | ||
648 | 0x1c0: 0x3e607, // oninput | ||
649 | 0x1c3: 0x41807, // onkeyup | ||
650 | 0x1c6: 0x1c00c, // onafterprint | ||
651 | 0x1c7: 0x210e, // accept-charset | ||
652 | 0x1c8: 0x33c06, // itemid | ||
653 | 0x1c9: 0x3e809, // inputmode | ||
654 | 0x1cb: 0x53306, // strike | ||
655 | 0x1cc: 0x5a903, // sub | ||
656 | 0x1cd: 0x10505, // track | ||
657 | 0x1ce: 0x38605, // start | ||
658 | 0x1d0: 0xd608, // basefont | ||
659 | 0x1d6: 0x1aa06, // source | ||
660 | 0x1d7: 0x18206, // legend | ||
661 | 0x1d8: 0x2d405, // thead | ||
662 | 0x1da: 0x8c05, // tfoot | ||
663 | 0x1dd: 0x1ec06, // object | ||
664 | 0x1de: 0x6e05, // media | ||
665 | 0x1df: 0x1670a, // annotation | ||
666 | 0x1e0: 0x20d0b, // formenctype | ||
667 | 0x1e2: 0x3d208, // noscript | ||
668 | 0x1e4: 0x55505, // sizes | ||
669 | 0x1e5: 0x1fc0c, // autocomplete | ||
670 | 0x1e6: 0x9504, // span | ||
671 | 0x1e7: 0x9808, // noframes | ||
672 | 0x1e8: 0x24b06, // target | ||
673 | 0x1e9: 0x38f06, // ondrop | ||
674 | 0x1ea: 0x2b306, // applet | ||
675 | 0x1ec: 0x5a08, // reversed | ||
676 | 0x1f0: 0x2a907, // isindex | ||
677 | 0x1f3: 0x27008, // hreflang | ||
678 | 0x1f5: 0x2f302, // h5 | ||
679 | 0x1f6: 0x4f307, // address | ||
680 | 0x1fa: 0x2e103, // max | ||
681 | 0x1fb: 0xc30b, // placeholder | ||
682 | 0x1fc: 0x2f608, // textarea | ||
683 | 0x1fe: 0x4ad09, // onmouseup | ||
684 | 0x1ff: 0x3800b, // ondragstart | ||
685 | } | ||
686 | |||
687 | const atomText = "abbradiogrouparamalignmarkbdialogaccept-charsetbodyaccesskey" + | ||
688 | "genavaluealtdetailsampatternobreversedfnoembedirnamediagroup" + | ||
689 | "ingasyncanvasidefaultfooterowspanoframesetitleaudionblurubya" + | ||
690 | "utofocusandboxmplaceholderautoplaybasefontimeupdatebdoncance" + | ||
691 | "labelooptgrouplaintextrackindisabledivarbgsoundlowbrbigblink" + | ||
692 | "blockquotebuttonabortranslatecodefercolgroupostercolorcolspa" + | ||
693 | "nnotation-xmlcommandraggablegendcontrolsmallcoordsortedcross" + | ||
694 | "originsourcefieldsetfigcaptionafterprintfigurequiredforeignO" + | ||
695 | "bjectforeignobjectformactionautocompleteerrorformenctypemust" + | ||
696 | "matchallengeformmethodformnovalidatetimeterformtargetheightm" + | ||
697 | "lhgroupreloadhiddenhigh1hreflanghttp-equivideoncanplaythroug" + | ||
698 | "h2iframeimageimglyph3isindexismappletitemscopeditemtypemarqu" + | ||
699 | "eematheaderspacermaxlength4minlength5mtextareadonlymultiplem" + | ||
700 | "utedonclickoncloseamlesspellcheckedoncontextmenuitemidoncuec" + | ||
701 | "hangeondblclickondragendondragenterondragleaveondragoverondr" + | ||
702 | "agstarticleondropzonemptiedondurationchangeonendedonerroronf" + | ||
703 | "ocusrcdocitempropenoscriptonhashchangeoninputmodeloninvalido" + | ||
704 | "nkeydownloadonkeypressrclangonkeyupublicontenteditableonlang" + | ||
705 | "uagechangeonloadeddatalistingonloadedmetadatabindexonloadsta" + | ||
706 | "rtonmessageonmousedownonmousemoveonmouseoutputonmouseoveronm" + | ||
707 | "ouseuponmousewheelonofflineononlineonpagehidesclassectionbef" + | ||
708 | "oreunloaddresshapeonpageshowidth6onpausemaponplayingonpopsta" + | ||
709 | "teonprogresstrikeytypeonratechangeonresetonresizestrongonscr" + | ||
710 | "ollonseekedonseekingonselectedonshowraponsortableonstalledon" + | ||
711 | "storageonsubmitemrefacenteronsuspendontoggleonunloadonvolume" + | ||
712 | "changeonwaitingoptimumanifestepromptoptionbeforeprintstylesu" + | ||
713 | "mmarysupsvgsystemplate" | ||
diff --git a/vendor/golang.org/x/net/html/const.go b/vendor/golang.org/x/net/html/const.go new file mode 100644 index 0000000..52f651f --- /dev/null +++ b/vendor/golang.org/x/net/html/const.go | |||
@@ -0,0 +1,102 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package html | ||
6 | |||
7 | // Section 12.2.3.2 of the HTML5 specification says "The following elements | ||
8 | // have varying levels of special parsing rules". | ||
9 | // https://html.spec.whatwg.org/multipage/syntax.html#the-stack-of-open-elements | ||
10 | var isSpecialElementMap = map[string]bool{ | ||
11 | "address": true, | ||
12 | "applet": true, | ||
13 | "area": true, | ||
14 | "article": true, | ||
15 | "aside": true, | ||
16 | "base": true, | ||
17 | "basefont": true, | ||
18 | "bgsound": true, | ||
19 | "blockquote": true, | ||
20 | "body": true, | ||
21 | "br": true, | ||
22 | "button": true, | ||
23 | "caption": true, | ||
24 | "center": true, | ||
25 | "col": true, | ||
26 | "colgroup": true, | ||
27 | "dd": true, | ||
28 | "details": true, | ||
29 | "dir": true, | ||
30 | "div": true, | ||
31 | "dl": true, | ||
32 | "dt": true, | ||
33 | "embed": true, | ||
34 | "fieldset": true, | ||
35 | "figcaption": true, | ||
36 | "figure": true, | ||
37 | "footer": true, | ||
38 | "form": true, | ||
39 | "frame": true, | ||
40 | "frameset": true, | ||
41 | "h1": true, | ||
42 | "h2": true, | ||
43 | "h3": true, | ||
44 | "h4": true, | ||
45 | "h5": true, | ||
46 | "h6": true, | ||
47 | "head": true, | ||
48 | "header": true, | ||
49 | "hgroup": true, | ||
50 | "hr": true, | ||
51 | "html": true, | ||
52 | "iframe": true, | ||
53 | "img": true, | ||
54 | "input": true, | ||
55 | "isindex": true, | ||
56 | "li": true, | ||
57 | "link": true, | ||
58 | "listing": true, | ||
59 | "marquee": true, | ||
60 | "menu": true, | ||
61 | "meta": true, | ||
62 | "nav": true, | ||
63 | "noembed": true, | ||
64 | "noframes": true, | ||
65 | "noscript": true, | ||
66 | "object": true, | ||
67 | "ol": true, | ||
68 | "p": true, | ||
69 | "param": true, | ||
70 | "plaintext": true, | ||
71 | "pre": true, | ||
72 | "script": true, | ||
73 | "section": true, | ||
74 | "select": true, | ||
75 | "source": true, | ||
76 | "style": true, | ||
77 | "summary": true, | ||
78 | "table": true, | ||
79 | "tbody": true, | ||
80 | "td": true, | ||
81 | "template": true, | ||
82 | "textarea": true, | ||
83 | "tfoot": true, | ||
84 | "th": true, | ||
85 | "thead": true, | ||
86 | "title": true, | ||
87 | "tr": true, | ||
88 | "track": true, | ||
89 | "ul": true, | ||
90 | "wbr": true, | ||
91 | "xmp": true, | ||
92 | } | ||
93 | |||
94 | func isSpecialElement(element *Node) bool { | ||
95 | switch element.Namespace { | ||
96 | case "", "html": | ||
97 | return isSpecialElementMap[element.Data] | ||
98 | case "svg": | ||
99 | return element.Data == "foreignObject" | ||
100 | } | ||
101 | return false | ||
102 | } | ||
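isSpecialElement is unexported and only consulted by the parser, but its namespace-aware dispatch is easy to mirror from the caller's side. The sketch below is illustrative only (isSpecial and the trimmed-down special set are not part of the package API); it walks a parsed tree and applies the rule above: HTML elements via a name set, SVG only for foreignObject.

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"golang.org/x/net/html"
)

// special is an illustrative subset of the table above.
var special = map[string]bool{"body": true, "div": true, "p": true, "table": true}

// isSpecial mirrors the dispatch in isSpecialElement for this subset.
func isSpecial(n *html.Node) bool {
	switch n.Namespace {
	case "", "html":
		return special[n.Data]
	case "svg":
		return n.Data == "foreignObject"
	}
	return false
}

func main() {
	doc, err := html.Parse(strings.NewReader("<div><p>hi</p><svg><foreignObject/></svg></div>"))
	if err != nil {
		log.Fatal(err)
	}
	var walk func(*html.Node)
	walk = func(n *html.Node) {
		if n.Type == html.ElementNode && isSpecial(n) {
			fmt.Println("special:", n.Data)
		}
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			walk(c)
		}
	}
	walk(doc)
}
```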
diff --git a/vendor/golang.org/x/net/html/doc.go b/vendor/golang.org/x/net/html/doc.go new file mode 100644 index 0000000..94f4968 --- /dev/null +++ b/vendor/golang.org/x/net/html/doc.go | |||
@@ -0,0 +1,106 @@ | |||
1 | // Copyright 2010 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | /* | ||
6 | Package html implements an HTML5-compliant tokenizer and parser. | ||
7 | |||
8 | Tokenization is done by creating a Tokenizer for an io.Reader r. It is the | ||
9 | caller's responsibility to ensure that r provides UTF-8 encoded HTML. | ||
10 | |||
11 | z := html.NewTokenizer(r) | ||
12 | |||
13 | Given a Tokenizer z, the HTML is tokenized by repeatedly calling z.Next(), | ||
14 | which parses the next token and returns its type, or an error: | ||
15 | |||
16 | for { | ||
17 | tt := z.Next() | ||
18 | if tt == html.ErrorToken { | ||
19 | // ... | ||
20 | return ... | ||
21 | } | ||
22 | // Process the current token. | ||
23 | } | ||
24 | |||
25 | There are two APIs for retrieving the current token. The high-level API is to | ||
26 | call Token; the low-level API is to call Text or TagName / TagAttr. Both APIs | ||
27 | allow optionally calling Raw after Next but before Token, Text, TagName, or | ||
28 | TagAttr. In EBNF notation, the valid call sequence per token is: | ||
29 | |||
30 | Next {Raw} [ Token | Text | TagName {TagAttr} ] | ||
31 | |||
32 | Token returns an independent data structure that completely describes a token. | ||
33 | Entities (such as "&lt;") are unescaped, tag names and attribute keys are | ||
34 | lower-cased, and attributes are collected into a []Attribute. For example: | ||
35 | |||
36 | for { | ||
37 | if z.Next() == html.ErrorToken { | ||
38 | // Returning io.EOF indicates success. | ||
39 | return z.Err() | ||
40 | } | ||
41 | emitToken(z.Token()) | ||
42 | } | ||
43 | |||
44 | The low-level API performs fewer allocations and copies, but the contents of | ||
45 | the []byte values returned by Text, TagName and TagAttr may change on the next | ||
46 | call to Next. For example, to extract an HTML page's anchor text: | ||
47 | |||
48 | depth := 0 | ||
49 | for { | ||
50 | tt := z.Next() | ||
51 | switch tt { | ||
52 | case ErrorToken: | ||
53 | return z.Err() | ||
54 | case TextToken: | ||
55 | if depth > 0 { | ||
56 | // emitBytes should copy the []byte it receives, | ||
57 | // if it doesn't process it immediately. | ||
58 | emitBytes(z.Text()) | ||
59 | } | ||
60 | case StartTagToken, EndTagToken: | ||
61 | tn, _ := z.TagName() | ||
62 | if len(tn) == 1 && tn[0] == 'a' { | ||
63 | if tt == StartTagToken { | ||
64 | depth++ | ||
65 | } else { | ||
66 | depth-- | ||
67 | } | ||
68 | } | ||
69 | } | ||
70 | } | ||
71 | |||
72 | Parsing is done by calling Parse with an io.Reader, which returns the root of | ||
73 | the parse tree (the document element) as a *Node. It is the caller's | ||
74 | responsibility to ensure that the Reader provides UTF-8 encoded HTML. For | ||
75 | example, to process each anchor node in depth-first order: | ||
76 | |||
77 | doc, err := html.Parse(r) | ||
78 | if err != nil { | ||
79 | // ... | ||
80 | } | ||
81 | var f func(*html.Node) | ||
82 | f = func(n *html.Node) { | ||
83 | if n.Type == html.ElementNode && n.Data == "a" { | ||
84 | // Do something with n... | ||
85 | } | ||
86 | for c := n.FirstChild; c != nil; c = c.NextSibling { | ||
87 | f(c) | ||
88 | } | ||
89 | } | ||
90 | f(doc) | ||
91 | |||
92 | The relevant specifications include: | ||
93 | https://html.spec.whatwg.org/multipage/syntax.html and | ||
94 | https://html.spec.whatwg.org/multipage/syntax.html#tokenization | ||
95 | */ | ||
96 | package html // import "golang.org/x/net/html" | ||
97 | |||
98 | // The tokenization algorithm implemented by this package is not a line-by-line | ||
99 | // transliteration of the relatively verbose state-machine in the WHATWG | ||
100 | // specification. A more direct approach is used instead, where the program | ||
101 | // counter implies the state, such as whether it is tokenizing a tag or a text | ||
102 | // node. Specification compliance is verified by checking expected and actual | ||
103 | // outputs over a test suite rather than aiming for algorithmic fidelity. | ||
104 | |||
105 | // TODO(nigeltao): Does a DOM API belong in this package or a separate one? | ||
106 | // TODO(nigeltao): How does parsing interact with a JavaScript engine? | ||
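The loops in the package comment are intentionally partial. Below is a self-contained, runnable variant of the anchor-scanning idea, built on the higher-level Token API; it prints href attributes from anchor tags, and the fixed input string is only a placeholder:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"strings"

	"golang.org/x/net/html"
)

func main() {
	const page = `<p>Visit <a href="https://golang.org/">Go</a> and <a href="https://pkg.go.dev/">pkg.go.dev</a>.</p>`
	z := html.NewTokenizer(strings.NewReader(page))
	for {
		switch z.Next() {
		case html.ErrorToken:
			// io.EOF means the whole input was tokenized successfully.
			if err := z.Err(); err != io.EOF {
				log.Fatal(err)
			}
			return
		case html.StartTagToken:
			t := z.Token()
			if t.Data == "a" {
				for _, a := range t.Attr {
					if a.Key == "href" {
						fmt.Println(a.Val)
					}
				}
			}
		}
	}
}
```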
diff --git a/vendor/golang.org/x/net/html/doctype.go b/vendor/golang.org/x/net/html/doctype.go new file mode 100644 index 0000000..c484e5a --- /dev/null +++ b/vendor/golang.org/x/net/html/doctype.go | |||
@@ -0,0 +1,156 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package html | ||
6 | |||
7 | import ( | ||
8 | "strings" | ||
9 | ) | ||
10 | |||
11 | // parseDoctype parses the data from a DoctypeToken into a name, | ||
12 | // public identifier, and system identifier. It returns a Node whose Type | ||
13 | // is DoctypeNode, whose Data is the name, and which has attributes | ||
14 | // named "system" and "public" for the two identifiers if they were present. | ||
15 | // quirks is whether the document should be parsed in "quirks mode". | ||
16 | func parseDoctype(s string) (n *Node, quirks bool) { | ||
17 | n = &Node{Type: DoctypeNode} | ||
18 | |||
19 | // Find the name. | ||
20 | space := strings.IndexAny(s, whitespace) | ||
21 | if space == -1 { | ||
22 | space = len(s) | ||
23 | } | ||
24 | n.Data = s[:space] | ||
25 | // The comparison to "html" is case-sensitive. | ||
26 | if n.Data != "html" { | ||
27 | quirks = true | ||
28 | } | ||
29 | n.Data = strings.ToLower(n.Data) | ||
30 | s = strings.TrimLeft(s[space:], whitespace) | ||
31 | |||
32 | if len(s) < 6 { | ||
33 | // It can't start with "PUBLIC" or "SYSTEM". | ||
34 | // Ignore the rest of the string. | ||
35 | return n, quirks || s != "" | ||
36 | } | ||
37 | |||
38 | key := strings.ToLower(s[:6]) | ||
39 | s = s[6:] | ||
40 | for key == "public" || key == "system" { | ||
41 | s = strings.TrimLeft(s, whitespace) | ||
42 | if s == "" { | ||
43 | break | ||
44 | } | ||
45 | quote := s[0] | ||
46 | if quote != '"' && quote != '\'' { | ||
47 | break | ||
48 | } | ||
49 | s = s[1:] | ||
50 | q := strings.IndexRune(s, rune(quote)) | ||
51 | var id string | ||
52 | if q == -1 { | ||
53 | id = s | ||
54 | s = "" | ||
55 | } else { | ||
56 | id = s[:q] | ||
57 | s = s[q+1:] | ||
58 | } | ||
59 | n.Attr = append(n.Attr, Attribute{Key: key, Val: id}) | ||
60 | if key == "public" { | ||
61 | key = "system" | ||
62 | } else { | ||
63 | key = "" | ||
64 | } | ||
65 | } | ||
66 | |||
67 | if key != "" || s != "" { | ||
68 | quirks = true | ||
69 | } else if len(n.Attr) > 0 { | ||
70 | if n.Attr[0].Key == "public" { | ||
71 | public := strings.ToLower(n.Attr[0].Val) | ||
72 | switch public { | ||
73 | case "-//w3o//dtd w3 html strict 3.0//en//", "-/w3d/dtd html 4.0 transitional/en", "html": | ||
74 | quirks = true | ||
75 | default: | ||
76 | for _, q := range quirkyIDs { | ||
77 | if strings.HasPrefix(public, q) { | ||
78 | quirks = true | ||
79 | break | ||
80 | } | ||
81 | } | ||
82 | } | ||
83 | // The following two public IDs only cause quirks mode if there is no system ID. | ||
84 | if len(n.Attr) == 1 && (strings.HasPrefix(public, "-//w3c//dtd html 4.01 frameset//") || | ||
85 | strings.HasPrefix(public, "-//w3c//dtd html 4.01 transitional//")) { | ||
86 | quirks = true | ||
87 | } | ||
88 | } | ||
89 | if lastAttr := n.Attr[len(n.Attr)-1]; lastAttr.Key == "system" && | ||
90 | strings.ToLower(lastAttr.Val) == "http://www.ibm.com/data/dtd/v11/ibmxhtml1-transitional.dtd" { | ||
91 | quirks = true | ||
92 | } | ||
93 | } | ||
94 | |||
95 | return n, quirks | ||
96 | } | ||
97 | |||
98 | // quirkyIDs is a list of public doctype identifiers that cause a document | ||
99 | // to be interpreted in quirks mode. The identifiers should be in lower case. | ||
100 | var quirkyIDs = []string{ | ||
101 | "+//silmaril//dtd html pro v0r11 19970101//", | ||
102 | "-//advasoft ltd//dtd html 3.0 aswedit + extensions//", | ||
103 | "-//as//dtd html 3.0 aswedit + extensions//", | ||
104 | "-//ietf//dtd html 2.0 level 1//", | ||
105 | "-//ietf//dtd html 2.0 level 2//", | ||
106 | "-//ietf//dtd html 2.0 strict level 1//", | ||
107 | "-//ietf//dtd html 2.0 strict level 2//", | ||
108 | "-//ietf//dtd html 2.0 strict//", | ||
109 | "-//ietf//dtd html 2.0//", | ||
110 | "-//ietf//dtd html 2.1e//", | ||
111 | "-//ietf//dtd html 3.0//", | ||
112 | "-//ietf//dtd html 3.2 final//", | ||
113 | "-//ietf//dtd html 3.2//", | ||
114 | "-//ietf//dtd html 3//", | ||
115 | "-//ietf//dtd html level 0//", | ||
116 | "-//ietf//dtd html level 1//", | ||
117 | "-//ietf//dtd html level 2//", | ||
118 | "-//ietf//dtd html level 3//", | ||
119 | "-//ietf//dtd html strict level 0//", | ||
120 | "-//ietf//dtd html strict level 1//", | ||
121 | "-//ietf//dtd html strict level 2//", | ||
122 | "-//ietf//dtd html strict level 3//", | ||
123 | "-//ietf//dtd html strict//", | ||
124 | "-//ietf//dtd html//", | ||
125 | "-//metrius//dtd metrius presentational//", | ||
126 | "-//microsoft//dtd internet explorer 2.0 html strict//", | ||
127 | "-//microsoft//dtd internet explorer 2.0 html//", | ||
128 | "-//microsoft//dtd internet explorer 2.0 tables//", | ||
129 | "-//microsoft//dtd internet explorer 3.0 html strict//", | ||
130 | "-//microsoft//dtd internet explorer 3.0 html//", | ||
131 | "-//microsoft//dtd internet explorer 3.0 tables//", | ||
132 | "-//netscape comm. corp.//dtd html//", | ||
133 | "-//netscape comm. corp.//dtd strict html//", | ||
134 | "-//o'reilly and associates//dtd html 2.0//", | ||
135 | "-//o'reilly and associates//dtd html extended 1.0//", | ||
136 | "-//o'reilly and associates//dtd html extended relaxed 1.0//", | ||
137 | "-//softquad software//dtd hotmetal pro 6.0::19990601::extensions to html 4.0//", | ||
138 | "-//softquad//dtd hotmetal pro 4.0::19971010::extensions to html 4.0//", | ||
139 | "-//spyglass//dtd html 2.0 extended//", | ||
140 | "-//sq//dtd html 2.0 hotmetal + extensions//", | ||
141 | "-//sun microsystems corp.//dtd hotjava html//", | ||
142 | "-//sun microsystems corp.//dtd hotjava strict html//", | ||
143 | "-//w3c//dtd html 3 1995-03-24//", | ||
144 | "-//w3c//dtd html 3.2 draft//", | ||
145 | "-//w3c//dtd html 3.2 final//", | ||
146 | "-//w3c//dtd html 3.2//", | ||
147 | "-//w3c//dtd html 3.2s draft//", | ||
148 | "-//w3c//dtd html 4.0 frameset//", | ||
149 | "-//w3c//dtd html 4.0 transitional//", | ||
150 | "-//w3c//dtd html experimental 19960712//", | ||
151 | "-//w3c//dtd html experimental 970421//", | ||
152 | "-//w3c//dtd w3 html//", | ||
153 | "-//w3o//dtd w3 html 3.0//", | ||
154 | "-//webtechs//dtd mozilla html 2.0//", | ||
155 | "-//webtechs//dtd mozilla html//", | ||
156 | } | ||
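parseDoctype and quirkyIDs are unexported, so a usage example has to be imagined inside package html. The sketch below is illustrative only; the commented results follow directly from the code above.

```go
package html

// exampleParseDoctype is a sketch, not part of the package: it only compiles
// inside package html because parseDoctype is unexported.
func exampleParseDoctype() {
	// A bare name: no identifiers, and no quirks.
	n, quirks := parseDoctype("html")
	_, _ = n, quirks // n.Data == "html", quirks == false

	// A public identifier with a quirky prefix (and no system identifier):
	// the lower-cased value matches "-//ietf//dtd html 2.0//" in quirkyIDs,
	// so quirks == true, and n.Attr carries {Key: "public", Val: ...}.
	n, quirks = parseDoctype(`html PUBLIC "-//IETF//DTD HTML 2.0//EN"`)
	_, _ = n, quirks
}
```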
diff --git a/vendor/golang.org/x/net/html/entity.go b/vendor/golang.org/x/net/html/entity.go new file mode 100644 index 0000000..a50c04c --- /dev/null +++ b/vendor/golang.org/x/net/html/entity.go | |||
@@ -0,0 +1,2253 @@ | |||
1 | // Copyright 2010 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package html | ||
6 | |||
7 | // All entities that do not end with ';' are 6 or fewer bytes long. | ||
8 | const longestEntityWithoutSemicolon = 6 | ||
9 | |||
10 | // entity is a map from HTML entity names to their values. The semicolon matters: | ||
11 | // https://html.spec.whatwg.org/multipage/syntax.html#named-character-references | ||
12 | // lists both "amp" and "amp;" as two separate entries. | ||
13 | // | ||
14 | // Note that the HTML5 list is larger than the HTML4 list at | ||
15 | // http://www.w3.org/TR/html4/sgml/entities.html | ||
16 | var entity = map[string]rune{ | ||
17 | "AElig;": '\U000000C6', | ||
18 | "AMP;": '\U00000026', | ||
19 | "Aacute;": '\U000000C1', | ||
20 | "Abreve;": '\U00000102', | ||
21 | "Acirc;": '\U000000C2', | ||
22 | "Acy;": '\U00000410', | ||
23 | "Afr;": '\U0001D504', | ||
24 | "Agrave;": '\U000000C0', | ||
25 | "Alpha;": '\U00000391', | ||
26 | "Amacr;": '\U00000100', | ||
27 | "And;": '\U00002A53', | ||
28 | "Aogon;": '\U00000104', | ||
29 | "Aopf;": '\U0001D538', | ||
30 | "ApplyFunction;": '\U00002061', | ||
31 | "Aring;": '\U000000C5', | ||
32 | "Ascr;": '\U0001D49C', | ||
33 | "Assign;": '\U00002254', | ||
34 | "Atilde;": '\U000000C3', | ||
35 | "Auml;": '\U000000C4', | ||
36 | "Backslash;": '\U00002216', | ||
37 | "Barv;": '\U00002AE7', | ||
38 | "Barwed;": '\U00002306', | ||
39 | "Bcy;": '\U00000411', | ||
40 | "Because;": '\U00002235', | ||
41 | "Bernoullis;": '\U0000212C', | ||
42 | "Beta;": '\U00000392', | ||
43 | "Bfr;": '\U0001D505', | ||
44 | "Bopf;": '\U0001D539', | ||
45 | "Breve;": '\U000002D8', | ||
46 | "Bscr;": '\U0000212C', | ||
47 | "Bumpeq;": '\U0000224E', | ||
48 | "CHcy;": '\U00000427', | ||
49 | "COPY;": '\U000000A9', | ||
50 | "Cacute;": '\U00000106', | ||
51 | "Cap;": '\U000022D2', | ||
52 | "CapitalDifferentialD;": '\U00002145', | ||
53 | "Cayleys;": '\U0000212D', | ||
54 | "Ccaron;": '\U0000010C', | ||
55 | "Ccedil;": '\U000000C7', | ||
56 | "Ccirc;": '\U00000108', | ||
57 | "Cconint;": '\U00002230', | ||
58 | "Cdot;": '\U0000010A', | ||
59 | "Cedilla;": '\U000000B8', | ||
60 | "CenterDot;": '\U000000B7', | ||
61 | "Cfr;": '\U0000212D', | ||
62 | "Chi;": '\U000003A7', | ||
63 | "CircleDot;": '\U00002299', | ||
64 | "CircleMinus;": '\U00002296', | ||
65 | "CirclePlus;": '\U00002295', | ||
66 | "CircleTimes;": '\U00002297', | ||
67 | "ClockwiseContourIntegral;": '\U00002232', | ||
68 | "CloseCurlyDoubleQuote;": '\U0000201D', | ||
69 | "CloseCurlyQuote;": '\U00002019', | ||
70 | "Colon;": '\U00002237', | ||
71 | "Colone;": '\U00002A74', | ||
72 | "Congruent;": '\U00002261', | ||
73 | "Conint;": '\U0000222F', | ||
74 | "ContourIntegral;": '\U0000222E', | ||
75 | "Copf;": '\U00002102', | ||
76 | "Coproduct;": '\U00002210', | ||
77 | "CounterClockwiseContourIntegral;": '\U00002233', | ||
78 | "Cross;": '\U00002A2F', | ||
79 | "Cscr;": '\U0001D49E', | ||
80 | "Cup;": '\U000022D3', | ||
81 | "CupCap;": '\U0000224D', | ||
82 | "DD;": '\U00002145', | ||
83 | "DDotrahd;": '\U00002911', | ||
84 | "DJcy;": '\U00000402', | ||
85 | "DScy;": '\U00000405', | ||
86 | "DZcy;": '\U0000040F', | ||
87 | "Dagger;": '\U00002021', | ||
88 | "Darr;": '\U000021A1', | ||
89 | "Dashv;": '\U00002AE4', | ||
90 | "Dcaron;": '\U0000010E', | ||
91 | "Dcy;": '\U00000414', | ||
92 | "Del;": '\U00002207', | ||
93 | "Delta;": '\U00000394', | ||
94 | "Dfr;": '\U0001D507', | ||
95 | "DiacriticalAcute;": '\U000000B4', | ||
96 | "DiacriticalDot;": '\U000002D9', | ||
97 | "DiacriticalDoubleAcute;": '\U000002DD', | ||
98 | "DiacriticalGrave;": '\U00000060', | ||
99 | "DiacriticalTilde;": '\U000002DC', | ||
100 | "Diamond;": '\U000022C4', | ||
101 | "DifferentialD;": '\U00002146', | ||
102 | "Dopf;": '\U0001D53B', | ||
103 | "Dot;": '\U000000A8', | ||
104 | "DotDot;": '\U000020DC', | ||
105 | "DotEqual;": '\U00002250', | ||
106 | "DoubleContourIntegral;": '\U0000222F', | ||
107 | "DoubleDot;": '\U000000A8', | ||
108 | "DoubleDownArrow;": '\U000021D3', | ||
109 | "DoubleLeftArrow;": '\U000021D0', | ||
110 | "DoubleLeftRightArrow;": '\U000021D4', | ||
111 | "DoubleLeftTee;": '\U00002AE4', | ||
112 | "DoubleLongLeftArrow;": '\U000027F8', | ||
113 | "DoubleLongLeftRightArrow;": '\U000027FA', | ||
114 | "DoubleLongRightArrow;": '\U000027F9', | ||
115 | "DoubleRightArrow;": '\U000021D2', | ||
116 | "DoubleRightTee;": '\U000022A8', | ||
117 | "DoubleUpArrow;": '\U000021D1', | ||
118 | "DoubleUpDownArrow;": '\U000021D5', | ||
119 | "DoubleVerticalBar;": '\U00002225', | ||
120 | "DownArrow;": '\U00002193', | ||
121 | "DownArrowBar;": '\U00002913', | ||
122 | "DownArrowUpArrow;": '\U000021F5', | ||
123 | "DownBreve;": '\U00000311', | ||
124 | "DownLeftRightVector;": '\U00002950', | ||
125 | "DownLeftTeeVector;": '\U0000295E', | ||
126 | "DownLeftVector;": '\U000021BD', | ||
127 | "DownLeftVectorBar;": '\U00002956', | ||
128 | "DownRightTeeVector;": '\U0000295F', | ||
129 | "DownRightVector;": '\U000021C1', | ||
130 | "DownRightVectorBar;": '\U00002957', | ||
131 | "DownTee;": '\U000022A4', | ||
132 | "DownTeeArrow;": '\U000021A7', | ||
133 | "Downarrow;": '\U000021D3', | ||
134 | "Dscr;": '\U0001D49F', | ||
135 | "Dstrok;": '\U00000110', | ||
136 | "ENG;": '\U0000014A', | ||
137 | "ETH;": '\U000000D0', | ||
138 | "Eacute;": '\U000000C9', | ||
139 | "Ecaron;": '\U0000011A', | ||
140 | "Ecirc;": '\U000000CA', | ||
141 | "Ecy;": '\U0000042D', | ||
142 | "Edot;": '\U00000116', | ||
143 | "Efr;": '\U0001D508', | ||
144 | "Egrave;": '\U000000C8', | ||
145 | "Element;": '\U00002208', | ||
146 | "Emacr;": '\U00000112', | ||
147 | "EmptySmallSquare;": '\U000025FB', | ||
148 | "EmptyVerySmallSquare;": '\U000025AB', | ||
149 | "Eogon;": '\U00000118', | ||
150 | "Eopf;": '\U0001D53C', | ||
151 | "Epsilon;": '\U00000395', | ||
152 | "Equal;": '\U00002A75', | ||
153 | "EqualTilde;": '\U00002242', | ||
154 | "Equilibrium;": '\U000021CC', | ||
155 | "Escr;": '\U00002130', | ||
156 | "Esim;": '\U00002A73', | ||
157 | "Eta;": '\U00000397', | ||
158 | "Euml;": '\U000000CB', | ||
159 | "Exists;": '\U00002203', | ||
160 | "ExponentialE;": '\U00002147', | ||
161 | "Fcy;": '\U00000424', | ||
162 | "Ffr;": '\U0001D509', | ||
163 | "FilledSmallSquare;": '\U000025FC', | ||
164 | "FilledVerySmallSquare;": '\U000025AA', | ||
165 | "Fopf;": '\U0001D53D', | ||
166 | "ForAll;": '\U00002200', | ||
167 | "Fouriertrf;": '\U00002131', | ||
168 | "Fscr;": '\U00002131', | ||
169 | "GJcy;": '\U00000403', | ||
170 | "GT;": '\U0000003E', | ||
171 | "Gamma;": '\U00000393', | ||
172 | "Gammad;": '\U000003DC', | ||
173 | "Gbreve;": '\U0000011E', | ||
174 | "Gcedil;": '\U00000122', | ||
175 | "Gcirc;": '\U0000011C', | ||
176 | "Gcy;": '\U00000413', | ||
177 | "Gdot;": '\U00000120', | ||
178 | "Gfr;": '\U0001D50A', | ||
179 | "Gg;": '\U000022D9', | ||
180 | "Gopf;": '\U0001D53E', | ||
181 | "GreaterEqual;": '\U00002265', | ||
182 | "GreaterEqualLess;": '\U000022DB', | ||
183 | "GreaterFullEqual;": '\U00002267', | ||
184 | "GreaterGreater;": '\U00002AA2', | ||
185 | "GreaterLess;": '\U00002277', | ||
186 | "GreaterSlantEqual;": '\U00002A7E', | ||
187 | "GreaterTilde;": '\U00002273', | ||
188 | "Gscr;": '\U0001D4A2', | ||
189 | "Gt;": '\U0000226B', | ||
190 | "HARDcy;": '\U0000042A', | ||
191 | "Hacek;": '\U000002C7', | ||
192 | "Hat;": '\U0000005E', | ||
193 | "Hcirc;": '\U00000124', | ||
194 | "Hfr;": '\U0000210C', | ||
195 | "HilbertSpace;": '\U0000210B', | ||
196 | "Hopf;": '\U0000210D', | ||
197 | "HorizontalLine;": '\U00002500', | ||
198 | "Hscr;": '\U0000210B', | ||
199 | "Hstrok;": '\U00000126', | ||
200 | "HumpDownHump;": '\U0000224E', | ||
201 | "HumpEqual;": '\U0000224F', | ||
202 | "IEcy;": '\U00000415', | ||
203 | "IJlig;": '\U00000132', | ||
204 | "IOcy;": '\U00000401', | ||
205 | "Iacute;": '\U000000CD', | ||
206 | "Icirc;": '\U000000CE', | ||
207 | "Icy;": '\U00000418', | ||
208 | "Idot;": '\U00000130', | ||
209 | "Ifr;": '\U00002111', | ||
210 | "Igrave;": '\U000000CC', | ||
211 | "Im;": '\U00002111', | ||
212 | "Imacr;": '\U0000012A', | ||
213 | "ImaginaryI;": '\U00002148', | ||
214 | "Implies;": '\U000021D2', | ||
215 | "Int;": '\U0000222C', | ||
216 | "Integral;": '\U0000222B', | ||
217 | "Intersection;": '\U000022C2', | ||
218 | "InvisibleComma;": '\U00002063', | ||
219 | "InvisibleTimes;": '\U00002062', | ||
220 | "Iogon;": '\U0000012E', | ||
221 | "Iopf;": '\U0001D540', | ||
222 | "Iota;": '\U00000399', | ||
223 | "Iscr;": '\U00002110', | ||
224 | "Itilde;": '\U00000128', | ||
225 | "Iukcy;": '\U00000406', | ||
226 | "Iuml;": '\U000000CF', | ||
227 | "Jcirc;": '\U00000134', | ||
228 | "Jcy;": '\U00000419', | ||
229 | "Jfr;": '\U0001D50D', | ||
230 | "Jopf;": '\U0001D541', | ||
231 | "Jscr;": '\U0001D4A5', | ||
232 | "Jsercy;": '\U00000408', | ||
233 | "Jukcy;": '\U00000404', | ||
234 | "KHcy;": '\U00000425', | ||
235 | "KJcy;": '\U0000040C', | ||
236 | "Kappa;": '\U0000039A', | ||
237 | "Kcedil;": '\U00000136', | ||
238 | "Kcy;": '\U0000041A', | ||
239 | "Kfr;": '\U0001D50E', | ||
240 | "Kopf;": '\U0001D542', | ||
241 | "Kscr;": '\U0001D4A6', | ||
242 | "LJcy;": '\U00000409', | ||
243 | "LT;": '\U0000003C', | ||
244 | "Lacute;": '\U00000139', | ||
245 | "Lambda;": '\U0000039B', | ||
246 | "Lang;": '\U000027EA', | ||
247 | "Laplacetrf;": '\U00002112', | ||
248 | "Larr;": '\U0000219E', | ||
249 | "Lcaron;": '\U0000013D', | ||
250 | "Lcedil;": '\U0000013B', | ||
251 | "Lcy;": '\U0000041B', | ||
252 | "LeftAngleBracket;": '\U000027E8', | ||
253 | "LeftArrow;": '\U00002190', | ||
254 | "LeftArrowBar;": '\U000021E4', | ||
255 | "LeftArrowRightArrow;": '\U000021C6', | ||
256 | "LeftCeiling;": '\U00002308', | ||
257 | "LeftDoubleBracket;": '\U000027E6', | ||
258 | "LeftDownTeeVector;": '\U00002961', | ||
259 | "LeftDownVector;": '\U000021C3', | ||
260 | "LeftDownVectorBar;": '\U00002959', | ||
261 | "LeftFloor;": '\U0000230A', | ||
262 | "LeftRightArrow;": '\U00002194', | ||
263 | "LeftRightVector;": '\U0000294E', | ||
264 | "LeftTee;": '\U000022A3', | ||
265 | "LeftTeeArrow;": '\U000021A4', | ||
266 | "LeftTeeVector;": '\U0000295A', | ||
267 | "LeftTriangle;": '\U000022B2', | ||
268 | "LeftTriangleBar;": '\U000029CF', | ||
269 | "LeftTriangleEqual;": '\U000022B4', | ||
270 | "LeftUpDownVector;": '\U00002951', | ||
271 | "LeftUpTeeVector;": '\U00002960', | ||
272 | "LeftUpVector;": '\U000021BF', | ||
273 | "LeftUpVectorBar;": '\U00002958', | ||
274 | "LeftVector;": '\U000021BC', | ||
275 | "LeftVectorBar;": '\U00002952', | ||
276 | "Leftarrow;": '\U000021D0', | ||
277 | "Leftrightarrow;": '\U000021D4', | ||
278 | "LessEqualGreater;": '\U000022DA', | ||
279 | "LessFullEqual;": '\U00002266', | ||
280 | "LessGreater;": '\U00002276', | ||
281 | "LessLess;": '\U00002AA1', | ||
282 | "LessSlantEqual;": '\U00002A7D', | ||
283 | "LessTilde;": '\U00002272', | ||
284 | "Lfr;": '\U0001D50F', | ||
285 | "Ll;": '\U000022D8', | ||
286 | "Lleftarrow;": '\U000021DA', | ||
287 | "Lmidot;": '\U0000013F', | ||
288 | "LongLeftArrow;": '\U000027F5', | ||
289 | "LongLeftRightArrow;": '\U000027F7', | ||
290 | "LongRightArrow;": '\U000027F6', | ||
291 | "Longleftarrow;": '\U000027F8', | ||
292 | "Longleftrightarrow;": '\U000027FA', | ||
293 | "Longrightarrow;": '\U000027F9', | ||
294 | "Lopf;": '\U0001D543', | ||
295 | "LowerLeftArrow;": '\U00002199', | ||
296 | "LowerRightArrow;": '\U00002198', | ||
297 | "Lscr;": '\U00002112', | ||
298 | "Lsh;": '\U000021B0', | ||
299 | "Lstrok;": '\U00000141', | ||
300 | "Lt;": '\U0000226A', | ||
301 | "Map;": '\U00002905', | ||
302 | "Mcy;": '\U0000041C', | ||
303 | "MediumSpace;": '\U0000205F', | ||
304 | "Mellintrf;": '\U00002133', | ||
305 | "Mfr;": '\U0001D510', | ||
306 | "MinusPlus;": '\U00002213', | ||
307 | "Mopf;": '\U0001D544', | ||
308 | "Mscr;": '\U00002133', | ||
309 | "Mu;": '\U0000039C', | ||
310 | "NJcy;": '\U0000040A', | ||
311 | "Nacute;": '\U00000143', | ||
312 | "Ncaron;": '\U00000147', | ||
313 | "Ncedil;": '\U00000145', | ||
314 | "Ncy;": '\U0000041D', | ||
315 | "NegativeMediumSpace;": '\U0000200B', | ||
316 | "NegativeThickSpace;": '\U0000200B', | ||
317 | "NegativeThinSpace;": '\U0000200B', | ||
318 | "NegativeVeryThinSpace;": '\U0000200B', | ||
319 | "NestedGreaterGreater;": '\U0000226B', | ||
320 | "NestedLessLess;": '\U0000226A', | ||
321 | "NewLine;": '\U0000000A', | ||
322 | "Nfr;": '\U0001D511', | ||
323 | "NoBreak;": '\U00002060', | ||
324 | "NonBreakingSpace;": '\U000000A0', | ||
325 | "Nopf;": '\U00002115', | ||
326 | "Not;": '\U00002AEC', | ||
327 | "NotCongruent;": '\U00002262', | ||
328 | "NotCupCap;": '\U0000226D', | ||
329 | "NotDoubleVerticalBar;": '\U00002226', | ||
330 | "NotElement;": '\U00002209', | ||
331 | "NotEqual;": '\U00002260', | ||
332 | "NotExists;": '\U00002204', | ||
333 | "NotGreater;": '\U0000226F', | ||
334 | "NotGreaterEqual;": '\U00002271', | ||
335 | "NotGreaterLess;": '\U00002279', | ||
336 | "NotGreaterTilde;": '\U00002275', | ||
337 | "NotLeftTriangle;": '\U000022EA', | ||
338 | "NotLeftTriangleEqual;": '\U000022EC', | ||
339 | "NotLess;": '\U0000226E', | ||
340 | "NotLessEqual;": '\U00002270', | ||
341 | "NotLessGreater;": '\U00002278', | ||
342 | "NotLessTilde;": '\U00002274', | ||
343 | "NotPrecedes;": '\U00002280', | ||
344 | "NotPrecedesSlantEqual;": '\U000022E0', | ||
345 | "NotReverseElement;": '\U0000220C', | ||
346 | "NotRightTriangle;": '\U000022EB', | ||
347 | "NotRightTriangleEqual;": '\U000022ED', | ||
348 | "NotSquareSubsetEqual;": '\U000022E2', | ||
349 | "NotSquareSupersetEqual;": '\U000022E3', | ||
350 | "NotSubsetEqual;": '\U00002288', | ||
351 | "NotSucceeds;": '\U00002281', | ||
352 | "NotSucceedsSlantEqual;": '\U000022E1', | ||
353 | "NotSupersetEqual;": '\U00002289', | ||
354 | "NotTilde;": '\U00002241', | ||
355 | "NotTildeEqual;": '\U00002244', | ||
356 | "NotTildeFullEqual;": '\U00002247', | ||
357 | "NotTildeTilde;": '\U00002249', | ||
358 | "NotVerticalBar;": '\U00002224', | ||
359 | "Nscr;": '\U0001D4A9', | ||
360 | "Ntilde;": '\U000000D1', | ||
361 | "Nu;": '\U0000039D', | ||
362 | "OElig;": '\U00000152', | ||
363 | "Oacute;": '\U000000D3', | ||
364 | "Ocirc;": '\U000000D4', | ||
365 | "Ocy;": '\U0000041E', | ||
366 | "Odblac;": '\U00000150', | ||
367 | "Ofr;": '\U0001D512', | ||
368 | "Ograve;": '\U000000D2', | ||
369 | "Omacr;": '\U0000014C', | ||
370 | "Omega;": '\U000003A9', | ||
371 | "Omicron;": '\U0000039F', | ||
372 | "Oopf;": '\U0001D546', | ||
373 | "OpenCurlyDoubleQuote;": '\U0000201C', | ||
374 | "OpenCurlyQuote;": '\U00002018', | ||
375 | "Or;": '\U00002A54', | ||
376 | "Oscr;": '\U0001D4AA', | ||
377 | "Oslash;": '\U000000D8', | ||
378 | "Otilde;": '\U000000D5', | ||
379 | "Otimes;": '\U00002A37', | ||
380 | "Ouml;": '\U000000D6', | ||
381 | "OverBar;": '\U0000203E', | ||
382 | "OverBrace;": '\U000023DE', | ||
383 | "OverBracket;": '\U000023B4', | ||
384 | "OverParenthesis;": '\U000023DC', | ||
385 | "PartialD;": '\U00002202', | ||
386 | "Pcy;": '\U0000041F', | ||
387 | "Pfr;": '\U0001D513', | ||
388 | "Phi;": '\U000003A6', | ||
389 | "Pi;": '\U000003A0', | ||
390 | "PlusMinus;": '\U000000B1', | ||
391 | "Poincareplane;": '\U0000210C', | ||
392 | "Popf;": '\U00002119', | ||
393 | "Pr;": '\U00002ABB', | ||
394 | "Precedes;": '\U0000227A', | ||
395 | "PrecedesEqual;": '\U00002AAF', | ||
396 | "PrecedesSlantEqual;": '\U0000227C', | ||
397 | "PrecedesTilde;": '\U0000227E', | ||
398 | "Prime;": '\U00002033', | ||
399 | "Product;": '\U0000220F', | ||
400 | "Proportion;": '\U00002237', | ||
401 | "Proportional;": '\U0000221D', | ||
402 | "Pscr;": '\U0001D4AB', | ||
403 | "Psi;": '\U000003A8', | ||
404 | "QUOT;": '\U00000022', | ||
405 | "Qfr;": '\U0001D514', | ||
406 | "Qopf;": '\U0000211A', | ||
407 | "Qscr;": '\U0001D4AC', | ||
408 | "RBarr;": '\U00002910', | ||
409 | "REG;": '\U000000AE', | ||
410 | "Racute;": '\U00000154', | ||
411 | "Rang;": '\U000027EB', | ||
412 | "Rarr;": '\U000021A0', | ||
413 | "Rarrtl;": '\U00002916', | ||
414 | "Rcaron;": '\U00000158', | ||
415 | "Rcedil;": '\U00000156', | ||
416 | "Rcy;": '\U00000420', | ||
417 | "Re;": '\U0000211C', | ||
418 | "ReverseElement;": '\U0000220B', | ||
419 | "ReverseEquilibrium;": '\U000021CB', | ||
420 | "ReverseUpEquilibrium;": '\U0000296F', | ||
421 | "Rfr;": '\U0000211C', | ||
422 | "Rho;": '\U000003A1', | ||
423 | "RightAngleBracket;": '\U000027E9', | ||
424 | "RightArrow;": '\U00002192', | ||
425 | "RightArrowBar;": '\U000021E5', | ||
426 | "RightArrowLeftArrow;": '\U000021C4', | ||
427 | "RightCeiling;": '\U00002309', | ||
428 | "RightDoubleBracket;": '\U000027E7', | ||
429 | "RightDownTeeVector;": '\U0000295D', | ||
430 | "RightDownVector;": '\U000021C2', | ||
431 | "RightDownVectorBar;": '\U00002955', | ||
432 | "RightFloor;": '\U0000230B', | ||
433 | "RightTee;": '\U000022A2', | ||
434 | "RightTeeArrow;": '\U000021A6', | ||
435 | "RightTeeVector;": '\U0000295B', | ||
436 | "RightTriangle;": '\U000022B3', | ||
437 | "RightTriangleBar;": '\U000029D0', | ||
438 | "RightTriangleEqual;": '\U000022B5', | ||
439 | "RightUpDownVector;": '\U0000294F', | ||
440 | "RightUpTeeVector;": '\U0000295C', | ||
441 | "RightUpVector;": '\U000021BE', | ||
442 | "RightUpVectorBar;": '\U00002954', | ||
443 | "RightVector;": '\U000021C0', | ||
444 | "RightVectorBar;": '\U00002953', | ||
445 | "Rightarrow;": '\U000021D2', | ||
446 | "Ropf;": '\U0000211D', | ||
447 | "RoundImplies;": '\U00002970', | ||
448 | "Rrightarrow;": '\U000021DB', | ||
449 | "Rscr;": '\U0000211B', | ||
450 | "Rsh;": '\U000021B1', | ||
451 | "RuleDelayed;": '\U000029F4', | ||
452 | "SHCHcy;": '\U00000429', | ||
453 | "SHcy;": '\U00000428', | ||
454 | "SOFTcy;": '\U0000042C', | ||
455 | "Sacute;": '\U0000015A', | ||
456 | "Sc;": '\U00002ABC', | ||
457 | "Scaron;": '\U00000160', | ||
458 | "Scedil;": '\U0000015E', | ||
459 | "Scirc;": '\U0000015C', | ||
460 | "Scy;": '\U00000421', | ||
461 | "Sfr;": '\U0001D516', | ||
462 | "ShortDownArrow;": '\U00002193', | ||
463 | "ShortLeftArrow;": '\U00002190', | ||
464 | "ShortRightArrow;": '\U00002192', | ||
465 | "ShortUpArrow;": '\U00002191', | ||
466 | "Sigma;": '\U000003A3', | ||
467 | "SmallCircle;": '\U00002218', | ||
468 | "Sopf;": '\U0001D54A', | ||
469 | "Sqrt;": '\U0000221A', | ||
470 | "Square;": '\U000025A1', | ||
471 | "SquareIntersection;": '\U00002293', | ||
472 | "SquareSubset;": '\U0000228F', | ||
473 | "SquareSubsetEqual;": '\U00002291', | ||
474 | "SquareSuperset;": '\U00002290', | ||
475 | "SquareSupersetEqual;": '\U00002292', | ||
476 | "SquareUnion;": '\U00002294', | ||
477 | "Sscr;": '\U0001D4AE', | ||
478 | "Star;": '\U000022C6', | ||
479 | "Sub;": '\U000022D0', | ||
480 | "Subset;": '\U000022D0', | ||
481 | "SubsetEqual;": '\U00002286', | ||
482 | "Succeeds;": '\U0000227B', | ||
483 | "SucceedsEqual;": '\U00002AB0', | ||
484 | "SucceedsSlantEqual;": '\U0000227D', | ||
485 | "SucceedsTilde;": '\U0000227F', | ||
486 | "SuchThat;": '\U0000220B', | ||
487 | "Sum;": '\U00002211', | ||
488 | "Sup;": '\U000022D1', | ||
489 | "Superset;": '\U00002283', | ||
490 | "SupersetEqual;": '\U00002287', | ||
491 | "Supset;": '\U000022D1', | ||
492 | "THORN;": '\U000000DE', | ||
493 | "TRADE;": '\U00002122', | ||
494 | "TSHcy;": '\U0000040B', | ||
495 | "TScy;": '\U00000426', | ||
496 | "Tab;": '\U00000009', | ||
497 | "Tau;": '\U000003A4', | ||
498 | "Tcaron;": '\U00000164', | ||
499 | "Tcedil;": '\U00000162', | ||
500 | "Tcy;": '\U00000422', | ||
501 | "Tfr;": '\U0001D517', | ||
502 | "Therefore;": '\U00002234', | ||
503 | "Theta;": '\U00000398', | ||
504 | "ThinSpace;": '\U00002009', | ||
505 | "Tilde;": '\U0000223C', | ||
506 | "TildeEqual;": '\U00002243', | ||
507 | "TildeFullEqual;": '\U00002245', | ||
508 | "TildeTilde;": '\U00002248', | ||
509 | "Topf;": '\U0001D54B', | ||
510 | "TripleDot;": '\U000020DB', | ||
511 | "Tscr;": '\U0001D4AF', | ||
512 | "Tstrok;": '\U00000166', | ||
513 | "Uacute;": '\U000000DA', | ||
514 | "Uarr;": '\U0000219F', | ||
515 | "Uarrocir;": '\U00002949', | ||
516 | "Ubrcy;": '\U0000040E', | ||
517 | "Ubreve;": '\U0000016C', | ||
518 | "Ucirc;": '\U000000DB', | ||
519 | "Ucy;": '\U00000423', | ||
520 | "Udblac;": '\U00000170', | ||
521 | "Ufr;": '\U0001D518', | ||
522 | "Ugrave;": '\U000000D9', | ||
523 | "Umacr;": '\U0000016A', | ||
524 | "UnderBar;": '\U0000005F', | ||
525 | "UnderBrace;": '\U000023DF', | ||
526 | "UnderBracket;": '\U000023B5', | ||
527 | "UnderParenthesis;": '\U000023DD', | ||
528 | "Union;": '\U000022C3', | ||
529 | "UnionPlus;": '\U0000228E', | ||
530 | "Uogon;": '\U00000172', | ||
531 | "Uopf;": '\U0001D54C', | ||
532 | "UpArrow;": '\U00002191', | ||
533 | "UpArrowBar;": '\U00002912', | ||
534 | "UpArrowDownArrow;": '\U000021C5', | ||
535 | "UpDownArrow;": '\U00002195', | ||
536 | "UpEquilibrium;": '\U0000296E', | ||
537 | "UpTee;": '\U000022A5', | ||
538 | "UpTeeArrow;": '\U000021A5', | ||
539 | "Uparrow;": '\U000021D1', | ||
540 | "Updownarrow;": '\U000021D5', | ||
541 | "UpperLeftArrow;": '\U00002196', | ||
542 | "UpperRightArrow;": '\U00002197', | ||
543 | "Upsi;": '\U000003D2', | ||
544 | "Upsilon;": '\U000003A5', | ||
545 | "Uring;": '\U0000016E', | ||
546 | "Uscr;": '\U0001D4B0', | ||
547 | "Utilde;": '\U00000168', | ||
548 | "Uuml;": '\U000000DC', | ||
549 | "VDash;": '\U000022AB', | ||
550 | "Vbar;": '\U00002AEB', | ||
551 | "Vcy;": '\U00000412', | ||
552 | "Vdash;": '\U000022A9', | ||
553 | "Vdashl;": '\U00002AE6', | ||
554 | "Vee;": '\U000022C1', | ||
555 | "Verbar;": '\U00002016', | ||
556 | "Vert;": '\U00002016', | ||
557 | "VerticalBar;": '\U00002223', | ||
558 | "VerticalLine;": '\U0000007C', | ||
559 | "VerticalSeparator;": '\U00002758', | ||
560 | "VerticalTilde;": '\U00002240', | ||
561 | "VeryThinSpace;": '\U0000200A', | ||
562 | "Vfr;": '\U0001D519', | ||
563 | "Vopf;": '\U0001D54D', | ||
564 | "Vscr;": '\U0001D4B1', | ||
565 | "Vvdash;": '\U000022AA', | ||
566 | "Wcirc;": '\U00000174', | ||
567 | "Wedge;": '\U000022C0', | ||
568 | "Wfr;": '\U0001D51A', | ||
569 | "Wopf;": '\U0001D54E', | ||
570 | "Wscr;": '\U0001D4B2', | ||
571 | "Xfr;": '\U0001D51B', | ||
572 | "Xi;": '\U0000039E', | ||
573 | "Xopf;": '\U0001D54F', | ||
574 | "Xscr;": '\U0001D4B3', | ||
575 | "YAcy;": '\U0000042F', | ||
576 | "YIcy;": '\U00000407', | ||
577 | "YUcy;": '\U0000042E', | ||
578 | "Yacute;": '\U000000DD', | ||
579 | "Ycirc;": '\U00000176', | ||
580 | "Ycy;": '\U0000042B', | ||
581 | "Yfr;": '\U0001D51C', | ||
582 | "Yopf;": '\U0001D550', | ||
583 | "Yscr;": '\U0001D4B4', | ||
584 | "Yuml;": '\U00000178', | ||
585 | "ZHcy;": '\U00000416', | ||
586 | "Zacute;": '\U00000179', | ||
587 | "Zcaron;": '\U0000017D', | ||
588 | "Zcy;": '\U00000417', | ||
589 | "Zdot;": '\U0000017B', | ||
590 | "ZeroWidthSpace;": '\U0000200B', | ||
591 | "Zeta;": '\U00000396', | ||
592 | "Zfr;": '\U00002128', | ||
593 | "Zopf;": '\U00002124', | ||
594 | "Zscr;": '\U0001D4B5', | ||
595 | "aacute;": '\U000000E1', | ||
596 | "abreve;": '\U00000103', | ||
597 | "ac;": '\U0000223E', | ||
598 | "acd;": '\U0000223F', | ||
599 | "acirc;": '\U000000E2', | ||
600 | "acute;": '\U000000B4', | ||
601 | "acy;": '\U00000430', | ||
602 | "aelig;": '\U000000E6', | ||
603 | "af;": '\U00002061', | ||
604 | "afr;": '\U0001D51E', | ||
605 | "agrave;": '\U000000E0', | ||
606 | "alefsym;": '\U00002135', | ||
607 | "aleph;": '\U00002135', | ||
608 | "alpha;": '\U000003B1', | ||
609 | "amacr;": '\U00000101', | ||
610 | "amalg;": '\U00002A3F', | ||
611 | "amp;": '\U00000026', | ||
612 | "and;": '\U00002227', | ||
613 | "andand;": '\U00002A55', | ||
614 | "andd;": '\U00002A5C', | ||
615 | "andslope;": '\U00002A58', | ||
616 | "andv;": '\U00002A5A', | ||
617 | "ang;": '\U00002220', | ||
618 | "ange;": '\U000029A4', | ||
619 | "angle;": '\U00002220', | ||
620 | "angmsd;": '\U00002221', | ||
621 | "angmsdaa;": '\U000029A8', | ||
622 | "angmsdab;": '\U000029A9', | ||
623 | "angmsdac;": '\U000029AA', | ||
624 | "angmsdad;": '\U000029AB', | ||
625 | "angmsdae;": '\U000029AC', | ||
626 | "angmsdaf;": '\U000029AD', | ||
627 | "angmsdag;": '\U000029AE', | ||
628 | "angmsdah;": '\U000029AF', | ||
629 | "angrt;": '\U0000221F', | ||
630 | "angrtvb;": '\U000022BE', | ||
631 | "angrtvbd;": '\U0000299D', | ||
632 | "angsph;": '\U00002222', | ||
633 | "angst;": '\U000000C5', | ||
634 | "angzarr;": '\U0000237C', | ||
635 | "aogon;": '\U00000105', | ||
636 | "aopf;": '\U0001D552', | ||
637 | "ap;": '\U00002248', | ||
638 | "apE;": '\U00002A70', | ||
639 | "apacir;": '\U00002A6F', | ||
640 | "ape;": '\U0000224A', | ||
641 | "apid;": '\U0000224B', | ||
642 | "apos;": '\U00000027', | ||
643 | "approx;": '\U00002248', | ||
644 | "approxeq;": '\U0000224A', | ||
645 | "aring;": '\U000000E5', | ||
646 | "ascr;": '\U0001D4B6', | ||
647 | "ast;": '\U0000002A', | ||
648 | "asymp;": '\U00002248', | ||
649 | "asympeq;": '\U0000224D', | ||
650 | "atilde;": '\U000000E3', | ||
651 | "auml;": '\U000000E4', | ||
652 | "awconint;": '\U00002233', | ||
653 | "awint;": '\U00002A11', | ||
654 | "bNot;": '\U00002AED', | ||
655 | "backcong;": '\U0000224C', | ||
656 | "backepsilon;": '\U000003F6', | ||
657 | "backprime;": '\U00002035', | ||
658 | "backsim;": '\U0000223D', | ||
659 | "backsimeq;": '\U000022CD', | ||
660 | "barvee;": '\U000022BD', | ||
661 | "barwed;": '\U00002305', | ||
662 | "barwedge;": '\U00002305', | ||
663 | "bbrk;": '\U000023B5', | ||
664 | "bbrktbrk;": '\U000023B6', | ||
665 | "bcong;": '\U0000224C', | ||
666 | "bcy;": '\U00000431', | ||
667 | "bdquo;": '\U0000201E', | ||
668 | "becaus;": '\U00002235', | ||
669 | "because;": '\U00002235', | ||
670 | "bemptyv;": '\U000029B0', | ||
671 | "bepsi;": '\U000003F6', | ||
672 | "bernou;": '\U0000212C', | ||
673 | "beta;": '\U000003B2', | ||
674 | "beth;": '\U00002136', | ||
675 | "between;": '\U0000226C', | ||
676 | "bfr;": '\U0001D51F', | ||
677 | "bigcap;": '\U000022C2', | ||
678 | "bigcirc;": '\U000025EF', | ||
679 | "bigcup;": '\U000022C3', | ||
680 | "bigodot;": '\U00002A00', | ||
681 | "bigoplus;": '\U00002A01', | ||
682 | "bigotimes;": '\U00002A02', | ||
683 | "bigsqcup;": '\U00002A06', | ||
684 | "bigstar;": '\U00002605', | ||
685 | "bigtriangledown;": '\U000025BD', | ||
686 | "bigtriangleup;": '\U000025B3', | ||
687 | "biguplus;": '\U00002A04', | ||
688 | "bigvee;": '\U000022C1', | ||
689 | "bigwedge;": '\U000022C0', | ||
690 | "bkarow;": '\U0000290D', | ||
691 | "blacklozenge;": '\U000029EB', | ||
692 | "blacksquare;": '\U000025AA', | ||
693 | "blacktriangle;": '\U000025B4', | ||
694 | "blacktriangledown;": '\U000025BE', | ||
695 | "blacktriangleleft;": '\U000025C2', | ||
696 | "blacktriangleright;": '\U000025B8', | ||
697 | "blank;": '\U00002423', | ||
698 | "blk12;": '\U00002592', | ||
699 | "blk14;": '\U00002591', | ||
700 | "blk34;": '\U00002593', | ||
701 | "block;": '\U00002588', | ||
702 | "bnot;": '\U00002310', | ||
703 | "bopf;": '\U0001D553', | ||
704 | "bot;": '\U000022A5', | ||
705 | "bottom;": '\U000022A5', | ||
706 | "bowtie;": '\U000022C8', | ||
707 | "boxDL;": '\U00002557', | ||
708 | "boxDR;": '\U00002554', | ||
709 | "boxDl;": '\U00002556', | ||
710 | "boxDr;": '\U00002553', | ||
711 | "boxH;": '\U00002550', | ||
712 | "boxHD;": '\U00002566', | ||
713 | "boxHU;": '\U00002569', | ||
714 | "boxHd;": '\U00002564', | ||
715 | "boxHu;": '\U00002567', | ||
716 | "boxUL;": '\U0000255D', | ||
717 | "boxUR;": '\U0000255A', | ||
718 | "boxUl;": '\U0000255C', | ||
719 | "boxUr;": '\U00002559', | ||
720 | "boxV;": '\U00002551', | ||
721 | "boxVH;": '\U0000256C', | ||
722 | "boxVL;": '\U00002563', | ||
723 | "boxVR;": '\U00002560', | ||
724 | "boxVh;": '\U0000256B', | ||
725 | "boxVl;": '\U00002562', | ||
726 | "boxVr;": '\U0000255F', | ||
727 | "boxbox;": '\U000029C9', | ||
728 | "boxdL;": '\U00002555', | ||
729 | "boxdR;": '\U00002552', | ||
730 | "boxdl;": '\U00002510', | ||
731 | "boxdr;": '\U0000250C', | ||
732 | "boxh;": '\U00002500', | ||
733 | "boxhD;": '\U00002565', | ||
734 | "boxhU;": '\U00002568', | ||
735 | "boxhd;": '\U0000252C', | ||
736 | "boxhu;": '\U00002534', | ||
737 | "boxminus;": '\U0000229F', | ||
738 | "boxplus;": '\U0000229E', | ||
739 | "boxtimes;": '\U000022A0', | ||
740 | "boxuL;": '\U0000255B', | ||
741 | "boxuR;": '\U00002558', | ||
742 | "boxul;": '\U00002518', | ||
743 | "boxur;": '\U00002514', | ||
744 | "boxv;": '\U00002502', | ||
745 | "boxvH;": '\U0000256A', | ||
746 | "boxvL;": '\U00002561', | ||
747 | "boxvR;": '\U0000255E', | ||
748 | "boxvh;": '\U0000253C', | ||
749 | "boxvl;": '\U00002524', | ||
750 | "boxvr;": '\U0000251C', | ||
751 | "bprime;": '\U00002035', | ||
752 | "breve;": '\U000002D8', | ||
753 | "brvbar;": '\U000000A6', | ||
754 | "bscr;": '\U0001D4B7', | ||
755 | "bsemi;": '\U0000204F', | ||
756 | "bsim;": '\U0000223D', | ||
757 | "bsime;": '\U000022CD', | ||
758 | "bsol;": '\U0000005C', | ||
759 | "bsolb;": '\U000029C5', | ||
760 | "bsolhsub;": '\U000027C8', | ||
761 | "bull;": '\U00002022', | ||
762 | "bullet;": '\U00002022', | ||
763 | "bump;": '\U0000224E', | ||
764 | "bumpE;": '\U00002AAE', | ||
765 | "bumpe;": '\U0000224F', | ||
766 | "bumpeq;": '\U0000224F', | ||
767 | "cacute;": '\U00000107', | ||
768 | "cap;": '\U00002229', | ||
769 | "capand;": '\U00002A44', | ||
770 | "capbrcup;": '\U00002A49', | ||
771 | "capcap;": '\U00002A4B', | ||
772 | "capcup;": '\U00002A47', | ||
773 | "capdot;": '\U00002A40', | ||
774 | "caret;": '\U00002041', | ||
775 | "caron;": '\U000002C7', | ||
776 | "ccaps;": '\U00002A4D', | ||
777 | "ccaron;": '\U0000010D', | ||
778 | "ccedil;": '\U000000E7', | ||
779 | "ccirc;": '\U00000109', | ||
780 | "ccups;": '\U00002A4C', | ||
781 | "ccupssm;": '\U00002A50', | ||
782 | "cdot;": '\U0000010B', | ||
783 | "cedil;": '\U000000B8', | ||
784 | "cemptyv;": '\U000029B2', | ||
785 | "cent;": '\U000000A2', | ||
786 | "centerdot;": '\U000000B7', | ||
787 | "cfr;": '\U0001D520', | ||
788 | "chcy;": '\U00000447', | ||
789 | "check;": '\U00002713', | ||
790 | "checkmark;": '\U00002713', | ||
791 | "chi;": '\U000003C7', | ||
792 | "cir;": '\U000025CB', | ||
793 | "cirE;": '\U000029C3', | ||
794 | "circ;": '\U000002C6', | ||
795 | "circeq;": '\U00002257', | ||
796 | "circlearrowleft;": '\U000021BA', | ||
797 | "circlearrowright;": '\U000021BB', | ||
798 | "circledR;": '\U000000AE', | ||
799 | "circledS;": '\U000024C8', | ||
800 | "circledast;": '\U0000229B', | ||
801 | "circledcirc;": '\U0000229A', | ||
802 | "circleddash;": '\U0000229D', | ||
803 | "cire;": '\U00002257', | ||
804 | "cirfnint;": '\U00002A10', | ||
805 | "cirmid;": '\U00002AEF', | ||
806 | "cirscir;": '\U000029C2', | ||
807 | "clubs;": '\U00002663', | ||
808 | "clubsuit;": '\U00002663', | ||
809 | "colon;": '\U0000003A', | ||
810 | "colone;": '\U00002254', | ||
811 | "coloneq;": '\U00002254', | ||
812 | "comma;": '\U0000002C', | ||
813 | "commat;": '\U00000040', | ||
814 | "comp;": '\U00002201', | ||
815 | "compfn;": '\U00002218', | ||
816 | "complement;": '\U00002201', | ||
817 | "complexes;": '\U00002102', | ||
818 | "cong;": '\U00002245', | ||
819 | "congdot;": '\U00002A6D', | ||
820 | "conint;": '\U0000222E', | ||
821 | "copf;": '\U0001D554', | ||
822 | "coprod;": '\U00002210', | ||
823 | "copy;": '\U000000A9', | ||
824 | "copysr;": '\U00002117', | ||
825 | "crarr;": '\U000021B5', | ||
826 | "cross;": '\U00002717', | ||
827 | "cscr;": '\U0001D4B8', | ||
828 | "csub;": '\U00002ACF', | ||
829 | "csube;": '\U00002AD1', | ||
830 | "csup;": '\U00002AD0', | ||
831 | "csupe;": '\U00002AD2', | ||
832 | "ctdot;": '\U000022EF', | ||
833 | "cudarrl;": '\U00002938', | ||
834 | "cudarrr;": '\U00002935', | ||
835 | "cuepr;": '\U000022DE', | ||
836 | "cuesc;": '\U000022DF', | ||
837 | "cularr;": '\U000021B6', | ||
838 | "cularrp;": '\U0000293D', | ||
839 | "cup;": '\U0000222A', | ||
840 | "cupbrcap;": '\U00002A48', | ||
841 | "cupcap;": '\U00002A46', | ||
842 | "cupcup;": '\U00002A4A', | ||
843 | "cupdot;": '\U0000228D', | ||
844 | "cupor;": '\U00002A45', | ||
845 | "curarr;": '\U000021B7', | ||
846 | "curarrm;": '\U0000293C', | ||
847 | "curlyeqprec;": '\U000022DE', | ||
848 | "curlyeqsucc;": '\U000022DF', | ||
849 | "curlyvee;": '\U000022CE', | ||
850 | "curlywedge;": '\U000022CF', | ||
851 | "curren;": '\U000000A4', | ||
852 | "curvearrowleft;": '\U000021B6', | ||
853 | "curvearrowright;": '\U000021B7', | ||
854 | "cuvee;": '\U000022CE', | ||
855 | "cuwed;": '\U000022CF', | ||
856 | "cwconint;": '\U00002232', | ||
857 | "cwint;": '\U00002231', | ||
858 | "cylcty;": '\U0000232D', | ||
859 | "dArr;": '\U000021D3', | ||
860 | "dHar;": '\U00002965', | ||
861 | "dagger;": '\U00002020', | ||
862 | "daleth;": '\U00002138', | ||
863 | "darr;": '\U00002193', | ||
864 | "dash;": '\U00002010', | ||
865 | "dashv;": '\U000022A3', | ||
866 | "dbkarow;": '\U0000290F', | ||
867 | "dblac;": '\U000002DD', | ||
868 | "dcaron;": '\U0000010F', | ||
869 | "dcy;": '\U00000434', | ||
870 | "dd;": '\U00002146', | ||
871 | "ddagger;": '\U00002021', | ||
872 | "ddarr;": '\U000021CA', | ||
873 | "ddotseq;": '\U00002A77', | ||
874 | "deg;": '\U000000B0', | ||
875 | "delta;": '\U000003B4', | ||
876 | "demptyv;": '\U000029B1', | ||
877 | "dfisht;": '\U0000297F', | ||
878 | "dfr;": '\U0001D521', | ||
879 | "dharl;": '\U000021C3', | ||
880 | "dharr;": '\U000021C2', | ||
881 | "diam;": '\U000022C4', | ||
882 | "diamond;": '\U000022C4', | ||
883 | "diamondsuit;": '\U00002666', | ||
884 | "diams;": '\U00002666', | ||
885 | "die;": '\U000000A8', | ||
886 | "digamma;": '\U000003DD', | ||
887 | "disin;": '\U000022F2', | ||
888 | "div;": '\U000000F7', | ||
889 | "divide;": '\U000000F7', | ||
890 | "divideontimes;": '\U000022C7', | ||
891 | "divonx;": '\U000022C7', | ||
892 | "djcy;": '\U00000452', | ||
893 | "dlcorn;": '\U0000231E', | ||
894 | "dlcrop;": '\U0000230D', | ||
895 | "dollar;": '\U00000024', | ||
896 | "dopf;": '\U0001D555', | ||
897 | "dot;": '\U000002D9', | ||
898 | "doteq;": '\U00002250', | ||
899 | "doteqdot;": '\U00002251', | ||
900 | "dotminus;": '\U00002238', | ||
901 | "dotplus;": '\U00002214', | ||
902 | "dotsquare;": '\U000022A1', | ||
903 | "doublebarwedge;": '\U00002306', | ||
904 | "downarrow;": '\U00002193', | ||
905 | "downdownarrows;": '\U000021CA', | ||
906 | "downharpoonleft;": '\U000021C3', | ||
907 | "downharpoonright;": '\U000021C2', | ||
908 | "drbkarow;": '\U00002910', | ||
909 | "drcorn;": '\U0000231F', | ||
910 | "drcrop;": '\U0000230C', | ||
911 | "dscr;": '\U0001D4B9', | ||
912 | "dscy;": '\U00000455', | ||
913 | "dsol;": '\U000029F6', | ||
914 | "dstrok;": '\U00000111', | ||
915 | "dtdot;": '\U000022F1', | ||
916 | "dtri;": '\U000025BF', | ||
917 | "dtrif;": '\U000025BE', | ||
918 | "duarr;": '\U000021F5', | ||
919 | "duhar;": '\U0000296F', | ||
920 | "dwangle;": '\U000029A6', | ||
921 | "dzcy;": '\U0000045F', | ||
922 | "dzigrarr;": '\U000027FF', | ||
923 | "eDDot;": '\U00002A77', | ||
924 | "eDot;": '\U00002251', | ||
925 | "eacute;": '\U000000E9', | ||
926 | "easter;": '\U00002A6E', | ||
927 | "ecaron;": '\U0000011B', | ||
928 | "ecir;": '\U00002256', | ||
929 | "ecirc;": '\U000000EA', | ||
930 | "ecolon;": '\U00002255', | ||
931 | "ecy;": '\U0000044D', | ||
932 | "edot;": '\U00000117', | ||
933 | "ee;": '\U00002147', | ||
934 | "efDot;": '\U00002252', | ||
935 | "efr;": '\U0001D522', | ||
936 | "eg;": '\U00002A9A', | ||
937 | "egrave;": '\U000000E8', | ||
938 | "egs;": '\U00002A96', | ||
939 | "egsdot;": '\U00002A98', | ||
940 | "el;": '\U00002A99', | ||
941 | "elinters;": '\U000023E7', | ||
942 | "ell;": '\U00002113', | ||
943 | "els;": '\U00002A95', | ||
944 | "elsdot;": '\U00002A97', | ||
945 | "emacr;": '\U00000113', | ||
946 | "empty;": '\U00002205', | ||
947 | "emptyset;": '\U00002205', | ||
948 | "emptyv;": '\U00002205', | ||
949 | "emsp;": '\U00002003', | ||
950 | "emsp13;": '\U00002004', | ||
951 | "emsp14;": '\U00002005', | ||
952 | "eng;": '\U0000014B', | ||
953 | "ensp;": '\U00002002', | ||
954 | "eogon;": '\U00000119', | ||
955 | "eopf;": '\U0001D556', | ||
956 | "epar;": '\U000022D5', | ||
957 | "eparsl;": '\U000029E3', | ||
958 | "eplus;": '\U00002A71', | ||
959 | "epsi;": '\U000003B5', | ||
960 | "epsilon;": '\U000003B5', | ||
961 | "epsiv;": '\U000003F5', | ||
962 | "eqcirc;": '\U00002256', | ||
963 | "eqcolon;": '\U00002255', | ||
964 | "eqsim;": '\U00002242', | ||
965 | "eqslantgtr;": '\U00002A96', | ||
966 | "eqslantless;": '\U00002A95', | ||
967 | "equals;": '\U0000003D', | ||
968 | "equest;": '\U0000225F', | ||
969 | "equiv;": '\U00002261', | ||
970 | "equivDD;": '\U00002A78', | ||
971 | "eqvparsl;": '\U000029E5', | ||
972 | "erDot;": '\U00002253', | ||
973 | "erarr;": '\U00002971', | ||
974 | "escr;": '\U0000212F', | ||
975 | "esdot;": '\U00002250', | ||
976 | "esim;": '\U00002242', | ||
977 | "eta;": '\U000003B7', | ||
978 | "eth;": '\U000000F0', | ||
979 | "euml;": '\U000000EB', | ||
980 | "euro;": '\U000020AC', | ||
981 | "excl;": '\U00000021', | ||
982 | "exist;": '\U00002203', | ||
983 | "expectation;": '\U00002130', | ||
984 | "exponentiale;": '\U00002147', | ||
985 | "fallingdotseq;": '\U00002252', | ||
986 | "fcy;": '\U00000444', | ||
987 | "female;": '\U00002640', | ||
988 | "ffilig;": '\U0000FB03', | ||
989 | "fflig;": '\U0000FB00', | ||
990 | "ffllig;": '\U0000FB04', | ||
991 | "ffr;": '\U0001D523', | ||
992 | "filig;": '\U0000FB01', | ||
993 | "flat;": '\U0000266D', | ||
994 | "fllig;": '\U0000FB02', | ||
995 | "fltns;": '\U000025B1', | ||
996 | "fnof;": '\U00000192', | ||
997 | "fopf;": '\U0001D557', | ||
998 | "forall;": '\U00002200', | ||
999 | "fork;": '\U000022D4', | ||
1000 | "forkv;": '\U00002AD9', | ||
1001 | "fpartint;": '\U00002A0D', | ||
1002 | "frac12;": '\U000000BD', | ||
1003 | "frac13;": '\U00002153', | ||
1004 | "frac14;": '\U000000BC', | ||
1005 | "frac15;": '\U00002155', | ||
1006 | "frac16;": '\U00002159', | ||
1007 | "frac18;": '\U0000215B', | ||
1008 | "frac23;": '\U00002154', | ||
1009 | "frac25;": '\U00002156', | ||
1010 | "frac34;": '\U000000BE', | ||
1011 | "frac35;": '\U00002157', | ||
1012 | "frac38;": '\U0000215C', | ||
1013 | "frac45;": '\U00002158', | ||
1014 | "frac56;": '\U0000215A', | ||
1015 | "frac58;": '\U0000215D', | ||
1016 | "frac78;": '\U0000215E', | ||
1017 | "frasl;": '\U00002044', | ||
1018 | "frown;": '\U00002322', | ||
1019 | "fscr;": '\U0001D4BB', | ||
1020 | "gE;": '\U00002267', | ||
1021 | "gEl;": '\U00002A8C', | ||
1022 | "gacute;": '\U000001F5', | ||
1023 | "gamma;": '\U000003B3', | ||
1024 | "gammad;": '\U000003DD', | ||
1025 | "gap;": '\U00002A86', | ||
1026 | "gbreve;": '\U0000011F', | ||
1027 | "gcirc;": '\U0000011D', | ||
1028 | "gcy;": '\U00000433', | ||
1029 | "gdot;": '\U00000121', | ||
1030 | "ge;": '\U00002265', | ||
1031 | "gel;": '\U000022DB', | ||
1032 | "geq;": '\U00002265', | ||
1033 | "geqq;": '\U00002267', | ||
1034 | "geqslant;": '\U00002A7E', | ||
1035 | "ges;": '\U00002A7E', | ||
1036 | "gescc;": '\U00002AA9', | ||
1037 | "gesdot;": '\U00002A80', | ||
1038 | "gesdoto;": '\U00002A82', | ||
1039 | "gesdotol;": '\U00002A84', | ||
1040 | "gesles;": '\U00002A94', | ||
1041 | "gfr;": '\U0001D524', | ||
1042 | "gg;": '\U0000226B', | ||
1043 | "ggg;": '\U000022D9', | ||
1044 | "gimel;": '\U00002137', | ||
1045 | "gjcy;": '\U00000453', | ||
1046 | "gl;": '\U00002277', | ||
1047 | "glE;": '\U00002A92', | ||
1048 | "gla;": '\U00002AA5', | ||
1049 | "glj;": '\U00002AA4', | ||
1050 | "gnE;": '\U00002269', | ||
1051 | "gnap;": '\U00002A8A', | ||
1052 | "gnapprox;": '\U00002A8A', | ||
1053 | "gne;": '\U00002A88', | ||
1054 | "gneq;": '\U00002A88', | ||
1055 | "gneqq;": '\U00002269', | ||
1056 | "gnsim;": '\U000022E7', | ||
1057 | "gopf;": '\U0001D558', | ||
1058 | "grave;": '\U00000060', | ||
1059 | "gscr;": '\U0000210A', | ||
1060 | "gsim;": '\U00002273', | ||
1061 | "gsime;": '\U00002A8E', | ||
1062 | "gsiml;": '\U00002A90', | ||
1063 | "gt;": '\U0000003E', | ||
1064 | "gtcc;": '\U00002AA7', | ||
1065 | "gtcir;": '\U00002A7A', | ||
1066 | "gtdot;": '\U000022D7', | ||
1067 | "gtlPar;": '\U00002995', | ||
1068 | "gtquest;": '\U00002A7C', | ||
1069 | "gtrapprox;": '\U00002A86', | ||
1070 | "gtrarr;": '\U00002978', | ||
1071 | "gtrdot;": '\U000022D7', | ||
1072 | "gtreqless;": '\U000022DB', | ||
1073 | "gtreqqless;": '\U00002A8C', | ||
1074 | "gtrless;": '\U00002277', | ||
1075 | "gtrsim;": '\U00002273', | ||
1076 | "hArr;": '\U000021D4', | ||
1077 | "hairsp;": '\U0000200A', | ||
1078 | "half;": '\U000000BD', | ||
1079 | "hamilt;": '\U0000210B', | ||
1080 | "hardcy;": '\U0000044A', | ||
1081 | "harr;": '\U00002194', | ||
1082 | "harrcir;": '\U00002948', | ||
1083 | "harrw;": '\U000021AD', | ||
1084 | "hbar;": '\U0000210F', | ||
1085 | "hcirc;": '\U00000125', | ||
1086 | "hearts;": '\U00002665', | ||
1087 | "heartsuit;": '\U00002665', | ||
1088 | "hellip;": '\U00002026', | ||
1089 | "hercon;": '\U000022B9', | ||
1090 | "hfr;": '\U0001D525', | ||
1091 | "hksearow;": '\U00002925', | ||
1092 | "hkswarow;": '\U00002926', | ||
1093 | "hoarr;": '\U000021FF', | ||
1094 | "homtht;": '\U0000223B', | ||
1095 | "hookleftarrow;": '\U000021A9', | ||
1096 | "hookrightarrow;": '\U000021AA', | ||
1097 | "hopf;": '\U0001D559', | ||
1098 | "horbar;": '\U00002015', | ||
1099 | "hscr;": '\U0001D4BD', | ||
1100 | "hslash;": '\U0000210F', | ||
1101 | "hstrok;": '\U00000127', | ||
1102 | "hybull;": '\U00002043', | ||
1103 | "hyphen;": '\U00002010', | ||
1104 | "iacute;": '\U000000ED', | ||
1105 | "ic;": '\U00002063', | ||
1106 | "icirc;": '\U000000EE', | ||
1107 | "icy;": '\U00000438', | ||
1108 | "iecy;": '\U00000435', | ||
1109 | "iexcl;": '\U000000A1', | ||
1110 | "iff;": '\U000021D4', | ||
1111 | "ifr;": '\U0001D526', | ||
1112 | "igrave;": '\U000000EC', | ||
1113 | "ii;": '\U00002148', | ||
1114 | "iiiint;": '\U00002A0C', | ||
1115 | "iiint;": '\U0000222D', | ||
1116 | "iinfin;": '\U000029DC', | ||
1117 | "iiota;": '\U00002129', | ||
1118 | "ijlig;": '\U00000133', | ||
1119 | "imacr;": '\U0000012B', | ||
1120 | "image;": '\U00002111', | ||
1121 | "imagline;": '\U00002110', | ||
1122 | "imagpart;": '\U00002111', | ||
1123 | "imath;": '\U00000131', | ||
1124 | "imof;": '\U000022B7', | ||
1125 | "imped;": '\U000001B5', | ||
1126 | "in;": '\U00002208', | ||
1127 | "incare;": '\U00002105', | ||
1128 | "infin;": '\U0000221E', | ||
1129 | "infintie;": '\U000029DD', | ||
1130 | "inodot;": '\U00000131', | ||
1131 | "int;": '\U0000222B', | ||
1132 | "intcal;": '\U000022BA', | ||
1133 | "integers;": '\U00002124', | ||
1134 | "intercal;": '\U000022BA', | ||
1135 | "intlarhk;": '\U00002A17', | ||
1136 | "intprod;": '\U00002A3C', | ||
1137 | "iocy;": '\U00000451', | ||
1138 | "iogon;": '\U0000012F', | ||
1139 | "iopf;": '\U0001D55A', | ||
1140 | "iota;": '\U000003B9', | ||
1141 | "iprod;": '\U00002A3C', | ||
1142 | "iquest;": '\U000000BF', | ||
1143 | "iscr;": '\U0001D4BE', | ||
1144 | "isin;": '\U00002208', | ||
1145 | "isinE;": '\U000022F9', | ||
1146 | "isindot;": '\U000022F5', | ||
1147 | "isins;": '\U000022F4', | ||
1148 | "isinsv;": '\U000022F3', | ||
1149 | "isinv;": '\U00002208', | ||
1150 | "it;": '\U00002062', | ||
1151 | "itilde;": '\U00000129', | ||
1152 | "iukcy;": '\U00000456', | ||
1153 | "iuml;": '\U000000EF', | ||
1154 | "jcirc;": '\U00000135', | ||
1155 | "jcy;": '\U00000439', | ||
1156 | "jfr;": '\U0001D527', | ||
1157 | "jmath;": '\U00000237', | ||
1158 | "jopf;": '\U0001D55B', | ||
1159 | "jscr;": '\U0001D4BF', | ||
1160 | "jsercy;": '\U00000458', | ||
1161 | "jukcy;": '\U00000454', | ||
1162 | "kappa;": '\U000003BA', | ||
1163 | "kappav;": '\U000003F0', | ||
1164 | "kcedil;": '\U00000137', | ||
1165 | "kcy;": '\U0000043A', | ||
1166 | "kfr;": '\U0001D528', | ||
1167 | "kgreen;": '\U00000138', | ||
1168 | "khcy;": '\U00000445', | ||
1169 | "kjcy;": '\U0000045C', | ||
1170 | "kopf;": '\U0001D55C', | ||
1171 | "kscr;": '\U0001D4C0', | ||
1172 | "lAarr;": '\U000021DA', | ||
1173 | "lArr;": '\U000021D0', | ||
1174 | "lAtail;": '\U0000291B', | ||
1175 | "lBarr;": '\U0000290E', | ||
1176 | "lE;": '\U00002266', | ||
1177 | "lEg;": '\U00002A8B', | ||
1178 | "lHar;": '\U00002962', | ||
1179 | "lacute;": '\U0000013A', | ||
1180 | "laemptyv;": '\U000029B4', | ||
1181 | "lagran;": '\U00002112', | ||
1182 | "lambda;": '\U000003BB', | ||
1183 | "lang;": '\U000027E8', | ||
1184 | "langd;": '\U00002991', | ||
1185 | "langle;": '\U000027E8', | ||
1186 | "lap;": '\U00002A85', | ||
1187 | "laquo;": '\U000000AB', | ||
1188 | "larr;": '\U00002190', | ||
1189 | "larrb;": '\U000021E4', | ||
1190 | "larrbfs;": '\U0000291F', | ||
1191 | "larrfs;": '\U0000291D', | ||
1192 | "larrhk;": '\U000021A9', | ||
1193 | "larrlp;": '\U000021AB', | ||
1194 | "larrpl;": '\U00002939', | ||
1195 | "larrsim;": '\U00002973', | ||
1196 | "larrtl;": '\U000021A2', | ||
1197 | "lat;": '\U00002AAB', | ||
1198 | "latail;": '\U00002919', | ||
1199 | "late;": '\U00002AAD', | ||
1200 | "lbarr;": '\U0000290C', | ||
1201 | "lbbrk;": '\U00002772', | ||
1202 | "lbrace;": '\U0000007B', | ||
1203 | "lbrack;": '\U0000005B', | ||
1204 | "lbrke;": '\U0000298B', | ||
1205 | "lbrksld;": '\U0000298F', | ||
1206 | "lbrkslu;": '\U0000298D', | ||
1207 | "lcaron;": '\U0000013E', | ||
1208 | "lcedil;": '\U0000013C', | ||
1209 | "lceil;": '\U00002308', | ||
1210 | "lcub;": '\U0000007B', | ||
1211 | "lcy;": '\U0000043B', | ||
1212 | "ldca;": '\U00002936', | ||
1213 | "ldquo;": '\U0000201C', | ||
1214 | "ldquor;": '\U0000201E', | ||
1215 | "ldrdhar;": '\U00002967', | ||
1216 | "ldrushar;": '\U0000294B', | ||
1217 | "ldsh;": '\U000021B2', | ||
1218 | "le;": '\U00002264', | ||
1219 | "leftarrow;": '\U00002190', | ||
1220 | "leftarrowtail;": '\U000021A2', | ||
1221 | "leftharpoondown;": '\U000021BD', | ||
1222 | "leftharpoonup;": '\U000021BC', | ||
1223 | "leftleftarrows;": '\U000021C7', | ||
1224 | "leftrightarrow;": '\U00002194', | ||
1225 | "leftrightarrows;": '\U000021C6', | ||
1226 | "leftrightharpoons;": '\U000021CB', | ||
1227 | "leftrightsquigarrow;": '\U000021AD', | ||
1228 | "leftthreetimes;": '\U000022CB', | ||
1229 | "leg;": '\U000022DA', | ||
1230 | "leq;": '\U00002264', | ||
1231 | "leqq;": '\U00002266', | ||
1232 | "leqslant;": '\U00002A7D', | ||
1233 | "les;": '\U00002A7D', | ||
1234 | "lescc;": '\U00002AA8', | ||
1235 | "lesdot;": '\U00002A7F', | ||
1236 | "lesdoto;": '\U00002A81', | ||
1237 | "lesdotor;": '\U00002A83', | ||
1238 | "lesges;": '\U00002A93', | ||
1239 | "lessapprox;": '\U00002A85', | ||
1240 | "lessdot;": '\U000022D6', | ||
1241 | "lesseqgtr;": '\U000022DA', | ||
1242 | "lesseqqgtr;": '\U00002A8B', | ||
1243 | "lessgtr;": '\U00002276', | ||
1244 | "lesssim;": '\U00002272', | ||
1245 | "lfisht;": '\U0000297C', | ||
1246 | "lfloor;": '\U0000230A', | ||
1247 | "lfr;": '\U0001D529', | ||
1248 | "lg;": '\U00002276', | ||
1249 | "lgE;": '\U00002A91', | ||
1250 | "lhard;": '\U000021BD', | ||
1251 | "lharu;": '\U000021BC', | ||
1252 | "lharul;": '\U0000296A', | ||
1253 | "lhblk;": '\U00002584', | ||
1254 | "ljcy;": '\U00000459', | ||
1255 | "ll;": '\U0000226A', | ||
1256 | "llarr;": '\U000021C7', | ||
1257 | "llcorner;": '\U0000231E', | ||
1258 | "llhard;": '\U0000296B', | ||
1259 | "lltri;": '\U000025FA', | ||
1260 | "lmidot;": '\U00000140', | ||
1261 | "lmoust;": '\U000023B0', | ||
1262 | "lmoustache;": '\U000023B0', | ||
1263 | "lnE;": '\U00002268', | ||
1264 | "lnap;": '\U00002A89', | ||
1265 | "lnapprox;": '\U00002A89', | ||
1266 | "lne;": '\U00002A87', | ||
1267 | "lneq;": '\U00002A87', | ||
1268 | "lneqq;": '\U00002268', | ||
1269 | "lnsim;": '\U000022E6', | ||
1270 | "loang;": '\U000027EC', | ||
1271 | "loarr;": '\U000021FD', | ||
1272 | "lobrk;": '\U000027E6', | ||
1273 | "longleftarrow;": '\U000027F5', | ||
1274 | "longleftrightarrow;": '\U000027F7', | ||
1275 | "longmapsto;": '\U000027FC', | ||
1276 | "longrightarrow;": '\U000027F6', | ||
1277 | "looparrowleft;": '\U000021AB', | ||
1278 | "looparrowright;": '\U000021AC', | ||
1279 | "lopar;": '\U00002985', | ||
1280 | "lopf;": '\U0001D55D', | ||
1281 | "loplus;": '\U00002A2D', | ||
1282 | "lotimes;": '\U00002A34', | ||
1283 | "lowast;": '\U00002217', | ||
1284 | "lowbar;": '\U0000005F', | ||
1285 | "loz;": '\U000025CA', | ||
1286 | "lozenge;": '\U000025CA', | ||
1287 | "lozf;": '\U000029EB', | ||
1288 | "lpar;": '\U00000028', | ||
1289 | "lparlt;": '\U00002993', | ||
1290 | "lrarr;": '\U000021C6', | ||
1291 | "lrcorner;": '\U0000231F', | ||
1292 | "lrhar;": '\U000021CB', | ||
1293 | "lrhard;": '\U0000296D', | ||
1294 | "lrm;": '\U0000200E', | ||
1295 | "lrtri;": '\U000022BF', | ||
1296 | "lsaquo;": '\U00002039', | ||
1297 | "lscr;": '\U0001D4C1', | ||
1298 | "lsh;": '\U000021B0', | ||
1299 | "lsim;": '\U00002272', | ||
1300 | "lsime;": '\U00002A8D', | ||
1301 | "lsimg;": '\U00002A8F', | ||
1302 | "lsqb;": '\U0000005B', | ||
1303 | "lsquo;": '\U00002018', | ||
1304 | "lsquor;": '\U0000201A', | ||
1305 | "lstrok;": '\U00000142', | ||
1306 | "lt;": '\U0000003C', | ||
1307 | "ltcc;": '\U00002AA6', | ||
1308 | "ltcir;": '\U00002A79', | ||
1309 | "ltdot;": '\U000022D6', | ||
1310 | "lthree;": '\U000022CB', | ||
1311 | "ltimes;": '\U000022C9', | ||
1312 | "ltlarr;": '\U00002976', | ||
1313 | "ltquest;": '\U00002A7B', | ||
1314 | "ltrPar;": '\U00002996', | ||
1315 | "ltri;": '\U000025C3', | ||
1316 | "ltrie;": '\U000022B4', | ||
1317 | "ltrif;": '\U000025C2', | ||
1318 | "lurdshar;": '\U0000294A', | ||
1319 | "luruhar;": '\U00002966', | ||
1320 | "mDDot;": '\U0000223A', | ||
1321 | "macr;": '\U000000AF', | ||
1322 | "male;": '\U00002642', | ||
1323 | "malt;": '\U00002720', | ||
1324 | "maltese;": '\U00002720', | ||
1325 | "map;": '\U000021A6', | ||
1326 | "mapsto;": '\U000021A6', | ||
1327 | "mapstodown;": '\U000021A7', | ||
1328 | "mapstoleft;": '\U000021A4', | ||
1329 | "mapstoup;": '\U000021A5', | ||
1330 | "marker;": '\U000025AE', | ||
1331 | "mcomma;": '\U00002A29', | ||
1332 | "mcy;": '\U0000043C', | ||
1333 | "mdash;": '\U00002014', | ||
1334 | "measuredangle;": '\U00002221', | ||
1335 | "mfr;": '\U0001D52A', | ||
1336 | "mho;": '\U00002127', | ||
1337 | "micro;": '\U000000B5', | ||
1338 | "mid;": '\U00002223', | ||
1339 | "midast;": '\U0000002A', | ||
1340 | "midcir;": '\U00002AF0', | ||
1341 | "middot;": '\U000000B7', | ||
1342 | "minus;": '\U00002212', | ||
1343 | "minusb;": '\U0000229F', | ||
1344 | "minusd;": '\U00002238', | ||
1345 | "minusdu;": '\U00002A2A', | ||
1346 | "mlcp;": '\U00002ADB', | ||
1347 | "mldr;": '\U00002026', | ||
1348 | "mnplus;": '\U00002213', | ||
1349 | "models;": '\U000022A7', | ||
1350 | "mopf;": '\U0001D55E', | ||
1351 | "mp;": '\U00002213', | ||
1352 | "mscr;": '\U0001D4C2', | ||
1353 | "mstpos;": '\U0000223E', | ||
1354 | "mu;": '\U000003BC', | ||
1355 | "multimap;": '\U000022B8', | ||
1356 | "mumap;": '\U000022B8', | ||
1357 | "nLeftarrow;": '\U000021CD', | ||
1358 | "nLeftrightarrow;": '\U000021CE', | ||
1359 | "nRightarrow;": '\U000021CF', | ||
1360 | "nVDash;": '\U000022AF', | ||
1361 | "nVdash;": '\U000022AE', | ||
1362 | "nabla;": '\U00002207', | ||
1363 | "nacute;": '\U00000144', | ||
1364 | "nap;": '\U00002249', | ||
1365 | "napos;": '\U00000149', | ||
1366 | "napprox;": '\U00002249', | ||
1367 | "natur;": '\U0000266E', | ||
1368 | "natural;": '\U0000266E', | ||
1369 | "naturals;": '\U00002115', | ||
1370 | "nbsp;": '\U000000A0', | ||
1371 | "ncap;": '\U00002A43', | ||
1372 | "ncaron;": '\U00000148', | ||
1373 | "ncedil;": '\U00000146', | ||
1374 | "ncong;": '\U00002247', | ||
1375 | "ncup;": '\U00002A42', | ||
1376 | "ncy;": '\U0000043D', | ||
1377 | "ndash;": '\U00002013', | ||
1378 | "ne;": '\U00002260', | ||
1379 | "neArr;": '\U000021D7', | ||
1380 | "nearhk;": '\U00002924', | ||
1381 | "nearr;": '\U00002197', | ||
1382 | "nearrow;": '\U00002197', | ||
1383 | "nequiv;": '\U00002262', | ||
1384 | "nesear;": '\U00002928', | ||
1385 | "nexist;": '\U00002204', | ||
1386 | "nexists;": '\U00002204', | ||
1387 | "nfr;": '\U0001D52B', | ||
1388 | "nge;": '\U00002271', | ||
1389 | "ngeq;": '\U00002271', | ||
1390 | "ngsim;": '\U00002275', | ||
1391 | "ngt;": '\U0000226F', | ||
1392 | "ngtr;": '\U0000226F', | ||
1393 | "nhArr;": '\U000021CE', | ||
1394 | "nharr;": '\U000021AE', | ||
1395 | "nhpar;": '\U00002AF2', | ||
1396 | "ni;": '\U0000220B', | ||
1397 | "nis;": '\U000022FC', | ||
1398 | "nisd;": '\U000022FA', | ||
1399 | "niv;": '\U0000220B', | ||
1400 | "njcy;": '\U0000045A', | ||
1401 | "nlArr;": '\U000021CD', | ||
1402 | "nlarr;": '\U0000219A', | ||
1403 | "nldr;": '\U00002025', | ||
1404 | "nle;": '\U00002270', | ||
1405 | "nleftarrow;": '\U0000219A', | ||
1406 | "nleftrightarrow;": '\U000021AE', | ||
1407 | "nleq;": '\U00002270', | ||
1408 | "nless;": '\U0000226E', | ||
1409 | "nlsim;": '\U00002274', | ||
1410 | "nlt;": '\U0000226E', | ||
1411 | "nltri;": '\U000022EA', | ||
1412 | "nltrie;": '\U000022EC', | ||
1413 | "nmid;": '\U00002224', | ||
1414 | "nopf;": '\U0001D55F', | ||
1415 | "not;": '\U000000AC', | ||
1416 | "notin;": '\U00002209', | ||
1417 | "notinva;": '\U00002209', | ||
1418 | "notinvb;": '\U000022F7', | ||
1419 | "notinvc;": '\U000022F6', | ||
1420 | "notni;": '\U0000220C', | ||
1421 | "notniva;": '\U0000220C', | ||
1422 | "notnivb;": '\U000022FE', | ||
1423 | "notnivc;": '\U000022FD', | ||
1424 | "npar;": '\U00002226', | ||
1425 | "nparallel;": '\U00002226', | ||
1426 | "npolint;": '\U00002A14', | ||
1427 | "npr;": '\U00002280', | ||
1428 | "nprcue;": '\U000022E0', | ||
1429 | "nprec;": '\U00002280', | ||
1430 | "nrArr;": '\U000021CF', | ||
1431 | "nrarr;": '\U0000219B', | ||
1432 | "nrightarrow;": '\U0000219B', | ||
1433 | "nrtri;": '\U000022EB', | ||
1434 | "nrtrie;": '\U000022ED', | ||
1435 | "nsc;": '\U00002281', | ||
1436 | "nsccue;": '\U000022E1', | ||
1437 | "nscr;": '\U0001D4C3', | ||
1438 | "nshortmid;": '\U00002224', | ||
1439 | "nshortparallel;": '\U00002226', | ||
1440 | "nsim;": '\U00002241', | ||
1441 | "nsime;": '\U00002244', | ||
1442 | "nsimeq;": '\U00002244', | ||
1443 | "nsmid;": '\U00002224', | ||
1444 | "nspar;": '\U00002226', | ||
1445 | "nsqsube;": '\U000022E2', | ||
1446 | "nsqsupe;": '\U000022E3', | ||
1447 | "nsub;": '\U00002284', | ||
1448 | "nsube;": '\U00002288', | ||
1449 | "nsubseteq;": '\U00002288', | ||
1450 | "nsucc;": '\U00002281', | ||
1451 | "nsup;": '\U00002285', | ||
1452 | "nsupe;": '\U00002289', | ||
1453 | "nsupseteq;": '\U00002289', | ||
1454 | "ntgl;": '\U00002279', | ||
1455 | "ntilde;": '\U000000F1', | ||
1456 | "ntlg;": '\U00002278', | ||
1457 | "ntriangleleft;": '\U000022EA', | ||
1458 | "ntrianglelefteq;": '\U000022EC', | ||
1459 | "ntriangleright;": '\U000022EB', | ||
1460 | "ntrianglerighteq;": '\U000022ED', | ||
1461 | "nu;": '\U000003BD', | ||
1462 | "num;": '\U00000023', | ||
1463 | "numero;": '\U00002116', | ||
1464 | "numsp;": '\U00002007', | ||
1465 | "nvDash;": '\U000022AD', | ||
1466 | "nvHarr;": '\U00002904', | ||
1467 | "nvdash;": '\U000022AC', | ||
1468 | "nvinfin;": '\U000029DE', | ||
1469 | "nvlArr;": '\U00002902', | ||
1470 | "nvrArr;": '\U00002903', | ||
1471 | "nwArr;": '\U000021D6', | ||
1472 | "nwarhk;": '\U00002923', | ||
1473 | "nwarr;": '\U00002196', | ||
1474 | "nwarrow;": '\U00002196', | ||
1475 | "nwnear;": '\U00002927', | ||
1476 | "oS;": '\U000024C8', | ||
1477 | "oacute;": '\U000000F3', | ||
1478 | "oast;": '\U0000229B', | ||
1479 | "ocir;": '\U0000229A', | ||
1480 | "ocirc;": '\U000000F4', | ||
1481 | "ocy;": '\U0000043E', | ||
1482 | "odash;": '\U0000229D', | ||
1483 | "odblac;": '\U00000151', | ||
1484 | "odiv;": '\U00002A38', | ||
1485 | "odot;": '\U00002299', | ||
1486 | "odsold;": '\U000029BC', | ||
1487 | "oelig;": '\U00000153', | ||
1488 | "ofcir;": '\U000029BF', | ||
1489 | "ofr;": '\U0001D52C', | ||
1490 | "ogon;": '\U000002DB', | ||
1491 | "ograve;": '\U000000F2', | ||
1492 | "ogt;": '\U000029C1', | ||
1493 | "ohbar;": '\U000029B5', | ||
1494 | "ohm;": '\U000003A9', | ||
1495 | "oint;": '\U0000222E', | ||
1496 | "olarr;": '\U000021BA', | ||
1497 | "olcir;": '\U000029BE', | ||
1498 | "olcross;": '\U000029BB', | ||
1499 | "oline;": '\U0000203E', | ||
1500 | "olt;": '\U000029C0', | ||
1501 | "omacr;": '\U0000014D', | ||
1502 | "omega;": '\U000003C9', | ||
1503 | "omicron;": '\U000003BF', | ||
1504 | "omid;": '\U000029B6', | ||
1505 | "ominus;": '\U00002296', | ||
1506 | "oopf;": '\U0001D560', | ||
1507 | "opar;": '\U000029B7', | ||
1508 | "operp;": '\U000029B9', | ||
1509 | "oplus;": '\U00002295', | ||
1510 | "or;": '\U00002228', | ||
1511 | "orarr;": '\U000021BB', | ||
1512 | "ord;": '\U00002A5D', | ||
1513 | "order;": '\U00002134', | ||
1514 | "orderof;": '\U00002134', | ||
1515 | "ordf;": '\U000000AA', | ||
1516 | "ordm;": '\U000000BA', | ||
1517 | "origof;": '\U000022B6', | ||
1518 | "oror;": '\U00002A56', | ||
1519 | "orslope;": '\U00002A57', | ||
1520 | "orv;": '\U00002A5B', | ||
1521 | "oscr;": '\U00002134', | ||
1522 | "oslash;": '\U000000F8', | ||
1523 | "osol;": '\U00002298', | ||
1524 | "otilde;": '\U000000F5', | ||
1525 | "otimes;": '\U00002297', | ||
1526 | "otimesas;": '\U00002A36', | ||
1527 | "ouml;": '\U000000F6', | ||
1528 | "ovbar;": '\U0000233D', | ||
1529 | "par;": '\U00002225', | ||
1530 | "para;": '\U000000B6', | ||
1531 | "parallel;": '\U00002225', | ||
1532 | "parsim;": '\U00002AF3', | ||
1533 | "parsl;": '\U00002AFD', | ||
1534 | "part;": '\U00002202', | ||
1535 | "pcy;": '\U0000043F', | ||
1536 | "percnt;": '\U00000025', | ||
1537 | "period;": '\U0000002E', | ||
1538 | "permil;": '\U00002030', | ||
1539 | "perp;": '\U000022A5', | ||
1540 | "pertenk;": '\U00002031', | ||
1541 | "pfr;": '\U0001D52D', | ||
1542 | "phi;": '\U000003C6', | ||
1543 | "phiv;": '\U000003D5', | ||
1544 | "phmmat;": '\U00002133', | ||
1545 | "phone;": '\U0000260E', | ||
1546 | "pi;": '\U000003C0', | ||
1547 | "pitchfork;": '\U000022D4', | ||
1548 | "piv;": '\U000003D6', | ||
1549 | "planck;": '\U0000210F', | ||
1550 | "planckh;": '\U0000210E', | ||
1551 | "plankv;": '\U0000210F', | ||
1552 | "plus;": '\U0000002B', | ||
1553 | "plusacir;": '\U00002A23', | ||
1554 | "plusb;": '\U0000229E', | ||
1555 | "pluscir;": '\U00002A22', | ||
1556 | "plusdo;": '\U00002214', | ||
1557 | "plusdu;": '\U00002A25', | ||
1558 | "pluse;": '\U00002A72', | ||
1559 | "plusmn;": '\U000000B1', | ||
1560 | "plussim;": '\U00002A26', | ||
1561 | "plustwo;": '\U00002A27', | ||
1562 | "pm;": '\U000000B1', | ||
1563 | "pointint;": '\U00002A15', | ||
1564 | "popf;": '\U0001D561', | ||
1565 | "pound;": '\U000000A3', | ||
1566 | "pr;": '\U0000227A', | ||
1567 | "prE;": '\U00002AB3', | ||
1568 | "prap;": '\U00002AB7', | ||
1569 | "prcue;": '\U0000227C', | ||
1570 | "pre;": '\U00002AAF', | ||
1571 | "prec;": '\U0000227A', | ||
1572 | "precapprox;": '\U00002AB7', | ||
1573 | "preccurlyeq;": '\U0000227C', | ||
1574 | "preceq;": '\U00002AAF', | ||
1575 | "precnapprox;": '\U00002AB9', | ||
1576 | "precneqq;": '\U00002AB5', | ||
1577 | "precnsim;": '\U000022E8', | ||
1578 | "precsim;": '\U0000227E', | ||
1579 | "prime;": '\U00002032', | ||
1580 | "primes;": '\U00002119', | ||
1581 | "prnE;": '\U00002AB5', | ||
1582 | "prnap;": '\U00002AB9', | ||
1583 | "prnsim;": '\U000022E8', | ||
1584 | "prod;": '\U0000220F', | ||
1585 | "profalar;": '\U0000232E', | ||
1586 | "profline;": '\U00002312', | ||
1587 | "profsurf;": '\U00002313', | ||
1588 | "prop;": '\U0000221D', | ||
1589 | "propto;": '\U0000221D', | ||
1590 | "prsim;": '\U0000227E', | ||
1591 | "prurel;": '\U000022B0', | ||
1592 | "pscr;": '\U0001D4C5', | ||
1593 | "psi;": '\U000003C8', | ||
1594 | "puncsp;": '\U00002008', | ||
1595 | "qfr;": '\U0001D52E', | ||
1596 | "qint;": '\U00002A0C', | ||
1597 | "qopf;": '\U0001D562', | ||
1598 | "qprime;": '\U00002057', | ||
1599 | "qscr;": '\U0001D4C6', | ||
1600 | "quaternions;": '\U0000210D', | ||
1601 | "quatint;": '\U00002A16', | ||
1602 | "quest;": '\U0000003F', | ||
1603 | "questeq;": '\U0000225F', | ||
1604 | "quot;": '\U00000022', | ||
1605 | "rAarr;": '\U000021DB', | ||
1606 | "rArr;": '\U000021D2', | ||
1607 | "rAtail;": '\U0000291C', | ||
1608 | "rBarr;": '\U0000290F', | ||
1609 | "rHar;": '\U00002964', | ||
1610 | "racute;": '\U00000155', | ||
1611 | "radic;": '\U0000221A', | ||
1612 | "raemptyv;": '\U000029B3', | ||
1613 | "rang;": '\U000027E9', | ||
1614 | "rangd;": '\U00002992', | ||
1615 | "range;": '\U000029A5', | ||
1616 | "rangle;": '\U000027E9', | ||
1617 | "raquo;": '\U000000BB', | ||
1618 | "rarr;": '\U00002192', | ||
1619 | "rarrap;": '\U00002975', | ||
1620 | "rarrb;": '\U000021E5', | ||
1621 | "rarrbfs;": '\U00002920', | ||
1622 | "rarrc;": '\U00002933', | ||
1623 | "rarrfs;": '\U0000291E', | ||
1624 | "rarrhk;": '\U000021AA', | ||
1625 | "rarrlp;": '\U000021AC', | ||
1626 | "rarrpl;": '\U00002945', | ||
1627 | "rarrsim;": '\U00002974', | ||
1628 | "rarrtl;": '\U000021A3', | ||
1629 | "rarrw;": '\U0000219D', | ||
1630 | "ratail;": '\U0000291A', | ||
1631 | "ratio;": '\U00002236', | ||
1632 | "rationals;": '\U0000211A', | ||
1633 | "rbarr;": '\U0000290D', | ||
1634 | "rbbrk;": '\U00002773', | ||
1635 | "rbrace;": '\U0000007D', | ||
1636 | "rbrack;": '\U0000005D', | ||
1637 | "rbrke;": '\U0000298C', | ||
1638 | "rbrksld;": '\U0000298E', | ||
1639 | "rbrkslu;": '\U00002990', | ||
1640 | "rcaron;": '\U00000159', | ||
1641 | "rcedil;": '\U00000157', | ||
1642 | "rceil;": '\U00002309', | ||
1643 | "rcub;": '\U0000007D', | ||
1644 | "rcy;": '\U00000440', | ||
1645 | "rdca;": '\U00002937', | ||
1646 | "rdldhar;": '\U00002969', | ||
1647 | "rdquo;": '\U0000201D', | ||
1648 | "rdquor;": '\U0000201D', | ||
1649 | "rdsh;": '\U000021B3', | ||
1650 | "real;": '\U0000211C', | ||
1651 | "realine;": '\U0000211B', | ||
1652 | "realpart;": '\U0000211C', | ||
1653 | "reals;": '\U0000211D', | ||
1654 | "rect;": '\U000025AD', | ||
1655 | "reg;": '\U000000AE', | ||
1656 | "rfisht;": '\U0000297D', | ||
1657 | "rfloor;": '\U0000230B', | ||
1658 | "rfr;": '\U0001D52F', | ||
1659 | "rhard;": '\U000021C1', | ||
1660 | "rharu;": '\U000021C0', | ||
1661 | "rharul;": '\U0000296C', | ||
1662 | "rho;": '\U000003C1', | ||
1663 | "rhov;": '\U000003F1', | ||
1664 | "rightarrow;": '\U00002192', | ||
1665 | "rightarrowtail;": '\U000021A3', | ||
1666 | "rightharpoondown;": '\U000021C1', | ||
1667 | "rightharpoonup;": '\U000021C0', | ||
1668 | "rightleftarrows;": '\U000021C4', | ||
1669 | "rightleftharpoons;": '\U000021CC', | ||
1670 | "rightrightarrows;": '\U000021C9', | ||
1671 | "rightsquigarrow;": '\U0000219D', | ||
1672 | "rightthreetimes;": '\U000022CC', | ||
1673 | "ring;": '\U000002DA', | ||
1674 | "risingdotseq;": '\U00002253', | ||
1675 | "rlarr;": '\U000021C4', | ||
1676 | "rlhar;": '\U000021CC', | ||
1677 | "rlm;": '\U0000200F', | ||
1678 | "rmoust;": '\U000023B1', | ||
1679 | "rmoustache;": '\U000023B1', | ||
1680 | "rnmid;": '\U00002AEE', | ||
1681 | "roang;": '\U000027ED', | ||
1682 | "roarr;": '\U000021FE', | ||
1683 | "robrk;": '\U000027E7', | ||
1684 | "ropar;": '\U00002986', | ||
1685 | "ropf;": '\U0001D563', | ||
1686 | "roplus;": '\U00002A2E', | ||
1687 | "rotimes;": '\U00002A35', | ||
1688 | "rpar;": '\U00000029', | ||
1689 | "rpargt;": '\U00002994', | ||
1690 | "rppolint;": '\U00002A12', | ||
1691 | "rrarr;": '\U000021C9', | ||
1692 | "rsaquo;": '\U0000203A', | ||
1693 | "rscr;": '\U0001D4C7', | ||
1694 | "rsh;": '\U000021B1', | ||
1695 | "rsqb;": '\U0000005D', | ||
1696 | "rsquo;": '\U00002019', | ||
1697 | "rsquor;": '\U00002019', | ||
1698 | "rthree;": '\U000022CC', | ||
1699 | "rtimes;": '\U000022CA', | ||
1700 | "rtri;": '\U000025B9', | ||
1701 | "rtrie;": '\U000022B5', | ||
1702 | "rtrif;": '\U000025B8', | ||
1703 | "rtriltri;": '\U000029CE', | ||
1704 | "ruluhar;": '\U00002968', | ||
1705 | "rx;": '\U0000211E', | ||
1706 | "sacute;": '\U0000015B', | ||
1707 | "sbquo;": '\U0000201A', | ||
1708 | "sc;": '\U0000227B', | ||
1709 | "scE;": '\U00002AB4', | ||
1710 | "scap;": '\U00002AB8', | ||
1711 | "scaron;": '\U00000161', | ||
1712 | "sccue;": '\U0000227D', | ||
1713 | "sce;": '\U00002AB0', | ||
1714 | "scedil;": '\U0000015F', | ||
1715 | "scirc;": '\U0000015D', | ||
1716 | "scnE;": '\U00002AB6', | ||
1717 | "scnap;": '\U00002ABA', | ||
1718 | "scnsim;": '\U000022E9', | ||
1719 | "scpolint;": '\U00002A13', | ||
1720 | "scsim;": '\U0000227F', | ||
1721 | "scy;": '\U00000441', | ||
1722 | "sdot;": '\U000022C5', | ||
1723 | "sdotb;": '\U000022A1', | ||
1724 | "sdote;": '\U00002A66', | ||
1725 | "seArr;": '\U000021D8', | ||
1726 | "searhk;": '\U00002925', | ||
1727 | "searr;": '\U00002198', | ||
1728 | "searrow;": '\U00002198', | ||
1729 | "sect;": '\U000000A7', | ||
1730 | "semi;": '\U0000003B', | ||
1731 | "seswar;": '\U00002929', | ||
1732 | "setminus;": '\U00002216', | ||
1733 | "setmn;": '\U00002216', | ||
1734 | "sext;": '\U00002736', | ||
1735 | "sfr;": '\U0001D530', | ||
1736 | "sfrown;": '\U00002322', | ||
1737 | "sharp;": '\U0000266F', | ||
1738 | "shchcy;": '\U00000449', | ||
1739 | "shcy;": '\U00000448', | ||
1740 | "shortmid;": '\U00002223', | ||
1741 | "shortparallel;": '\U00002225', | ||
1742 | "shy;": '\U000000AD', | ||
1743 | "sigma;": '\U000003C3', | ||
1744 | "sigmaf;": '\U000003C2', | ||
1745 | "sigmav;": '\U000003C2', | ||
1746 | "sim;": '\U0000223C', | ||
1747 | "simdot;": '\U00002A6A', | ||
1748 | "sime;": '\U00002243', | ||
1749 | "simeq;": '\U00002243', | ||
1750 | "simg;": '\U00002A9E', | ||
1751 | "simgE;": '\U00002AA0', | ||
1752 | "siml;": '\U00002A9D', | ||
1753 | "simlE;": '\U00002A9F', | ||
1754 | "simne;": '\U00002246', | ||
1755 | "simplus;": '\U00002A24', | ||
1756 | "simrarr;": '\U00002972', | ||
1757 | "slarr;": '\U00002190', | ||
1758 | "smallsetminus;": '\U00002216', | ||
1759 | "smashp;": '\U00002A33', | ||
1760 | "smeparsl;": '\U000029E4', | ||
1761 | "smid;": '\U00002223', | ||
1762 | "smile;": '\U00002323', | ||
1763 | "smt;": '\U00002AAA', | ||
1764 | "smte;": '\U00002AAC', | ||
1765 | "softcy;": '\U0000044C', | ||
1766 | "sol;": '\U0000002F', | ||
1767 | "solb;": '\U000029C4', | ||
1768 | "solbar;": '\U0000233F', | ||
1769 | "sopf;": '\U0001D564', | ||
1770 | "spades;": '\U00002660', | ||
1771 | "spadesuit;": '\U00002660', | ||
1772 | "spar;": '\U00002225', | ||
1773 | "sqcap;": '\U00002293', | ||
1774 | "sqcup;": '\U00002294', | ||
1775 | "sqsub;": '\U0000228F', | ||
1776 | "sqsube;": '\U00002291', | ||
1777 | "sqsubset;": '\U0000228F', | ||
1778 | "sqsubseteq;": '\U00002291', | ||
1779 | "sqsup;": '\U00002290', | ||
1780 | "sqsupe;": '\U00002292', | ||
1781 | "sqsupset;": '\U00002290', | ||
1782 | "sqsupseteq;": '\U00002292', | ||
1783 | "squ;": '\U000025A1', | ||
1784 | "square;": '\U000025A1', | ||
1785 | "squarf;": '\U000025AA', | ||
1786 | "squf;": '\U000025AA', | ||
1787 | "srarr;": '\U00002192', | ||
1788 | "sscr;": '\U0001D4C8', | ||
1789 | "ssetmn;": '\U00002216', | ||
1790 | "ssmile;": '\U00002323', | ||
1791 | "sstarf;": '\U000022C6', | ||
1792 | "star;": '\U00002606', | ||
1793 | "starf;": '\U00002605', | ||
1794 | "straightepsilon;": '\U000003F5', | ||
1795 | "straightphi;": '\U000003D5', | ||
1796 | "strns;": '\U000000AF', | ||
1797 | "sub;": '\U00002282', | ||
1798 | "subE;": '\U00002AC5', | ||
1799 | "subdot;": '\U00002ABD', | ||
1800 | "sube;": '\U00002286', | ||
1801 | "subedot;": '\U00002AC3', | ||
1802 | "submult;": '\U00002AC1', | ||
1803 | "subnE;": '\U00002ACB', | ||
1804 | "subne;": '\U0000228A', | ||
1805 | "subplus;": '\U00002ABF', | ||
1806 | "subrarr;": '\U00002979', | ||
1807 | "subset;": '\U00002282', | ||
1808 | "subseteq;": '\U00002286', | ||
1809 | "subseteqq;": '\U00002AC5', | ||
1810 | "subsetneq;": '\U0000228A', | ||
1811 | "subsetneqq;": '\U00002ACB', | ||
1812 | "subsim;": '\U00002AC7', | ||
1813 | "subsub;": '\U00002AD5', | ||
1814 | "subsup;": '\U00002AD3', | ||
1815 | "succ;": '\U0000227B', | ||
1816 | "succapprox;": '\U00002AB8', | ||
1817 | "succcurlyeq;": '\U0000227D', | ||
1818 | "succeq;": '\U00002AB0', | ||
1819 | "succnapprox;": '\U00002ABA', | ||
1820 | "succneqq;": '\U00002AB6', | ||
1821 | "succnsim;": '\U000022E9', | ||
1822 | "succsim;": '\U0000227F', | ||
1823 | "sum;": '\U00002211', | ||
1824 | "sung;": '\U0000266A', | ||
1825 | "sup;": '\U00002283', | ||
1826 | "sup1;": '\U000000B9', | ||
1827 | "sup2;": '\U000000B2', | ||
1828 | "sup3;": '\U000000B3', | ||
1829 | "supE;": '\U00002AC6', | ||
1830 | "supdot;": '\U00002ABE', | ||
1831 | "supdsub;": '\U00002AD8', | ||
1832 | "supe;": '\U00002287', | ||
1833 | "supedot;": '\U00002AC4', | ||
1834 | "suphsol;": '\U000027C9', | ||
1835 | "suphsub;": '\U00002AD7', | ||
1836 | "suplarr;": '\U0000297B', | ||
1837 | "supmult;": '\U00002AC2', | ||
1838 | "supnE;": '\U00002ACC', | ||
1839 | "supne;": '\U0000228B', | ||
1840 | "supplus;": '\U00002AC0', | ||
1841 | "supset;": '\U00002283', | ||
1842 | "supseteq;": '\U00002287', | ||
1843 | "supseteqq;": '\U00002AC6', | ||
1844 | "supsetneq;": '\U0000228B', | ||
1845 | "supsetneqq;": '\U00002ACC', | ||
1846 | "supsim;": '\U00002AC8', | ||
1847 | "supsub;": '\U00002AD4', | ||
1848 | "supsup;": '\U00002AD6', | ||
1849 | "swArr;": '\U000021D9', | ||
1850 | "swarhk;": '\U00002926', | ||
1851 | "swarr;": '\U00002199', | ||
1852 | "swarrow;": '\U00002199', | ||
1853 | "swnwar;": '\U0000292A', | ||
1854 | "szlig;": '\U000000DF', | ||
1855 | "target;": '\U00002316', | ||
1856 | "tau;": '\U000003C4', | ||
1857 | "tbrk;": '\U000023B4', | ||
1858 | "tcaron;": '\U00000165', | ||
1859 | "tcedil;": '\U00000163', | ||
1860 | "tcy;": '\U00000442', | ||
1861 | "tdot;": '\U000020DB', | ||
1862 | "telrec;": '\U00002315', | ||
1863 | "tfr;": '\U0001D531', | ||
1864 | "there4;": '\U00002234', | ||
1865 | "therefore;": '\U00002234', | ||
1866 | "theta;": '\U000003B8', | ||
1867 | "thetasym;": '\U000003D1', | ||
1868 | "thetav;": '\U000003D1', | ||
1869 | "thickapprox;": '\U00002248', | ||
1870 | "thicksim;": '\U0000223C', | ||
1871 | "thinsp;": '\U00002009', | ||
1872 | "thkap;": '\U00002248', | ||
1873 | "thksim;": '\U0000223C', | ||
1874 | "thorn;": '\U000000FE', | ||
1875 | "tilde;": '\U000002DC', | ||
1876 | "times;": '\U000000D7', | ||
1877 | "timesb;": '\U000022A0', | ||
1878 | "timesbar;": '\U00002A31', | ||
1879 | "timesd;": '\U00002A30', | ||
1880 | "tint;": '\U0000222D', | ||
1881 | "toea;": '\U00002928', | ||
1882 | "top;": '\U000022A4', | ||
1883 | "topbot;": '\U00002336', | ||
1884 | "topcir;": '\U00002AF1', | ||
1885 | "topf;": '\U0001D565', | ||
1886 | "topfork;": '\U00002ADA', | ||
1887 | "tosa;": '\U00002929', | ||
1888 | "tprime;": '\U00002034', | ||
1889 | "trade;": '\U00002122', | ||
1890 | "triangle;": '\U000025B5', | ||
1891 | "triangledown;": '\U000025BF', | ||
1892 | "triangleleft;": '\U000025C3', | ||
1893 | "trianglelefteq;": '\U000022B4', | ||
1894 | "triangleq;": '\U0000225C', | ||
1895 | "triangleright;": '\U000025B9', | ||
1896 | "trianglerighteq;": '\U000022B5', | ||
1897 | "tridot;": '\U000025EC', | ||
1898 | "trie;": '\U0000225C', | ||
1899 | "triminus;": '\U00002A3A', | ||
1900 | "triplus;": '\U00002A39', | ||
1901 | "trisb;": '\U000029CD', | ||
1902 | "tritime;": '\U00002A3B', | ||
1903 | "trpezium;": '\U000023E2', | ||
1904 | "tscr;": '\U0001D4C9', | ||
1905 | "tscy;": '\U00000446', | ||
1906 | "tshcy;": '\U0000045B', | ||
1907 | "tstrok;": '\U00000167', | ||
1908 | "twixt;": '\U0000226C', | ||
1909 | "twoheadleftarrow;": '\U0000219E', | ||
1910 | "twoheadrightarrow;": '\U000021A0', | ||
1911 | "uArr;": '\U000021D1', | ||
1912 | "uHar;": '\U00002963', | ||
1913 | "uacute;": '\U000000FA', | ||
1914 | "uarr;": '\U00002191', | ||
1915 | "ubrcy;": '\U0000045E', | ||
1916 | "ubreve;": '\U0000016D', | ||
1917 | "ucirc;": '\U000000FB', | ||
1918 | "ucy;": '\U00000443', | ||
1919 | "udarr;": '\U000021C5', | ||
1920 | "udblac;": '\U00000171', | ||
1921 | "udhar;": '\U0000296E', | ||
1922 | "ufisht;": '\U0000297E', | ||
1923 | "ufr;": '\U0001D532', | ||
1924 | "ugrave;": '\U000000F9', | ||
1925 | "uharl;": '\U000021BF', | ||
1926 | "uharr;": '\U000021BE', | ||
1927 | "uhblk;": '\U00002580', | ||
1928 | "ulcorn;": '\U0000231C', | ||
1929 | "ulcorner;": '\U0000231C', | ||
1930 | "ulcrop;": '\U0000230F', | ||
1931 | "ultri;": '\U000025F8', | ||
1932 | "umacr;": '\U0000016B', | ||
1933 | "uml;": '\U000000A8', | ||
1934 | "uogon;": '\U00000173', | ||
1935 | "uopf;": '\U0001D566', | ||
1936 | "uparrow;": '\U00002191', | ||
1937 | "updownarrow;": '\U00002195', | ||
1938 | "upharpoonleft;": '\U000021BF', | ||
1939 | "upharpoonright;": '\U000021BE', | ||
1940 | "uplus;": '\U0000228E', | ||
1941 | "upsi;": '\U000003C5', | ||
1942 | "upsih;": '\U000003D2', | ||
1943 | "upsilon;": '\U000003C5', | ||
1944 | "upuparrows;": '\U000021C8', | ||
1945 | "urcorn;": '\U0000231D', | ||
1946 | "urcorner;": '\U0000231D', | ||
1947 | "urcrop;": '\U0000230E', | ||
1948 | "uring;": '\U0000016F', | ||
1949 | "urtri;": '\U000025F9', | ||
1950 | "uscr;": '\U0001D4CA', | ||
1951 | "utdot;": '\U000022F0', | ||
1952 | "utilde;": '\U00000169', | ||
1953 | "utri;": '\U000025B5', | ||
1954 | "utrif;": '\U000025B4', | ||
1955 | "uuarr;": '\U000021C8', | ||
1956 | "uuml;": '\U000000FC', | ||
1957 | "uwangle;": '\U000029A7', | ||
1958 | "vArr;": '\U000021D5', | ||
1959 | "vBar;": '\U00002AE8', | ||
1960 | "vBarv;": '\U00002AE9', | ||
1961 | "vDash;": '\U000022A8', | ||
1962 | "vangrt;": '\U0000299C', | ||
1963 | "varepsilon;": '\U000003F5', | ||
1964 | "varkappa;": '\U000003F0', | ||
1965 | "varnothing;": '\U00002205', | ||
1966 | "varphi;": '\U000003D5', | ||
1967 | "varpi;": '\U000003D6', | ||
1968 | "varpropto;": '\U0000221D', | ||
1969 | "varr;": '\U00002195', | ||
1970 | "varrho;": '\U000003F1', | ||
1971 | "varsigma;": '\U000003C2', | ||
1972 | "vartheta;": '\U000003D1', | ||
1973 | "vartriangleleft;": '\U000022B2', | ||
1974 | "vartriangleright;": '\U000022B3', | ||
1975 | "vcy;": '\U00000432', | ||
1976 | "vdash;": '\U000022A2', | ||
1977 | "vee;": '\U00002228', | ||
1978 | "veebar;": '\U000022BB', | ||
1979 | "veeeq;": '\U0000225A', | ||
1980 | "vellip;": '\U000022EE', | ||
1981 | "verbar;": '\U0000007C', | ||
1982 | "vert;": '\U0000007C', | ||
1983 | "vfr;": '\U0001D533', | ||
1984 | "vltri;": '\U000022B2', | ||
1985 | "vopf;": '\U0001D567', | ||
1986 | "vprop;": '\U0000221D', | ||
1987 | "vrtri;": '\U000022B3', | ||
1988 | "vscr;": '\U0001D4CB', | ||
1989 | "vzigzag;": '\U0000299A', | ||
1990 | "wcirc;": '\U00000175', | ||
1991 | "wedbar;": '\U00002A5F', | ||
1992 | "wedge;": '\U00002227', | ||
1993 | "wedgeq;": '\U00002259', | ||
1994 | "weierp;": '\U00002118', | ||
1995 | "wfr;": '\U0001D534', | ||
1996 | "wopf;": '\U0001D568', | ||
1997 | "wp;": '\U00002118', | ||
1998 | "wr;": '\U00002240', | ||
1999 | "wreath;": '\U00002240', | ||
2000 | "wscr;": '\U0001D4CC', | ||
2001 | "xcap;": '\U000022C2', | ||
2002 | "xcirc;": '\U000025EF', | ||
2003 | "xcup;": '\U000022C3', | ||
2004 | "xdtri;": '\U000025BD', | ||
2005 | "xfr;": '\U0001D535', | ||
2006 | "xhArr;": '\U000027FA', | ||
2007 | "xharr;": '\U000027F7', | ||
2008 | "xi;": '\U000003BE', | ||
2009 | "xlArr;": '\U000027F8', | ||
2010 | "xlarr;": '\U000027F5', | ||
2011 | "xmap;": '\U000027FC', | ||
2012 | "xnis;": '\U000022FB', | ||
2013 | "xodot;": '\U00002A00', | ||
2014 | "xopf;": '\U0001D569', | ||
2015 | "xoplus;": '\U00002A01', | ||
2016 | "xotime;": '\U00002A02', | ||
2017 | "xrArr;": '\U000027F9', | ||
2018 | "xrarr;": '\U000027F6', | ||
2019 | "xscr;": '\U0001D4CD', | ||
2020 | "xsqcup;": '\U00002A06', | ||
2021 | "xuplus;": '\U00002A04', | ||
2022 | "xutri;": '\U000025B3', | ||
2023 | "xvee;": '\U000022C1', | ||
2024 | "xwedge;": '\U000022C0', | ||
2025 | "yacute;": '\U000000FD', | ||
2026 | "yacy;": '\U0000044F', | ||
2027 | "ycirc;": '\U00000177', | ||
2028 | "ycy;": '\U0000044B', | ||
2029 | "yen;": '\U000000A5', | ||
2030 | "yfr;": '\U0001D536', | ||
2031 | "yicy;": '\U00000457', | ||
2032 | "yopf;": '\U0001D56A', | ||
2033 | "yscr;": '\U0001D4CE', | ||
2034 | "yucy;": '\U0000044E', | ||
2035 | "yuml;": '\U000000FF', | ||
2036 | "zacute;": '\U0000017A', | ||
2037 | "zcaron;": '\U0000017E', | ||
2038 | "zcy;": '\U00000437', | ||
2039 | "zdot;": '\U0000017C', | ||
2040 | "zeetrf;": '\U00002128', | ||
2041 | "zeta;": '\U000003B6', | ||
2042 | "zfr;": '\U0001D537', | ||
2043 | "zhcy;": '\U00000436', | ||
2044 | "zigrarr;": '\U000021DD', | ||
2045 | "zopf;": '\U0001D56B', | ||
2046 | "zscr;": '\U0001D4CF', | ||
2047 | "zwj;": '\U0000200D', | ||
2048 | "zwnj;": '\U0000200C', | ||
2049 | "AElig": '\U000000C6', | ||
2050 | "AMP": '\U00000026', | ||
2051 | "Aacute": '\U000000C1', | ||
2052 | "Acirc": '\U000000C2', | ||
2053 | "Agrave": '\U000000C0', | ||
2054 | "Aring": '\U000000C5', | ||
2055 | "Atilde": '\U000000C3', | ||
2056 | "Auml": '\U000000C4', | ||
2057 | "COPY": '\U000000A9', | ||
2058 | "Ccedil": '\U000000C7', | ||
2059 | "ETH": '\U000000D0', | ||
2060 | "Eacute": '\U000000C9', | ||
2061 | "Ecirc": '\U000000CA', | ||
2062 | "Egrave": '\U000000C8', | ||
2063 | "Euml": '\U000000CB', | ||
2064 | "GT": '\U0000003E', | ||
2065 | "Iacute": '\U000000CD', | ||
2066 | "Icirc": '\U000000CE', | ||
2067 | "Igrave": '\U000000CC', | ||
2068 | "Iuml": '\U000000CF', | ||
2069 | "LT": '\U0000003C', | ||
2070 | "Ntilde": '\U000000D1', | ||
2071 | "Oacute": '\U000000D3', | ||
2072 | "Ocirc": '\U000000D4', | ||
2073 | "Ograve": '\U000000D2', | ||
2074 | "Oslash": '\U000000D8', | ||
2075 | "Otilde": '\U000000D5', | ||
2076 | "Ouml": '\U000000D6', | ||
2077 | "QUOT": '\U00000022', | ||
2078 | "REG": '\U000000AE', | ||
2079 | "THORN": '\U000000DE', | ||
2080 | "Uacute": '\U000000DA', | ||
2081 | "Ucirc": '\U000000DB', | ||
2082 | "Ugrave": '\U000000D9', | ||
2083 | "Uuml": '\U000000DC', | ||
2084 | "Yacute": '\U000000DD', | ||
2085 | "aacute": '\U000000E1', | ||
2086 | "acirc": '\U000000E2', | ||
2087 | "acute": '\U000000B4', | ||
2088 | "aelig": '\U000000E6', | ||
2089 | "agrave": '\U000000E0', | ||
2090 | "amp": '\U00000026', | ||
2091 | "aring": '\U000000E5', | ||
2092 | "atilde": '\U000000E3', | ||
2093 | "auml": '\U000000E4', | ||
2094 | "brvbar": '\U000000A6', | ||
2095 | "ccedil": '\U000000E7', | ||
2096 | "cedil": '\U000000B8', | ||
2097 | "cent": '\U000000A2', | ||
2098 | "copy": '\U000000A9', | ||
2099 | "curren": '\U000000A4', | ||
2100 | "deg": '\U000000B0', | ||
2101 | "divide": '\U000000F7', | ||
2102 | "eacute": '\U000000E9', | ||
2103 | "ecirc": '\U000000EA', | ||
2104 | "egrave": '\U000000E8', | ||
2105 | "eth": '\U000000F0', | ||
2106 | "euml": '\U000000EB', | ||
2107 | "frac12": '\U000000BD', | ||
2108 | "frac14": '\U000000BC', | ||
2109 | "frac34": '\U000000BE', | ||
2110 | "gt": '\U0000003E', | ||
2111 | "iacute": '\U000000ED', | ||
2112 | "icirc": '\U000000EE', | ||
2113 | "iexcl": '\U000000A1', | ||
2114 | "igrave": '\U000000EC', | ||
2115 | "iquest": '\U000000BF', | ||
2116 | "iuml": '\U000000EF', | ||
2117 | "laquo": '\U000000AB', | ||
2118 | "lt": '\U0000003C', | ||
2119 | "macr": '\U000000AF', | ||
2120 | "micro": '\U000000B5', | ||
2121 | "middot": '\U000000B7', | ||
2122 | "nbsp": '\U000000A0', | ||
2123 | "not": '\U000000AC', | ||
2124 | "ntilde": '\U000000F1', | ||
2125 | "oacute": '\U000000F3', | ||
2126 | "ocirc": '\U000000F4', | ||
2127 | "ograve": '\U000000F2', | ||
2128 | "ordf": '\U000000AA', | ||
2129 | "ordm": '\U000000BA', | ||
2130 | "oslash": '\U000000F8', | ||
2131 | "otilde": '\U000000F5', | ||
2132 | "ouml": '\U000000F6', | ||
2133 | "para": '\U000000B6', | ||
2134 | "plusmn": '\U000000B1', | ||
2135 | "pound": '\U000000A3', | ||
2136 | "quot": '\U00000022', | ||
2137 | "raquo": '\U000000BB', | ||
2138 | "reg": '\U000000AE', | ||
2139 | "sect": '\U000000A7', | ||
2140 | "shy": '\U000000AD', | ||
2141 | "sup1": '\U000000B9', | ||
2142 | "sup2": '\U000000B2', | ||
2143 | "sup3": '\U000000B3', | ||
2144 | "szlig": '\U000000DF', | ||
2145 | "thorn": '\U000000FE', | ||
2146 | "times": '\U000000D7', | ||
2147 | "uacute": '\U000000FA', | ||
2148 | "ucirc": '\U000000FB', | ||
2149 | "ugrave": '\U000000F9', | ||
2150 | "uml": '\U000000A8', | ||
2151 | "uuml": '\U000000FC', | ||
2152 | "yacute": '\U000000FD', | ||
2153 | "yen": '\U000000A5', | ||
2154 | "yuml": '\U000000FF', | ||
2155 | } | ||
2156 | |||
2157 | // HTML entities that are two unicode codepoints. | ||
2158 | var entity2 = map[string][2]rune{ | ||
2159 | // TODO(nigeltao): Handle replacements that are wider than their names. | ||
2160 | // "nLt;": {'\u226A', '\u20D2'}, | ||
2161 | // "nGt;": {'\u226B', '\u20D2'}, | ||
2162 | "NotEqualTilde;": {'\u2242', '\u0338'}, | ||
2163 | "NotGreaterFullEqual;": {'\u2267', '\u0338'}, | ||
2164 | "NotGreaterGreater;": {'\u226B', '\u0338'}, | ||
2165 | "NotGreaterSlantEqual;": {'\u2A7E', '\u0338'}, | ||
2166 | "NotHumpDownHump;": {'\u224E', '\u0338'}, | ||
2167 | "NotHumpEqual;": {'\u224F', '\u0338'}, | ||
2168 | "NotLeftTriangleBar;": {'\u29CF', '\u0338'}, | ||
2169 | "NotLessLess;": {'\u226A', '\u0338'}, | ||
2170 | "NotLessSlantEqual;": {'\u2A7D', '\u0338'}, | ||
2171 | "NotNestedGreaterGreater;": {'\u2AA2', '\u0338'}, | ||
2172 | "NotNestedLessLess;": {'\u2AA1', '\u0338'}, | ||
2173 | "NotPrecedesEqual;": {'\u2AAF', '\u0338'}, | ||
2174 | "NotRightTriangleBar;": {'\u29D0', '\u0338'}, | ||
2175 | "NotSquareSubset;": {'\u228F', '\u0338'}, | ||
2176 | "NotSquareSuperset;": {'\u2290', '\u0338'}, | ||
2177 | "NotSubset;": {'\u2282', '\u20D2'}, | ||
2178 | "NotSucceedsEqual;": {'\u2AB0', '\u0338'}, | ||
2179 | "NotSucceedsTilde;": {'\u227F', '\u0338'}, | ||
2180 | "NotSuperset;": {'\u2283', '\u20D2'}, | ||
2181 | "ThickSpace;": {'\u205F', '\u200A'}, | ||
2182 | "acE;": {'\u223E', '\u0333'}, | ||
2183 | "bne;": {'\u003D', '\u20E5'}, | ||
2184 | "bnequiv;": {'\u2261', '\u20E5'}, | ||
2185 | "caps;": {'\u2229', '\uFE00'}, | ||
2186 | "cups;": {'\u222A', '\uFE00'}, | ||
2187 | "fjlig;": {'\u0066', '\u006A'}, | ||
2188 | "gesl;": {'\u22DB', '\uFE00'}, | ||
2189 | "gvertneqq;": {'\u2269', '\uFE00'}, | ||
2190 | "gvnE;": {'\u2269', '\uFE00'}, | ||
2191 | "lates;": {'\u2AAD', '\uFE00'}, | ||
2192 | "lesg;": {'\u22DA', '\uFE00'}, | ||
2193 | "lvertneqq;": {'\u2268', '\uFE00'}, | ||
2194 | "lvnE;": {'\u2268', '\uFE00'}, | ||
2195 | "nGg;": {'\u22D9', '\u0338'}, | ||
2196 | "nGtv;": {'\u226B', '\u0338'}, | ||
2197 | "nLl;": {'\u22D8', '\u0338'}, | ||
2198 | "nLtv;": {'\u226A', '\u0338'}, | ||
2199 | "nang;": {'\u2220', '\u20D2'}, | ||
2200 | "napE;": {'\u2A70', '\u0338'}, | ||
2201 | "napid;": {'\u224B', '\u0338'}, | ||
2202 | "nbump;": {'\u224E', '\u0338'}, | ||
2203 | "nbumpe;": {'\u224F', '\u0338'}, | ||
2204 | "ncongdot;": {'\u2A6D', '\u0338'}, | ||
2205 | "nedot;": {'\u2250', '\u0338'}, | ||
2206 | "nesim;": {'\u2242', '\u0338'}, | ||
2207 | "ngE;": {'\u2267', '\u0338'}, | ||
2208 | "ngeqq;": {'\u2267', '\u0338'}, | ||
2209 | "ngeqslant;": {'\u2A7E', '\u0338'}, | ||
2210 | "nges;": {'\u2A7E', '\u0338'}, | ||
2211 | "nlE;": {'\u2266', '\u0338'}, | ||
2212 | "nleqq;": {'\u2266', '\u0338'}, | ||
2213 | "nleqslant;": {'\u2A7D', '\u0338'}, | ||
2214 | "nles;": {'\u2A7D', '\u0338'}, | ||
2215 | "notinE;": {'\u22F9', '\u0338'}, | ||
2216 | "notindot;": {'\u22F5', '\u0338'}, | ||
2217 | "nparsl;": {'\u2AFD', '\u20E5'}, | ||
2218 | "npart;": {'\u2202', '\u0338'}, | ||
2219 | "npre;": {'\u2AAF', '\u0338'}, | ||
2220 | "npreceq;": {'\u2AAF', '\u0338'}, | ||
2221 | "nrarrc;": {'\u2933', '\u0338'}, | ||
2222 | "nrarrw;": {'\u219D', '\u0338'}, | ||
2223 | "nsce;": {'\u2AB0', '\u0338'}, | ||
2224 | "nsubE;": {'\u2AC5', '\u0338'}, | ||
2225 | "nsubset;": {'\u2282', '\u20D2'}, | ||
2226 | "nsubseteqq;": {'\u2AC5', '\u0338'}, | ||
2227 | "nsucceq;": {'\u2AB0', '\u0338'}, | ||
2228 | "nsupE;": {'\u2AC6', '\u0338'}, | ||
2229 | "nsupset;": {'\u2283', '\u20D2'}, | ||
2230 | "nsupseteqq;": {'\u2AC6', '\u0338'}, | ||
2231 | "nvap;": {'\u224D', '\u20D2'}, | ||
2232 | "nvge;": {'\u2265', '\u20D2'}, | ||
2233 | "nvgt;": {'\u003E', '\u20D2'}, | ||
2234 | "nvle;": {'\u2264', '\u20D2'}, | ||
2235 | "nvlt;": {'\u003C', '\u20D2'}, | ||
2236 | "nvltrie;": {'\u22B4', '\u20D2'}, | ||
2237 | "nvrtrie;": {'\u22B5', '\u20D2'}, | ||
2238 | "nvsim;": {'\u223C', '\u20D2'}, | ||
2239 | "race;": {'\u223D', '\u0331'}, | ||
2240 | "smtes;": {'\u2AAC', '\uFE00'}, | ||
2241 | "sqcaps;": {'\u2293', '\uFE00'}, | ||
2242 | "sqcups;": {'\u2294', '\uFE00'}, | ||
2243 | "varsubsetneq;": {'\u228A', '\uFE00'}, | ||
2244 | "varsubsetneqq;": {'\u2ACB', '\uFE00'}, | ||
2245 | "varsupsetneq;": {'\u228B', '\uFE00'}, | ||
2246 | "varsupsetneqq;": {'\u2ACC', '\uFE00'}, | ||
2247 | "vnsub;": {'\u2282', '\u20D2'}, | ||
2248 | "vnsup;": {'\u2283', '\u20D2'}, | ||
2249 | "vsubnE;": {'\u2ACB', '\uFE00'}, | ||
2250 | "vsubne;": {'\u228A', '\uFE00'}, | ||
2251 | "vsupnE;": {'\u2ACC', '\uFE00'}, | ||
2252 | "vsupne;": {'\u228B', '\uFE00'}, | ||
2253 | } | ||
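The entity2 table above holds the named references that decode to two Unicode codepoints (a base character followed by a combining mark or variation selector). A minimal usage sketch, assuming the vendored golang.org/x/net/html package is importable from this tree, showing one such entity being decoded through the exported UnescapeString:

```go
package main

import (
	"fmt"

	"golang.org/x/net/html"
)

func main() {
	// "&NotEqualTilde;" appears in entity2 above: it decodes to the pair
	// U+2242 (minus tilde) followed by U+0338 (combining long solidus overlay).
	s := html.UnescapeString("x &NotEqualTilde; y")
	for _, r := range s {
		fmt.Printf("%U ", r)
	}
	fmt.Println()
	// The printed runes include U+2242 U+0338 between the 'x' and the 'y'.
}
```

Keeping these entries as rune pairs (rather than pre-composed strings) lets the decoder emit them with two plain utf8.EncodeRune calls, as unescapeEntity below does.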
diff --git a/vendor/golang.org/x/net/html/escape.go b/vendor/golang.org/x/net/html/escape.go new file mode 100644 index 0000000..d856139 --- /dev/null +++ b/vendor/golang.org/x/net/html/escape.go | |||
@@ -0,0 +1,258 @@ | |||
1 | // Copyright 2010 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package html | ||
6 | |||
7 | import ( | ||
8 | "bytes" | ||
9 | "strings" | ||
10 | "unicode/utf8" | ||
11 | ) | ||
12 | |||
13 | // These replacements permit compatibility with old numeric entities that | ||
14 | // assumed Windows-1252 encoding. | ||
15 | // https://html.spec.whatwg.org/multipage/syntax.html#consume-a-character-reference | ||
16 | var replacementTable = [...]rune{ | ||
17 | '\u20AC', // First entry is what 0x80 should be replaced with. | ||
18 | '\u0081', | ||
19 | '\u201A', | ||
20 | '\u0192', | ||
21 | '\u201E', | ||
22 | '\u2026', | ||
23 | '\u2020', | ||
24 | '\u2021', | ||
25 | '\u02C6', | ||
26 | '\u2030', | ||
27 | '\u0160', | ||
28 | '\u2039', | ||
29 | '\u0152', | ||
30 | '\u008D', | ||
31 | '\u017D', | ||
32 | '\u008F', | ||
33 | '\u0090', | ||
34 | '\u2018', | ||
35 | '\u2019', | ||
36 | '\u201C', | ||
37 | '\u201D', | ||
38 | '\u2022', | ||
39 | '\u2013', | ||
40 | '\u2014', | ||
41 | '\u02DC', | ||
42 | '\u2122', | ||
43 | '\u0161', | ||
44 | '\u203A', | ||
45 | '\u0153', | ||
46 | '\u009D', | ||
47 | '\u017E', | ||
48 | '\u0178', // Last entry is 0x9F. | ||
49 | // 0x00->'\uFFFD' is handled programmatically. | ||
50 | // 0x0D->'\u000D' is a no-op. | ||
51 | } | ||
52 | |||
53 | // unescapeEntity reads an entity like "&lt;" from b[src:] and writes the | ||
54 | // corresponding "<" to b[dst:], returning the incremented dst and src cursors. | ||
55 | // Precondition: b[src] == '&' && dst <= src. | ||
56 | // attribute should be true if parsing an attribute value. | ||
57 | func unescapeEntity(b []byte, dst, src int, attribute bool) (dst1, src1 int) { | ||
58 | // https://html.spec.whatwg.org/multipage/syntax.html#consume-a-character-reference | ||
59 | |||
60 | // i starts at 1 because we already know that s[0] == '&'. | ||
61 | i, s := 1, b[src:] | ||
62 | |||
63 | if len(s) <= 1 { | ||
64 | b[dst] = b[src] | ||
65 | return dst + 1, src + 1 | ||
66 | } | ||
67 | |||
68 | if s[i] == '#' { | ||
69 | if len(s) <= 3 { // We need to have at least "&#.". | ||
70 | b[dst] = b[src] | ||
71 | return dst + 1, src + 1 | ||
72 | } | ||
73 | i++ | ||
74 | c := s[i] | ||
75 | hex := false | ||
76 | if c == 'x' || c == 'X' { | ||
77 | hex = true | ||
78 | i++ | ||
79 | } | ||
80 | |||
81 | x := '\x00' | ||
82 | for i < len(s) { | ||
83 | c = s[i] | ||
84 | i++ | ||
85 | if hex { | ||
86 | if '0' <= c && c <= '9' { | ||
87 | x = 16*x + rune(c) - '0' | ||
88 | continue | ||
89 | } else if 'a' <= c && c <= 'f' { | ||
90 | x = 16*x + rune(c) - 'a' + 10 | ||
91 | continue | ||
92 | } else if 'A' <= c && c <= 'F' { | ||
93 | x = 16*x + rune(c) - 'A' + 10 | ||
94 | continue | ||
95 | } | ||
96 | } else if '0' <= c && c <= '9' { | ||
97 | x = 10*x + rune(c) - '0' | ||
98 | continue | ||
99 | } | ||
100 | if c != ';' { | ||
101 | i-- | ||
102 | } | ||
103 | break | ||
104 | } | ||
105 | |||
106 | if i <= 3 { // No characters matched. | ||
107 | b[dst] = b[src] | ||
108 | return dst + 1, src + 1 | ||
109 | } | ||
110 | |||
111 | if 0x80 <= x && x <= 0x9F { | ||
112 | // Replace characters from Windows-1252 with UTF-8 equivalents. | ||
113 | x = replacementTable[x-0x80] | ||
114 | } else if x == 0 || (0xD800 <= x && x <= 0xDFFF) || x > 0x10FFFF { | ||
115 | // Replace invalid characters with the replacement character. | ||
116 | x = '\uFFFD' | ||
117 | } | ||
118 | |||
119 | return dst + utf8.EncodeRune(b[dst:], x), src + i | ||
120 | } | ||
121 | |||
122 | // Consume the maximum number of characters possible, with the | ||
123 | // consumed characters matching one of the named references. | ||
124 | |||
125 | for i < len(s) { | ||
126 | c := s[i] | ||
127 | i++ | ||
128 | // Lower-cased characters are more common in entities, so we check for them first. | ||
129 | if 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || '0' <= c && c <= '9' { | ||
130 | continue | ||
131 | } | ||
132 | if c != ';' { | ||
133 | i-- | ||
134 | } | ||
135 | break | ||
136 | } | ||
137 | |||
138 | entityName := string(s[1:i]) | ||
139 | if entityName == "" { | ||
140 | // No-op. | ||
141 | } else if attribute && entityName[len(entityName)-1] != ';' && len(s) > i && s[i] == '=' { | ||
142 | // No-op. | ||
143 | } else if x := entity[entityName]; x != 0 { | ||
144 | return dst + utf8.EncodeRune(b[dst:], x), src + i | ||
145 | } else if x := entity2[entityName]; x[0] != 0 { | ||
146 | dst1 := dst + utf8.EncodeRune(b[dst:], x[0]) | ||
147 | return dst1 + utf8.EncodeRune(b[dst1:], x[1]), src + i | ||
148 | } else if !attribute { | ||
149 | maxLen := len(entityName) - 1 | ||
150 | if maxLen > longestEntityWithoutSemicolon { | ||
151 | maxLen = longestEntityWithoutSemicolon | ||
152 | } | ||
153 | for j := maxLen; j > 1; j-- { | ||
154 | if x := entity[entityName[:j]]; x != 0 { | ||
155 | return dst + utf8.EncodeRune(b[dst:], x), src + j + 1 | ||
156 | } | ||
157 | } | ||
158 | } | ||
159 | |||
160 | dst1, src1 = dst+i, src+i | ||
161 | copy(b[dst:dst1], b[src:src1]) | ||
162 | return dst1, src1 | ||
163 | } | ||
164 | |||
165 | // unescape unescapes b's entities in-place, so that "a&lt;b" becomes "a<b". | ||
166 | // attribute should be true if parsing an attribute value. | ||
167 | func unescape(b []byte, attribute bool) []byte { | ||
168 | for i, c := range b { | ||
169 | if c == '&' { | ||
170 | dst, src := unescapeEntity(b, i, i, attribute) | ||
171 | for src < len(b) { | ||
172 | c := b[src] | ||
173 | if c == '&' { | ||
174 | dst, src = unescapeEntity(b, dst, src, attribute) | ||
175 | } else { | ||
176 | b[dst] = c | ||
177 | dst, src = dst+1, src+1 | ||
178 | } | ||
179 | } | ||
180 | return b[0:dst] | ||
181 | } | ||
182 | } | ||
183 | return b | ||
184 | } | ||
185 | |||
186 | // lower lower-cases the A-Z bytes in b in-place, so that "aBc" becomes "abc". | ||
187 | func lower(b []byte) []byte { | ||
188 | for i, c := range b { | ||
189 | if 'A' <= c && c <= 'Z' { | ||
190 | b[i] = c + 'a' - 'A' | ||
191 | } | ||
192 | } | ||
193 | return b | ||
194 | } | ||
195 | |||
196 | const escapedChars = "&'<>\"\r" | ||
197 | |||
198 | func escape(w writer, s string) error { | ||
199 | i := strings.IndexAny(s, escapedChars) | ||
200 | for i != -1 { | ||
201 | if _, err := w.WriteString(s[:i]); err != nil { | ||
202 | return err | ||
203 | } | ||
204 | var esc string | ||
205 | switch s[i] { | ||
206 | case '&': | ||
207 | esc = "&" | ||
208 | case '\'': | ||
209 | // "'" is shorter than "'" and apos was not in HTML until HTML5. | ||
210 | esc = "'" | ||
211 | case '<': | ||
212 | esc = "<" | ||
213 | case '>': | ||
214 | esc = ">" | ||
215 | case '"': | ||
216 | // """ is shorter than """. | ||
217 | esc = """ | ||
218 | case '\r': | ||
219 | esc = " " | ||
220 | default: | ||
221 | panic("unrecognized escape character") | ||
222 | } | ||
223 | s = s[i+1:] | ||
224 | if _, err := w.WriteString(esc); err != nil { | ||
225 | return err | ||
226 | } | ||
227 | i = strings.IndexAny(s, escapedChars) | ||
228 | } | ||
229 | _, err := w.WriteString(s) | ||
230 | return err | ||
231 | } | ||
232 | |||
233 | // EscapeString escapes special characters like "<" to become "&lt;". It | ||
234 | // escapes only five such characters: <, >, &, ' and ". | ||
235 | // UnescapeString(EscapeString(s)) == s always holds, but the converse isn't | ||
236 | // always true. | ||
237 | func EscapeString(s string) string { | ||
238 | if strings.IndexAny(s, escapedChars) == -1 { | ||
239 | return s | ||
240 | } | ||
241 | var buf bytes.Buffer | ||
242 | escape(&buf, s) | ||
243 | return buf.String() | ||
244 | } | ||
245 | |||
246 | // UnescapeString unescapes entities like "&lt;" to become "<". It unescapes a | ||
247 | // larger range of entities than EscapeString escapes. For example, "&aacute;" | ||
248 | // unescapes to "á", as does "&#225;" and "&#xE1;". | ||
249 | // UnescapeString(EscapeString(s)) == s always holds, but the converse isn't | ||
250 | // always true. | ||
251 | func UnescapeString(s string) string { | ||
252 | for _, c := range s { | ||
253 | if c == '&' { | ||
254 | return string(unescape([]byte(s), false)) | ||
255 | } | ||
256 | } | ||
257 | return s | ||
258 | } | ||
diff --git a/vendor/golang.org/x/net/html/foreign.go b/vendor/golang.org/x/net/html/foreign.go new file mode 100644 index 0000000..d3b3844 --- /dev/null +++ b/vendor/golang.org/x/net/html/foreign.go | |||
@@ -0,0 +1,226 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package html | ||
6 | |||
7 | import ( | ||
8 | "strings" | ||
9 | ) | ||
10 | |||
11 | func adjustAttributeNames(aa []Attribute, nameMap map[string]string) { | ||
12 | for i := range aa { | ||
13 | if newName, ok := nameMap[aa[i].Key]; ok { | ||
14 | aa[i].Key = newName | ||
15 | } | ||
16 | } | ||
17 | } | ||
18 | |||
19 | func adjustForeignAttributes(aa []Attribute) { | ||
20 | for i, a := range aa { | ||
21 | if a.Key == "" || a.Key[0] != 'x' { | ||
22 | continue | ||
23 | } | ||
24 | switch a.Key { | ||
25 | case "xlink:actuate", "xlink:arcrole", "xlink:href", "xlink:role", "xlink:show", | ||
26 | "xlink:title", "xlink:type", "xml:base", "xml:lang", "xml:space", "xmlns:xlink": | ||
27 | j := strings.Index(a.Key, ":") | ||
28 | aa[i].Namespace = a.Key[:j] | ||
29 | aa[i].Key = a.Key[j+1:] | ||
30 | } | ||
31 | } | ||
32 | } | ||
33 | |||
34 | func htmlIntegrationPoint(n *Node) bool { | ||
35 | if n.Type != ElementNode { | ||
36 | return false | ||
37 | } | ||
38 | switch n.Namespace { | ||
39 | case "math": | ||
40 | if n.Data == "annotation-xml" { | ||
41 | for _, a := range n.Attr { | ||
42 | if a.Key == "encoding" { | ||
43 | val := strings.ToLower(a.Val) | ||
44 | if val == "text/html" || val == "application/xhtml+xml" { | ||
45 | return true | ||
46 | } | ||
47 | } | ||
48 | } | ||
49 | } | ||
50 | case "svg": | ||
51 | switch n.Data { | ||
52 | case "desc", "foreignObject", "title": | ||
53 | return true | ||
54 | } | ||
55 | } | ||
56 | return false | ||
57 | } | ||
58 | |||
59 | func mathMLTextIntegrationPoint(n *Node) bool { | ||
60 | if n.Namespace != "math" { | ||
61 | return false | ||
62 | } | ||
63 | switch n.Data { | ||
64 | case "mi", "mo", "mn", "ms", "mtext": | ||
65 | return true | ||
66 | } | ||
67 | return false | ||
68 | } | ||
69 | |||
70 | // Section 12.2.5.5. | ||
71 | var breakout = map[string]bool{ | ||
72 | "b": true, | ||
73 | "big": true, | ||
74 | "blockquote": true, | ||
75 | "body": true, | ||
76 | "br": true, | ||
77 | "center": true, | ||
78 | "code": true, | ||
79 | "dd": true, | ||
80 | "div": true, | ||
81 | "dl": true, | ||
82 | "dt": true, | ||
83 | "em": true, | ||
84 | "embed": true, | ||
85 | "h1": true, | ||
86 | "h2": true, | ||
87 | "h3": true, | ||
88 | "h4": true, | ||
89 | "h5": true, | ||
90 | "h6": true, | ||
91 | "head": true, | ||
92 | "hr": true, | ||
93 | "i": true, | ||
94 | "img": true, | ||
95 | "li": true, | ||
96 | "listing": true, | ||
97 | "menu": true, | ||
98 | "meta": true, | ||
99 | "nobr": true, | ||
100 | "ol": true, | ||
101 | "p": true, | ||
102 | "pre": true, | ||
103 | "ruby": true, | ||
104 | "s": true, | ||
105 | "small": true, | ||
106 | "span": true, | ||
107 | "strong": true, | ||
108 | "strike": true, | ||
109 | "sub": true, | ||
110 | "sup": true, | ||
111 | "table": true, | ||
112 | "tt": true, | ||
113 | "u": true, | ||
114 | "ul": true, | ||
115 | "var": true, | ||
116 | } | ||
117 | |||
118 | // Section 12.2.5.5. | ||
119 | var svgTagNameAdjustments = map[string]string{ | ||
120 | "altglyph": "altGlyph", | ||
121 | "altglyphdef": "altGlyphDef", | ||
122 | "altglyphitem": "altGlyphItem", | ||
123 | "animatecolor": "animateColor", | ||
124 | "animatemotion": "animateMotion", | ||
125 | "animatetransform": "animateTransform", | ||
126 | "clippath": "clipPath", | ||
127 | "feblend": "feBlend", | ||
128 | "fecolormatrix": "feColorMatrix", | ||
129 | "fecomponenttransfer": "feComponentTransfer", | ||
130 | "fecomposite": "feComposite", | ||
131 | "feconvolvematrix": "feConvolveMatrix", | ||
132 | "fediffuselighting": "feDiffuseLighting", | ||
133 | "fedisplacementmap": "feDisplacementMap", | ||
134 | "fedistantlight": "feDistantLight", | ||
135 | "feflood": "feFlood", | ||
136 | "fefunca": "feFuncA", | ||
137 | "fefuncb": "feFuncB", | ||
138 | "fefuncg": "feFuncG", | ||
139 | "fefuncr": "feFuncR", | ||
140 | "fegaussianblur": "feGaussianBlur", | ||
141 | "feimage": "feImage", | ||
142 | "femerge": "feMerge", | ||
143 | "femergenode": "feMergeNode", | ||
144 | "femorphology": "feMorphology", | ||
145 | "feoffset": "feOffset", | ||
146 | "fepointlight": "fePointLight", | ||
147 | "fespecularlighting": "feSpecularLighting", | ||
148 | "fespotlight": "feSpotLight", | ||
149 | "fetile": "feTile", | ||
150 | "feturbulence": "feTurbulence", | ||
151 | "foreignobject": "foreignObject", | ||
152 | "glyphref": "glyphRef", | ||
153 | "lineargradient": "linearGradient", | ||
154 | "radialgradient": "radialGradient", | ||
155 | "textpath": "textPath", | ||
156 | } | ||
157 | |||
158 | // Section 12.2.5.1 | ||
159 | var mathMLAttributeAdjustments = map[string]string{ | ||
160 | "definitionurl": "definitionURL", | ||
161 | } | ||
162 | |||
163 | var svgAttributeAdjustments = map[string]string{ | ||
164 | "attributename": "attributeName", | ||
165 | "attributetype": "attributeType", | ||
166 | "basefrequency": "baseFrequency", | ||
167 | "baseprofile": "baseProfile", | ||
168 | "calcmode": "calcMode", | ||
169 | "clippathunits": "clipPathUnits", | ||
170 | "contentscripttype": "contentScriptType", | ||
171 | "contentstyletype": "contentStyleType", | ||
172 | "diffuseconstant": "diffuseConstant", | ||
173 | "edgemode": "edgeMode", | ||
174 | "externalresourcesrequired": "externalResourcesRequired", | ||
175 | "filterres": "filterRes", | ||
176 | "filterunits": "filterUnits", | ||
177 | "glyphref": "glyphRef", | ||
178 | "gradienttransform": "gradientTransform", | ||
179 | "gradientunits": "gradientUnits", | ||
180 | "kernelmatrix": "kernelMatrix", | ||
181 | "kernelunitlength": "kernelUnitLength", | ||
182 | "keypoints": "keyPoints", | ||
183 | "keysplines": "keySplines", | ||
184 | "keytimes": "keyTimes", | ||
185 | "lengthadjust": "lengthAdjust", | ||
186 | "limitingconeangle": "limitingConeAngle", | ||
187 | "markerheight": "markerHeight", | ||
188 | "markerunits": "markerUnits", | ||
189 | "markerwidth": "markerWidth", | ||
190 | "maskcontentunits": "maskContentUnits", | ||
191 | "maskunits": "maskUnits", | ||
192 | "numoctaves": "numOctaves", | ||
193 | "pathlength": "pathLength", | ||
194 | "patterncontentunits": "patternContentUnits", | ||
195 | "patterntransform": "patternTransform", | ||
196 | "patternunits": "patternUnits", | ||
197 | "pointsatx": "pointsAtX", | ||
198 | "pointsaty": "pointsAtY", | ||
199 | "pointsatz": "pointsAtZ", | ||
200 | "preservealpha": "preserveAlpha", | ||
201 | "preserveaspectratio": "preserveAspectRatio", | ||
202 | "primitiveunits": "primitiveUnits", | ||
203 | "refx": "refX", | ||
204 | "refy": "refY", | ||
205 | "repeatcount": "repeatCount", | ||
206 | "repeatdur": "repeatDur", | ||
207 | "requiredextensions": "requiredExtensions", | ||
208 | "requiredfeatures": "requiredFeatures", | ||
209 | "specularconstant": "specularConstant", | ||
210 | "specularexponent": "specularExponent", | ||
211 | "spreadmethod": "spreadMethod", | ||
212 | "startoffset": "startOffset", | ||
213 | "stddeviation": "stdDeviation", | ||
214 | "stitchtiles": "stitchTiles", | ||
215 | "surfacescale": "surfaceScale", | ||
216 | "systemlanguage": "systemLanguage", | ||
217 | "tablevalues": "tableValues", | ||
218 | "targetx": "targetX", | ||
219 | "targety": "targetY", | ||
220 | "textlength": "textLength", | ||
221 | "viewbox": "viewBox", | ||
222 | "viewtarget": "viewTarget", | ||
223 | "xchannelselector": "xChannelSelector", | ||
224 | "ychannelselector": "yChannelSelector", | ||
225 | "zoomandpan": "zoomAndPan", | ||
226 | } | ||
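The adjustment tables in foreign.go are unexported, so they cannot be called from outside the package; the following standalone sketch only illustrates the idea behind svgTagNameAdjustments: the tokenizer lower-cases tag names, and the parser restores the canonical mixed-case SVG spelling before creating the element node. The names svgAdjust and adjustSVGTagName are hypothetical, not part of the package:

```go
package main

import "fmt"

// svgAdjust is a trimmed-down, illustrative copy of the mapping above.
var svgAdjust = map[string]string{
	"clippath":       "clipPath",
	"foreignobject":  "foreignObject",
	"lineargradient": "linearGradient",
}

// adjustSVGTagName returns the canonical SVG spelling for a lower-cased tag
// name, or the input unchanged when no adjustment is needed.
func adjustSVGTagName(lowered string) string {
	if fixed, ok := svgAdjust[lowered]; ok {
		return fixed
	}
	return lowered
}

func main() {
	fmt.Println(adjustSVGTagName("clippath")) // clipPath
	fmt.Println(adjustSVGTagName("circle"))   // circle (no adjustment needed)
}
```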
diff --git a/vendor/golang.org/x/net/html/node.go b/vendor/golang.org/x/net/html/node.go new file mode 100644 index 0000000..26b657a --- /dev/null +++ b/vendor/golang.org/x/net/html/node.go | |||
@@ -0,0 +1,193 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package html | ||
6 | |||
7 | import ( | ||
8 | "golang.org/x/net/html/atom" | ||
9 | ) | ||
10 | |||
11 | // A NodeType is the type of a Node. | ||
12 | type NodeType uint32 | ||
13 | |||
14 | const ( | ||
15 | ErrorNode NodeType = iota | ||
16 | TextNode | ||
17 | DocumentNode | ||
18 | ElementNode | ||
19 | CommentNode | ||
20 | DoctypeNode | ||
21 | scopeMarkerNode | ||
22 | ) | ||
23 | |||
24 | // Section 12.2.3.3 says "scope markers are inserted when entering applet | ||
25 | // elements, buttons, object elements, marquees, table cells, and table | ||
26 | // captions, and are used to prevent formatting from 'leaking'". | ||
27 | var scopeMarker = Node{Type: scopeMarkerNode} | ||
28 | |||
29 | // A Node consists of a NodeType and some Data (tag name for element nodes, | ||
30 | // content for text) and are part of a tree of Nodes. Element nodes may also | ||
31 | // have a Namespace and contain a slice of Attributes. Data is unescaped, so | ||
32 | // that it looks like "a<b" rather than "a&lt;b". For element nodes, DataAtom | ||
33 | // is the atom for Data, or zero if Data is not a known tag name. | ||
34 | // | ||
35 | // An empty Namespace implies a "http://www.w3.org/1999/xhtml" namespace. | ||
36 | // Similarly, "math" is short for "http://www.w3.org/1998/Math/MathML", and | ||
37 | // "svg" is short for "http://www.w3.org/2000/svg". | ||
38 | type Node struct { | ||
39 | Parent, FirstChild, LastChild, PrevSibling, NextSibling *Node | ||
40 | |||
41 | Type NodeType | ||
42 | DataAtom atom.Atom | ||
43 | Data string | ||
44 | Namespace string | ||
45 | Attr []Attribute | ||
46 | } | ||
47 | |||
48 | // InsertBefore inserts newChild as a child of n, immediately before oldChild | ||
49 | // in the sequence of n's children. oldChild may be nil, in which case newChild | ||
50 | // is appended to the end of n's children. | ||
51 | // | ||
52 | // It will panic if newChild already has a parent or siblings. | ||
53 | func (n *Node) InsertBefore(newChild, oldChild *Node) { | ||
54 | if newChild.Parent != nil || newChild.PrevSibling != nil || newChild.NextSibling != nil { | ||
55 | panic("html: InsertBefore called for an attached child Node") | ||
56 | } | ||
57 | var prev, next *Node | ||
58 | if oldChild != nil { | ||
59 | prev, next = oldChild.PrevSibling, oldChild | ||
60 | } else { | ||
61 | prev = n.LastChild | ||
62 | } | ||
63 | if prev != nil { | ||
64 | prev.NextSibling = newChild | ||
65 | } else { | ||
66 | n.FirstChild = newChild | ||
67 | } | ||
68 | if next != nil { | ||
69 | next.PrevSibling = newChild | ||
70 | } else { | ||
71 | n.LastChild = newChild | ||
72 | } | ||
73 | newChild.Parent = n | ||
74 | newChild.PrevSibling = prev | ||
75 | newChild.NextSibling = next | ||
76 | } | ||
77 | |||
78 | // AppendChild adds a node c as a child of n. | ||
79 | // | ||
80 | // It will panic if c already has a parent or siblings. | ||
81 | func (n *Node) AppendChild(c *Node) { | ||
82 | if c.Parent != nil || c.PrevSibling != nil || c.NextSibling != nil { | ||
83 | panic("html: AppendChild called for an attached child Node") | ||
84 | } | ||
85 | last := n.LastChild | ||
86 | if last != nil { | ||
87 | last.NextSibling = c | ||
88 | } else { | ||
89 | n.FirstChild = c | ||
90 | } | ||
91 | n.LastChild = c | ||
92 | c.Parent = n | ||
93 | c.PrevSibling = last | ||
94 | } | ||
95 | |||
96 | // RemoveChild removes a node c that is a child of n. Afterwards, c will have | ||
97 | // no parent and no siblings. | ||
98 | // | ||
99 | // It will panic if c's parent is not n. | ||
100 | func (n *Node) RemoveChild(c *Node) { | ||
101 | if c.Parent != n { | ||
102 | panic("html: RemoveChild called for a non-child Node") | ||
103 | } | ||
104 | if n.FirstChild == c { | ||
105 | n.FirstChild = c.NextSibling | ||
106 | } | ||
107 | if c.NextSibling != nil { | ||
108 | c.NextSibling.PrevSibling = c.PrevSibling | ||
109 | } | ||
110 | if n.LastChild == c { | ||
111 | n.LastChild = c.PrevSibling | ||
112 | } | ||
113 | if c.PrevSibling != nil { | ||
114 | c.PrevSibling.NextSibling = c.NextSibling | ||
115 | } | ||
116 | c.Parent = nil | ||
117 | c.PrevSibling = nil | ||
118 | c.NextSibling = nil | ||
119 | } | ||
120 | |||
121 | // reparentChildren reparents all of src's child nodes to dst. | ||
122 | func reparentChildren(dst, src *Node) { | ||
123 | for { | ||
124 | child := src.FirstChild | ||
125 | if child == nil { | ||
126 | break | ||
127 | } | ||
128 | src.RemoveChild(child) | ||
129 | dst.AppendChild(child) | ||
130 | } | ||
131 | } | ||
132 | |||
133 | // clone returns a new node with the same type, data and attributes. | ||
134 | // The clone has no parent, no siblings and no children. | ||
135 | func (n *Node) clone() *Node { | ||
136 | m := &Node{ | ||
137 | Type: n.Type, | ||
138 | DataAtom: n.DataAtom, | ||
139 | Data: n.Data, | ||
140 | Attr: make([]Attribute, len(n.Attr)), | ||
141 | } | ||
142 | copy(m.Attr, n.Attr) | ||
143 | return m | ||
144 | } | ||
145 | |||
146 | // nodeStack is a stack of nodes. | ||
147 | type nodeStack []*Node | ||
148 | |||
149 | // pop pops the stack. It will panic if s is empty. | ||
150 | func (s *nodeStack) pop() *Node { | ||
151 | i := len(*s) | ||
152 | n := (*s)[i-1] | ||
153 | *s = (*s)[:i-1] | ||
154 | return n | ||
155 | } | ||
156 | |||
157 | // top returns the most recently pushed node, or nil if s is empty. | ||
158 | func (s *nodeStack) top() *Node { | ||
159 | if i := len(*s); i > 0 { | ||
160 | return (*s)[i-1] | ||
161 | } | ||
162 | return nil | ||
163 | } | ||
164 | |||
165 | // index returns the index of the top-most occurrence of n in the stack, or -1 | ||
166 | // if n is not present. | ||
167 | func (s *nodeStack) index(n *Node) int { | ||
168 | for i := len(*s) - 1; i >= 0; i-- { | ||
169 | if (*s)[i] == n { | ||
170 | return i | ||
171 | } | ||
172 | } | ||
173 | return -1 | ||
174 | } | ||
175 | |||
176 | // insert inserts a node at the given index. | ||
177 | func (s *nodeStack) insert(i int, n *Node) { | ||
178 | (*s) = append(*s, nil) | ||
179 | copy((*s)[i+1:], (*s)[i:]) | ||
180 | (*s)[i] = n | ||
181 | } | ||
182 | |||
183 | // remove removes a node from the stack. It is a no-op if n is not present. | ||
184 | func (s *nodeStack) remove(n *Node) { | ||
185 | i := s.index(n) | ||
186 | if i == -1 { | ||
187 | return | ||
188 | } | ||
189 | copy((*s)[i:], (*s)[i+1:]) | ||
190 | j := len(*s) - 1 | ||
191 | (*s)[j] = nil | ||
192 | *s = (*s)[:j] | ||
193 | } | ||
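A minimal sketch of the exported Node tree API from node.go above (AppendChild, RemoveChild, and the sibling/child pointers), again assuming the vendored golang.org/x/net/html package is importable:

```go
package main

import (
	"fmt"

	"golang.org/x/net/html"
)

func main() {
	// Build a tiny <p>hello</p> fragment by hand using the Node API above.
	p := &html.Node{Type: html.ElementNode, Data: "p"}
	text := &html.Node{Type: html.TextNode, Data: "hello"}
	p.AppendChild(text)

	// Walk the first level of children; Data is stored unescaped.
	for c := p.FirstChild; c != nil; c = c.NextSibling {
		fmt.Println(c.Type == html.TextNode, c.Data) // true hello
	}

	// Detach the text node again; afterwards it has no parent and no siblings.
	p.RemoveChild(text)
	fmt.Println(p.FirstChild == nil) // true
}
```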
diff --git a/vendor/golang.org/x/net/html/parse.go b/vendor/golang.org/x/net/html/parse.go new file mode 100644 index 0000000..be4b2bf --- /dev/null +++ b/vendor/golang.org/x/net/html/parse.go | |||
@@ -0,0 +1,2094 @@ | |||
1 | // Copyright 2010 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package html | ||
6 | |||
7 | import ( | ||
8 | "errors" | ||
9 | "fmt" | ||
10 | "io" | ||
11 | "strings" | ||
12 | |||
13 | a "golang.org/x/net/html/atom" | ||
14 | ) | ||
15 | |||
16 | // A parser implements the HTML5 parsing algorithm: | ||
17 | // https://html.spec.whatwg.org/multipage/syntax.html#tree-construction | ||
18 | type parser struct { | ||
19 | // tokenizer provides the tokens for the parser. | ||
20 | tokenizer *Tokenizer | ||
21 | // tok is the most recently read token. | ||
22 | tok Token | ||
23 | // Self-closing tags like <hr/> are treated as start tags, except that | ||
24 | // hasSelfClosingToken is set while they are being processed. | ||
25 | hasSelfClosingToken bool | ||
26 | // doc is the document root element. | ||
27 | doc *Node | ||
28 | // The stack of open elements (section 12.2.3.2) and active formatting | ||
29 | // elements (section 12.2.3.3). | ||
30 | oe, afe nodeStack | ||
31 | // Element pointers (section 12.2.3.4). | ||
32 | head, form *Node | ||
33 | // Other parsing state flags (section 12.2.3.5). | ||
34 | scripting, framesetOK bool | ||
35 | // im is the current insertion mode. | ||
36 | im insertionMode | ||
37 | // originalIM is the insertion mode to go back to after completing a text | ||
38 | // or inTableText insertion mode. | ||
39 | originalIM insertionMode | ||
40 | // fosterParenting is whether new elements should be inserted according to | ||
41 | // the foster parenting rules (section 12.2.5.3). | ||
42 | fosterParenting bool | ||
43 | // quirks is whether the parser is operating in "quirks mode." | ||
44 | quirks bool | ||
45 | // fragment is whether the parser is parsing an HTML fragment. | ||
46 | fragment bool | ||
47 | // context is the context element when parsing an HTML fragment | ||
48 | // (section 12.4). | ||
49 | context *Node | ||
50 | } | ||
51 | |||
52 | func (p *parser) top() *Node { | ||
53 | if n := p.oe.top(); n != nil { | ||
54 | return n | ||
55 | } | ||
56 | return p.doc | ||
57 | } | ||
58 | |||
59 | // Stop tags for use in popUntil. These come from section 12.2.3.2. | ||
60 | var ( | ||
61 | defaultScopeStopTags = map[string][]a.Atom{ | ||
62 | "": {a.Applet, a.Caption, a.Html, a.Table, a.Td, a.Th, a.Marquee, a.Object, a.Template}, | ||
63 | "math": {a.AnnotationXml, a.Mi, a.Mn, a.Mo, a.Ms, a.Mtext}, | ||
64 | "svg": {a.Desc, a.ForeignObject, a.Title}, | ||
65 | } | ||
66 | ) | ||
67 | |||
68 | type scope int | ||
69 | |||
70 | const ( | ||
71 | defaultScope scope = iota | ||
72 | listItemScope | ||
73 | buttonScope | ||
74 | tableScope | ||
75 | tableRowScope | ||
76 | tableBodyScope | ||
77 | selectScope | ||
78 | ) | ||
79 | |||
80 | // popUntil pops the stack of open elements at the highest element whose tag | ||
81 | // is in matchTags, provided there is no higher element in the scope's stop | ||
82 | // tags (as defined in section 12.2.3.2). It returns whether or not there was | ||
83 | // such an element. If there was not, popUntil leaves the stack unchanged. | ||
84 | // | ||
85 | // For example, the set of stop tags for table scope is: "html", "table". If | ||
86 | // the stack was: | ||
87 | // ["html", "body", "font", "table", "b", "i", "u"] | ||
88 | // then popUntil(tableScope, "font") would return false, but | ||
89 | // popUntil(tableScope, "i") would return true and the stack would become: | ||
90 | // ["html", "body", "font", "table", "b"] | ||
91 | // | ||
92 | // If an element's tag is in both the stop tags and matchTags, then the stack | ||
93 | // will be popped and the function returns true (provided, of course, there was | ||
94 | // no higher element in the stack that was also in the stop tags). For example, | ||
95 | // popUntil(tableScope, "table") returns true and leaves: | ||
96 | // ["html", "body", "font"] | ||
97 | func (p *parser) popUntil(s scope, matchTags ...a.Atom) bool { | ||
98 | if i := p.indexOfElementInScope(s, matchTags...); i != -1 { | ||
99 | p.oe = p.oe[:i] | ||
100 | return true | ||
101 | } | ||
102 | return false | ||
103 | } | ||
104 | |||
105 | // indexOfElementInScope returns the index in p.oe of the highest element whose | ||
106 | // tag is in matchTags that is in scope. If no matching element is in scope, it | ||
107 | // returns -1. | ||
108 | func (p *parser) indexOfElementInScope(s scope, matchTags ...a.Atom) int { | ||
109 | for i := len(p.oe) - 1; i >= 0; i-- { | ||
110 | tagAtom := p.oe[i].DataAtom | ||
111 | if p.oe[i].Namespace == "" { | ||
112 | for _, t := range matchTags { | ||
113 | if t == tagAtom { | ||
114 | return i | ||
115 | } | ||
116 | } | ||
117 | switch s { | ||
118 | case defaultScope: | ||
119 | // No-op. | ||
120 | case listItemScope: | ||
121 | if tagAtom == a.Ol || tagAtom == a.Ul { | ||
122 | return -1 | ||
123 | } | ||
124 | case buttonScope: | ||
125 | if tagAtom == a.Button { | ||
126 | return -1 | ||
127 | } | ||
128 | case tableScope: | ||
129 | if tagAtom == a.Html || tagAtom == a.Table { | ||
130 | return -1 | ||
131 | } | ||
132 | case selectScope: | ||
133 | if tagAtom != a.Optgroup && tagAtom != a.Option { | ||
134 | return -1 | ||
135 | } | ||
136 | default: | ||
137 | panic("unreachable") | ||
138 | } | ||
139 | } | ||
140 | switch s { | ||
141 | case defaultScope, listItemScope, buttonScope: | ||
142 | for _, t := range defaultScopeStopTags[p.oe[i].Namespace] { | ||
143 | if t == tagAtom { | ||
144 | return -1 | ||
145 | } | ||
146 | } | ||
147 | } | ||
148 | } | ||
149 | return -1 | ||
150 | } | ||
151 | |||
152 | // elementInScope is like popUntil, except that it doesn't modify the stack of | ||
153 | // open elements. | ||
154 | func (p *parser) elementInScope(s scope, matchTags ...a.Atom) bool { | ||
155 | return p.indexOfElementInScope(s, matchTags...) != -1 | ||
156 | } | ||
157 | |||
158 | // clearStackToContext pops elements off the stack of open elements until a | ||
159 | // scope-defined element is found. | ||
160 | func (p *parser) clearStackToContext(s scope) { | ||
161 | for i := len(p.oe) - 1; i >= 0; i-- { | ||
162 | tagAtom := p.oe[i].DataAtom | ||
163 | switch s { | ||
164 | case tableScope: | ||
165 | if tagAtom == a.Html || tagAtom == a.Table { | ||
166 | p.oe = p.oe[:i+1] | ||
167 | return | ||
168 | } | ||
169 | case tableRowScope: | ||
170 | if tagAtom == a.Html || tagAtom == a.Tr { | ||
171 | p.oe = p.oe[:i+1] | ||
172 | return | ||
173 | } | ||
174 | case tableBodyScope: | ||
175 | if tagAtom == a.Html || tagAtom == a.Tbody || tagAtom == a.Tfoot || tagAtom == a.Thead { | ||
176 | p.oe = p.oe[:i+1] | ||
177 | return | ||
178 | } | ||
179 | default: | ||
180 | panic("unreachable") | ||
181 | } | ||
182 | } | ||
183 | } | ||
184 | |||
185 | // generateImpliedEndTags pops nodes off the stack of open elements as long as | ||
186 | // the top node has a tag name of dd, dt, li, option, optgroup, p, rp, or rt. | ||
187 | // If exceptions are specified, nodes with that name will not be popped off. | ||
188 | func (p *parser) generateImpliedEndTags(exceptions ...string) { | ||
189 | var i int | ||
190 | loop: | ||
191 | for i = len(p.oe) - 1; i >= 0; i-- { | ||
192 | n := p.oe[i] | ||
193 | if n.Type == ElementNode { | ||
194 | switch n.DataAtom { | ||
195 | case a.Dd, a.Dt, a.Li, a.Option, a.Optgroup, a.P, a.Rp, a.Rt: | ||
196 | for _, except := range exceptions { | ||
197 | if n.Data == except { | ||
198 | break loop | ||
199 | } | ||
200 | } | ||
201 | continue | ||
202 | } | ||
203 | } | ||
204 | break | ||
205 | } | ||
206 | |||
207 | p.oe = p.oe[:i+1] | ||
208 | } | ||
209 | |||
210 | // addChild adds a child node n to the top element, and pushes n onto the stack | ||
211 | // of open elements if it is an element node. | ||
212 | func (p *parser) addChild(n *Node) { | ||
213 | if p.shouldFosterParent() { | ||
214 | p.fosterParent(n) | ||
215 | } else { | ||
216 | p.top().AppendChild(n) | ||
217 | } | ||
218 | |||
219 | if n.Type == ElementNode { | ||
220 | p.oe = append(p.oe, n) | ||
221 | } | ||
222 | } | ||
223 | |||
224 | // shouldFosterParent returns whether the next node to be added should be | ||
225 | // foster parented. | ||
226 | func (p *parser) shouldFosterParent() bool { | ||
227 | if p.fosterParenting { | ||
228 | switch p.top().DataAtom { | ||
229 | case a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr: | ||
230 | return true | ||
231 | } | ||
232 | } | ||
233 | return false | ||
234 | } | ||
235 | |||
236 | // fosterParent adds a child node according to the foster parenting rules. | ||
237 | // Section 12.2.5.3, "foster parenting". | ||
238 | func (p *parser) fosterParent(n *Node) { | ||
239 | var table, parent, prev *Node | ||
240 | var i int | ||
241 | for i = len(p.oe) - 1; i >= 0; i-- { | ||
242 | if p.oe[i].DataAtom == a.Table { | ||
243 | table = p.oe[i] | ||
244 | break | ||
245 | } | ||
246 | } | ||
247 | |||
248 | if table == nil { | ||
249 | // The foster parent is the html element. | ||
250 | parent = p.oe[0] | ||
251 | } else { | ||
252 | parent = table.Parent | ||
253 | } | ||
254 | if parent == nil { | ||
255 | parent = p.oe[i-1] | ||
256 | } | ||
257 | |||
258 | if table != nil { | ||
259 | prev = table.PrevSibling | ||
260 | } else { | ||
261 | prev = parent.LastChild | ||
262 | } | ||
263 | if prev != nil && prev.Type == TextNode && n.Type == TextNode { | ||
264 | prev.Data += n.Data | ||
265 | return | ||
266 | } | ||
267 | |||
268 | parent.InsertBefore(n, table) | ||
269 | } | ||
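
Foster parenting is easiest to see from the outside: character data that appears directly inside a `<table>` is re-homed before the table element. A minimal usage sketch, assuming the package is imported by its upstream path golang.org/x/net/html (the path it is vendored under); the output noted in the comment is approximate.

```go
package main

import (
	"log"
	"os"
	"strings"

	"golang.org/x/net/html" // upstream path of this vendored package
)

func main() {
	// "oops" is text directly inside <table>, so the parser foster-parents it:
	// the text ends up before the table rather than inside it.
	doc, err := html.Parse(strings.NewReader("<table>oops<tr><td>cell</td></tr></table>"))
	if err != nil {
		log.Fatal(err)
	}
	// Roughly: <html><head></head><body>oops<table><tbody><tr><td>cell</td></tr></tbody></table></body></html>
	if err := html.Render(os.Stdout, doc); err != nil {
		log.Fatal(err)
	}
}
```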
270 | |||
271 | // addText adds text to the preceding node if it is a text node, or else it | ||
272 | // calls addChild with a new text node. | ||
273 | func (p *parser) addText(text string) { | ||
274 | if text == "" { | ||
275 | return | ||
276 | } | ||
277 | |||
278 | if p.shouldFosterParent() { | ||
279 | p.fosterParent(&Node{ | ||
280 | Type: TextNode, | ||
281 | Data: text, | ||
282 | }) | ||
283 | return | ||
284 | } | ||
285 | |||
286 | t := p.top() | ||
287 | if n := t.LastChild; n != nil && n.Type == TextNode { | ||
288 | n.Data += text | ||
289 | return | ||
290 | } | ||
291 | p.addChild(&Node{ | ||
292 | Type: TextNode, | ||
293 | Data: text, | ||
294 | }) | ||
295 | } | ||
296 | |||
297 | // addElement adds a child element based on the current token. | ||
298 | func (p *parser) addElement() { | ||
299 | p.addChild(&Node{ | ||
300 | Type: ElementNode, | ||
301 | DataAtom: p.tok.DataAtom, | ||
302 | Data: p.tok.Data, | ||
303 | Attr: p.tok.Attr, | ||
304 | }) | ||
305 | } | ||
306 | |||
307 | // Section 12.2.3.3. | ||
308 | func (p *parser) addFormattingElement() { | ||
309 | tagAtom, attr := p.tok.DataAtom, p.tok.Attr | ||
310 | p.addElement() | ||
311 | |||
312 | // Implement the Noah's Ark clause, but with three per family instead of two. | ||
313 | identicalElements := 0 | ||
314 | findIdenticalElements: | ||
315 | for i := len(p.afe) - 1; i >= 0; i-- { | ||
316 | n := p.afe[i] | ||
317 | if n.Type == scopeMarkerNode { | ||
318 | break | ||
319 | } | ||
320 | if n.Type != ElementNode { | ||
321 | continue | ||
322 | } | ||
323 | if n.Namespace != "" { | ||
324 | continue | ||
325 | } | ||
326 | if n.DataAtom != tagAtom { | ||
327 | continue | ||
328 | } | ||
329 | if len(n.Attr) != len(attr) { | ||
330 | continue | ||
331 | } | ||
332 | compareAttributes: | ||
333 | for _, t0 := range n.Attr { | ||
334 | for _, t1 := range attr { | ||
335 | if t0.Key == t1.Key && t0.Namespace == t1.Namespace && t0.Val == t1.Val { | ||
336 | // Found a match for this attribute, continue with the next attribute. | ||
337 | continue compareAttributes | ||
338 | } | ||
339 | } | ||
340 | // If we get here, there is no attribute that matches t0. | ||
341 | // Therefore the element is not identical to the new one. | ||
342 | continue findIdenticalElements | ||
343 | } | ||
344 | |||
345 | identicalElements++ | ||
346 | if identicalElements >= 3 { | ||
347 | p.afe.remove(n) | ||
348 | } | ||
349 | } | ||
350 | |||
351 | p.afe = append(p.afe, p.top()) | ||
352 | } | ||
353 | |||
354 | // Section 12.2.3.3. | ||
355 | func (p *parser) clearActiveFormattingElements() { | ||
356 | for { | ||
357 | n := p.afe.pop() | ||
358 | if len(p.afe) == 0 || n.Type == scopeMarkerNode { | ||
359 | return | ||
360 | } | ||
361 | } | ||
362 | } | ||
363 | |||
364 | // Section 12.2.3.3. | ||
365 | func (p *parser) reconstructActiveFormattingElements() { | ||
366 | n := p.afe.top() | ||
367 | if n == nil { | ||
368 | return | ||
369 | } | ||
370 | if n.Type == scopeMarkerNode || p.oe.index(n) != -1 { | ||
371 | return | ||
372 | } | ||
373 | i := len(p.afe) - 1 | ||
374 | for n.Type != scopeMarkerNode && p.oe.index(n) == -1 { | ||
375 | if i == 0 { | ||
376 | i = -1 | ||
377 | break | ||
378 | } | ||
379 | i-- | ||
380 | n = p.afe[i] | ||
381 | } | ||
382 | for { | ||
383 | i++ | ||
384 | clone := p.afe[i].clone() | ||
385 | p.addChild(clone) | ||
386 | p.afe[i] = clone | ||
387 | if i == len(p.afe)-1 { | ||
388 | break | ||
389 | } | ||
390 | } | ||
391 | } | ||
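
The effect of reconstructing active formatting elements shows up when a formatting tag is left open across a block boundary. A minimal sketch, again assuming the upstream import path golang.org/x/net/html; the output noted in the comment is approximate.

```go
package main

import (
	"log"
	"os"
	"strings"

	"golang.org/x/net/html" // upstream path of this vendored package
)

func main() {
	// The <b> opened in the first paragraph is still an active formatting
	// element when the second paragraph starts, so it is reconstructed
	// around "two".
	doc, err := html.Parse(strings.NewReader("<p><b>one<p>two"))
	if err != nil {
		log.Fatal(err)
	}
	// Roughly: ...<body><p><b>one</b></p><p><b>two</b></p></body>...
	if err := html.Render(os.Stdout, doc); err != nil {
		log.Fatal(err)
	}
}
```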
392 | |||
393 | // Section 12.2.4. | ||
394 | func (p *parser) acknowledgeSelfClosingTag() { | ||
395 | p.hasSelfClosingToken = false | ||
396 | } | ||
397 | |||
398 | // An insertion mode (section 12.2.3.1) is the state transition function from | ||
399 | // a particular state in the HTML5 parser's state machine. It updates the | ||
400 | // parser's fields depending on parser.tok (where ErrorToken means EOF). | ||
401 | // It returns whether the token was consumed. | ||
402 | type insertionMode func(*parser) bool | ||
403 | |||
404 | // setOriginalIM sets the insertion mode to return to after completing a text or | ||
405 | // inTableText insertion mode. | ||
406 | // Section 12.2.3.1, "using the rules for". | ||
407 | func (p *parser) setOriginalIM() { | ||
408 | if p.originalIM != nil { | ||
409 | panic("html: bad parser state: originalIM was set twice") | ||
410 | } | ||
411 | p.originalIM = p.im | ||
412 | } | ||
413 | |||
414 | // Section 12.2.3.1, "reset the insertion mode". | ||
415 | func (p *parser) resetInsertionMode() { | ||
416 | for i := len(p.oe) - 1; i >= 0; i-- { | ||
417 | n := p.oe[i] | ||
418 | if i == 0 && p.context != nil { | ||
419 | n = p.context | ||
420 | } | ||
421 | |||
422 | switch n.DataAtom { | ||
423 | case a.Select: | ||
424 | p.im = inSelectIM | ||
425 | case a.Td, a.Th: | ||
426 | p.im = inCellIM | ||
427 | case a.Tr: | ||
428 | p.im = inRowIM | ||
429 | case a.Tbody, a.Thead, a.Tfoot: | ||
430 | p.im = inTableBodyIM | ||
431 | case a.Caption: | ||
432 | p.im = inCaptionIM | ||
433 | case a.Colgroup: | ||
434 | p.im = inColumnGroupIM | ||
435 | case a.Table: | ||
436 | p.im = inTableIM | ||
437 | case a.Head: | ||
438 | p.im = inBodyIM | ||
439 | case a.Body: | ||
440 | p.im = inBodyIM | ||
441 | case a.Frameset: | ||
442 | p.im = inFramesetIM | ||
443 | case a.Html: | ||
444 | p.im = beforeHeadIM | ||
445 | default: | ||
446 | continue | ||
447 | } | ||
448 | return | ||
449 | } | ||
450 | p.im = inBodyIM | ||
451 | } | ||
452 | |||
453 | const whitespace = " \t\r\n\f" | ||
454 | |||
455 | // Section 12.2.5.4.1. | ||
456 | func initialIM(p *parser) bool { | ||
457 | switch p.tok.Type { | ||
458 | case TextToken: | ||
459 | p.tok.Data = strings.TrimLeft(p.tok.Data, whitespace) | ||
460 | if len(p.tok.Data) == 0 { | ||
461 | // It was all whitespace, so ignore it. | ||
462 | return true | ||
463 | } | ||
464 | case CommentToken: | ||
465 | p.doc.AppendChild(&Node{ | ||
466 | Type: CommentNode, | ||
467 | Data: p.tok.Data, | ||
468 | }) | ||
469 | return true | ||
470 | case DoctypeToken: | ||
471 | n, quirks := parseDoctype(p.tok.Data) | ||
472 | p.doc.AppendChild(n) | ||
473 | p.quirks = quirks | ||
474 | p.im = beforeHTMLIM | ||
475 | return true | ||
476 | } | ||
477 | p.quirks = true | ||
478 | p.im = beforeHTMLIM | ||
479 | return false | ||
480 | } | ||
481 | |||
482 | // Section 12.2.5.4.2. | ||
483 | func beforeHTMLIM(p *parser) bool { | ||
484 | switch p.tok.Type { | ||
485 | case DoctypeToken: | ||
486 | // Ignore the token. | ||
487 | return true | ||
488 | case TextToken: | ||
489 | p.tok.Data = strings.TrimLeft(p.tok.Data, whitespace) | ||
490 | if len(p.tok.Data) == 0 { | ||
491 | // It was all whitespace, so ignore it. | ||
492 | return true | ||
493 | } | ||
494 | case StartTagToken: | ||
495 | if p.tok.DataAtom == a.Html { | ||
496 | p.addElement() | ||
497 | p.im = beforeHeadIM | ||
498 | return true | ||
499 | } | ||
500 | case EndTagToken: | ||
501 | switch p.tok.DataAtom { | ||
502 | case a.Head, a.Body, a.Html, a.Br: | ||
503 | p.parseImpliedToken(StartTagToken, a.Html, a.Html.String()) | ||
504 | return false | ||
505 | default: | ||
506 | // Ignore the token. | ||
507 | return true | ||
508 | } | ||
509 | case CommentToken: | ||
510 | p.doc.AppendChild(&Node{ | ||
511 | Type: CommentNode, | ||
512 | Data: p.tok.Data, | ||
513 | }) | ||
514 | return true | ||
515 | } | ||
516 | p.parseImpliedToken(StartTagToken, a.Html, a.Html.String()) | ||
517 | return false | ||
518 | } | ||
519 | |||
520 | // Section 12.2.5.4.3. | ||
521 | func beforeHeadIM(p *parser) bool { | ||
522 | switch p.tok.Type { | ||
523 | case TextToken: | ||
524 | p.tok.Data = strings.TrimLeft(p.tok.Data, whitespace) | ||
525 | if len(p.tok.Data) == 0 { | ||
526 | // It was all whitespace, so ignore it. | ||
527 | return true | ||
528 | } | ||
529 | case StartTagToken: | ||
530 | switch p.tok.DataAtom { | ||
531 | case a.Head: | ||
532 | p.addElement() | ||
533 | p.head = p.top() | ||
534 | p.im = inHeadIM | ||
535 | return true | ||
536 | case a.Html: | ||
537 | return inBodyIM(p) | ||
538 | } | ||
539 | case EndTagToken: | ||
540 | switch p.tok.DataAtom { | ||
541 | case a.Head, a.Body, a.Html, a.Br: | ||
542 | p.parseImpliedToken(StartTagToken, a.Head, a.Head.String()) | ||
543 | return false | ||
544 | default: | ||
545 | // Ignore the token. | ||
546 | return true | ||
547 | } | ||
548 | case CommentToken: | ||
549 | p.addChild(&Node{ | ||
550 | Type: CommentNode, | ||
551 | Data: p.tok.Data, | ||
552 | }) | ||
553 | return true | ||
554 | case DoctypeToken: | ||
555 | // Ignore the token. | ||
556 | return true | ||
557 | } | ||
558 | |||
559 | p.parseImpliedToken(StartTagToken, a.Head, a.Head.String()) | ||
560 | return false | ||
561 | } | ||
562 | |||
563 | // Section 12.2.5.4.4. | ||
564 | func inHeadIM(p *parser) bool { | ||
565 | switch p.tok.Type { | ||
566 | case TextToken: | ||
567 | s := strings.TrimLeft(p.tok.Data, whitespace) | ||
568 | if len(s) < len(p.tok.Data) { | ||
569 | // Add the initial whitespace to the current node. | ||
570 | p.addText(p.tok.Data[:len(p.tok.Data)-len(s)]) | ||
571 | if s == "" { | ||
572 | return true | ||
573 | } | ||
574 | p.tok.Data = s | ||
575 | } | ||
576 | case StartTagToken: | ||
577 | switch p.tok.DataAtom { | ||
578 | case a.Html: | ||
579 | return inBodyIM(p) | ||
580 | case a.Base, a.Basefont, a.Bgsound, a.Command, a.Link, a.Meta: | ||
581 | p.addElement() | ||
582 | p.oe.pop() | ||
583 | p.acknowledgeSelfClosingTag() | ||
584 | return true | ||
585 | case a.Script, a.Title, a.Noscript, a.Noframes, a.Style: | ||
586 | p.addElement() | ||
587 | p.setOriginalIM() | ||
588 | p.im = textIM | ||
589 | return true | ||
590 | case a.Head: | ||
591 | // Ignore the token. | ||
592 | return true | ||
593 | } | ||
594 | case EndTagToken: | ||
595 | switch p.tok.DataAtom { | ||
596 | case a.Head: | ||
597 | n := p.oe.pop() | ||
598 | if n.DataAtom != a.Head { | ||
599 | panic("html: bad parser state: <head> element not found, in the in-head insertion mode") | ||
600 | } | ||
601 | p.im = afterHeadIM | ||
602 | return true | ||
603 | case a.Body, a.Html, a.Br: | ||
604 | p.parseImpliedToken(EndTagToken, a.Head, a.Head.String()) | ||
605 | return false | ||
606 | default: | ||
607 | // Ignore the token. | ||
608 | return true | ||
609 | } | ||
610 | case CommentToken: | ||
611 | p.addChild(&Node{ | ||
612 | Type: CommentNode, | ||
613 | Data: p.tok.Data, | ||
614 | }) | ||
615 | return true | ||
616 | case DoctypeToken: | ||
617 | // Ignore the token. | ||
618 | return true | ||
619 | } | ||
620 | |||
621 | p.parseImpliedToken(EndTagToken, a.Head, a.Head.String()) | ||
622 | return false | ||
623 | } | ||
624 | |||
625 | // Section 12.2.5.4.6. | ||
626 | func afterHeadIM(p *parser) bool { | ||
627 | switch p.tok.Type { | ||
628 | case TextToken: | ||
629 | s := strings.TrimLeft(p.tok.Data, whitespace) | ||
630 | if len(s) < len(p.tok.Data) { | ||
631 | // Add the initial whitespace to the current node. | ||
632 | p.addText(p.tok.Data[:len(p.tok.Data)-len(s)]) | ||
633 | if s == "" { | ||
634 | return true | ||
635 | } | ||
636 | p.tok.Data = s | ||
637 | } | ||
638 | case StartTagToken: | ||
639 | switch p.tok.DataAtom { | ||
640 | case a.Html: | ||
641 | return inBodyIM(p) | ||
642 | case a.Body: | ||
643 | p.addElement() | ||
644 | p.framesetOK = false | ||
645 | p.im = inBodyIM | ||
646 | return true | ||
647 | case a.Frameset: | ||
648 | p.addElement() | ||
649 | p.im = inFramesetIM | ||
650 | return true | ||
651 | case a.Base, a.Basefont, a.Bgsound, a.Link, a.Meta, a.Noframes, a.Script, a.Style, a.Title: | ||
652 | p.oe = append(p.oe, p.head) | ||
653 | defer p.oe.remove(p.head) | ||
654 | return inHeadIM(p) | ||
655 | case a.Head: | ||
656 | // Ignore the token. | ||
657 | return true | ||
658 | } | ||
659 | case EndTagToken: | ||
660 | switch p.tok.DataAtom { | ||
661 | case a.Body, a.Html, a.Br: | ||
662 | // Drop down to creating an implied <body> tag. | ||
663 | default: | ||
664 | // Ignore the token. | ||
665 | return true | ||
666 | } | ||
667 | case CommentToken: | ||
668 | p.addChild(&Node{ | ||
669 | Type: CommentNode, | ||
670 | Data: p.tok.Data, | ||
671 | }) | ||
672 | return true | ||
673 | case DoctypeToken: | ||
674 | // Ignore the token. | ||
675 | return true | ||
676 | } | ||
677 | |||
678 | p.parseImpliedToken(StartTagToken, a.Body, a.Body.String()) | ||
679 | p.framesetOK = true | ||
680 | return false | ||
681 | } | ||
682 | |||
683 | // copyAttributes copies attributes of src not found on dst to dst. | ||
684 | func copyAttributes(dst *Node, src Token) { | ||
685 | if len(src.Attr) == 0 { | ||
686 | return | ||
687 | } | ||
688 | attr := map[string]string{} | ||
689 | for _, t := range dst.Attr { | ||
690 | attr[t.Key] = t.Val | ||
691 | } | ||
692 | for _, t := range src.Attr { | ||
693 | if _, ok := attr[t.Key]; !ok { | ||
694 | dst.Attr = append(dst.Attr, t) | ||
695 | attr[t.Key] = t.Val | ||
696 | } | ||
697 | } | ||
698 | } | ||
699 | |||
700 | // Section 12.2.5.4.7. | ||
701 | func inBodyIM(p *parser) bool { | ||
702 | switch p.tok.Type { | ||
703 | case TextToken: | ||
704 | d := p.tok.Data | ||
705 | switch n := p.oe.top(); n.DataAtom { | ||
706 | case a.Pre, a.Listing: | ||
707 | if n.FirstChild == nil { | ||
708 | // Ignore a newline at the start of a <pre> block. | ||
709 | if d != "" && d[0] == '\r' { | ||
710 | d = d[1:] | ||
711 | } | ||
712 | if d != "" && d[0] == '\n' { | ||
713 | d = d[1:] | ||
714 | } | ||
715 | } | ||
716 | } | ||
717 | d = strings.Replace(d, "\x00", "", -1) | ||
718 | if d == "" { | ||
719 | return true | ||
720 | } | ||
721 | p.reconstructActiveFormattingElements() | ||
722 | p.addText(d) | ||
723 | if p.framesetOK && strings.TrimLeft(d, whitespace) != "" { | ||
724 | // There were non-whitespace characters inserted. | ||
725 | p.framesetOK = false | ||
726 | } | ||
727 | case StartTagToken: | ||
728 | switch p.tok.DataAtom { | ||
729 | case a.Html: | ||
730 | copyAttributes(p.oe[0], p.tok) | ||
731 | case a.Base, a.Basefont, a.Bgsound, a.Command, a.Link, a.Meta, a.Noframes, a.Script, a.Style, a.Title: | ||
732 | return inHeadIM(p) | ||
733 | case a.Body: | ||
734 | if len(p.oe) >= 2 { | ||
735 | body := p.oe[1] | ||
736 | if body.Type == ElementNode && body.DataAtom == a.Body { | ||
737 | p.framesetOK = false | ||
738 | copyAttributes(body, p.tok) | ||
739 | } | ||
740 | } | ||
741 | case a.Frameset: | ||
742 | if !p.framesetOK || len(p.oe) < 2 || p.oe[1].DataAtom != a.Body { | ||
743 | // Ignore the token. | ||
744 | return true | ||
745 | } | ||
746 | body := p.oe[1] | ||
747 | if body.Parent != nil { | ||
748 | body.Parent.RemoveChild(body) | ||
749 | } | ||
750 | p.oe = p.oe[:1] | ||
751 | p.addElement() | ||
752 | p.im = inFramesetIM | ||
753 | return true | ||
754 | case a.Address, a.Article, a.Aside, a.Blockquote, a.Center, a.Details, a.Dir, a.Div, a.Dl, a.Fieldset, a.Figcaption, a.Figure, a.Footer, a.Header, a.Hgroup, a.Menu, a.Nav, a.Ol, a.P, a.Section, a.Summary, a.Ul: | ||
755 | p.popUntil(buttonScope, a.P) | ||
756 | p.addElement() | ||
757 | case a.H1, a.H2, a.H3, a.H4, a.H5, a.H6: | ||
758 | p.popUntil(buttonScope, a.P) | ||
759 | switch n := p.top(); n.DataAtom { | ||
760 | case a.H1, a.H2, a.H3, a.H4, a.H5, a.H6: | ||
761 | p.oe.pop() | ||
762 | } | ||
763 | p.addElement() | ||
764 | case a.Pre, a.Listing: | ||
765 | p.popUntil(buttonScope, a.P) | ||
766 | p.addElement() | ||
767 | // The newline, if any, will be dealt with by the TextToken case. | ||
768 | p.framesetOK = false | ||
769 | case a.Form: | ||
770 | if p.form == nil { | ||
771 | p.popUntil(buttonScope, a.P) | ||
772 | p.addElement() | ||
773 | p.form = p.top() | ||
774 | } | ||
775 | case a.Li: | ||
776 | p.framesetOK = false | ||
777 | for i := len(p.oe) - 1; i >= 0; i-- { | ||
778 | node := p.oe[i] | ||
779 | switch node.DataAtom { | ||
780 | case a.Li: | ||
781 | p.oe = p.oe[:i] | ||
782 | case a.Address, a.Div, a.P: | ||
783 | continue | ||
784 | default: | ||
785 | if !isSpecialElement(node) { | ||
786 | continue | ||
787 | } | ||
788 | } | ||
789 | break | ||
790 | } | ||
791 | p.popUntil(buttonScope, a.P) | ||
792 | p.addElement() | ||
793 | case a.Dd, a.Dt: | ||
794 | p.framesetOK = false | ||
795 | for i := len(p.oe) - 1; i >= 0; i-- { | ||
796 | node := p.oe[i] | ||
797 | switch node.DataAtom { | ||
798 | case a.Dd, a.Dt: | ||
799 | p.oe = p.oe[:i] | ||
800 | case a.Address, a.Div, a.P: | ||
801 | continue | ||
802 | default: | ||
803 | if !isSpecialElement(node) { | ||
804 | continue | ||
805 | } | ||
806 | } | ||
807 | break | ||
808 | } | ||
809 | p.popUntil(buttonScope, a.P) | ||
810 | p.addElement() | ||
811 | case a.Plaintext: | ||
812 | p.popUntil(buttonScope, a.P) | ||
813 | p.addElement() | ||
814 | case a.Button: | ||
815 | p.popUntil(defaultScope, a.Button) | ||
816 | p.reconstructActiveFormattingElements() | ||
817 | p.addElement() | ||
818 | p.framesetOK = false | ||
819 | case a.A: | ||
820 | for i := len(p.afe) - 1; i >= 0 && p.afe[i].Type != scopeMarkerNode; i-- { | ||
821 | if n := p.afe[i]; n.Type == ElementNode && n.DataAtom == a.A { | ||
822 | p.inBodyEndTagFormatting(a.A) | ||
823 | p.oe.remove(n) | ||
824 | p.afe.remove(n) | ||
825 | break | ||
826 | } | ||
827 | } | ||
828 | p.reconstructActiveFormattingElements() | ||
829 | p.addFormattingElement() | ||
830 | case a.B, a.Big, a.Code, a.Em, a.Font, a.I, a.S, a.Small, a.Strike, a.Strong, a.Tt, a.U: | ||
831 | p.reconstructActiveFormattingElements() | ||
832 | p.addFormattingElement() | ||
833 | case a.Nobr: | ||
834 | p.reconstructActiveFormattingElements() | ||
835 | if p.elementInScope(defaultScope, a.Nobr) { | ||
836 | p.inBodyEndTagFormatting(a.Nobr) | ||
837 | p.reconstructActiveFormattingElements() | ||
838 | } | ||
839 | p.addFormattingElement() | ||
840 | case a.Applet, a.Marquee, a.Object: | ||
841 | p.reconstructActiveFormattingElements() | ||
842 | p.addElement() | ||
843 | p.afe = append(p.afe, &scopeMarker) | ||
844 | p.framesetOK = false | ||
845 | case a.Table: | ||
846 | if !p.quirks { | ||
847 | p.popUntil(buttonScope, a.P) | ||
848 | } | ||
849 | p.addElement() | ||
850 | p.framesetOK = false | ||
851 | p.im = inTableIM | ||
852 | return true | ||
853 | case a.Area, a.Br, a.Embed, a.Img, a.Input, a.Keygen, a.Wbr: | ||
854 | p.reconstructActiveFormattingElements() | ||
855 | p.addElement() | ||
856 | p.oe.pop() | ||
857 | p.acknowledgeSelfClosingTag() | ||
858 | if p.tok.DataAtom == a.Input { | ||
859 | for _, t := range p.tok.Attr { | ||
860 | if t.Key == "type" { | ||
861 | if strings.ToLower(t.Val) == "hidden" { | ||
862 | // Skip setting framesetOK = false | ||
863 | return true | ||
864 | } | ||
865 | } | ||
866 | } | ||
867 | } | ||
868 | p.framesetOK = false | ||
869 | case a.Param, a.Source, a.Track: | ||
870 | p.addElement() | ||
871 | p.oe.pop() | ||
872 | p.acknowledgeSelfClosingTag() | ||
873 | case a.Hr: | ||
874 | p.popUntil(buttonScope, a.P) | ||
875 | p.addElement() | ||
876 | p.oe.pop() | ||
877 | p.acknowledgeSelfClosingTag() | ||
878 | p.framesetOK = false | ||
879 | case a.Image: | ||
880 | p.tok.DataAtom = a.Img | ||
881 | p.tok.Data = a.Img.String() | ||
882 | return false | ||
883 | case a.Isindex: | ||
884 | if p.form != nil { | ||
885 | // Ignore the token. | ||
886 | return true | ||
887 | } | ||
888 | action := "" | ||
889 | prompt := "This is a searchable index. Enter search keywords: " | ||
890 | attr := []Attribute{{Key: "name", Val: "isindex"}} | ||
891 | for _, t := range p.tok.Attr { | ||
892 | switch t.Key { | ||
893 | case "action": | ||
894 | action = t.Val | ||
895 | case "name": | ||
896 | // Ignore the attribute. | ||
897 | case "prompt": | ||
898 | prompt = t.Val | ||
899 | default: | ||
900 | attr = append(attr, t) | ||
901 | } | ||
902 | } | ||
903 | p.acknowledgeSelfClosingTag() | ||
904 | p.popUntil(buttonScope, a.P) | ||
905 | p.parseImpliedToken(StartTagToken, a.Form, a.Form.String()) | ||
906 | if action != "" { | ||
907 | p.form.Attr = []Attribute{{Key: "action", Val: action}} | ||
908 | } | ||
909 | p.parseImpliedToken(StartTagToken, a.Hr, a.Hr.String()) | ||
910 | p.parseImpliedToken(StartTagToken, a.Label, a.Label.String()) | ||
911 | p.addText(prompt) | ||
912 | p.addChild(&Node{ | ||
913 | Type: ElementNode, | ||
914 | DataAtom: a.Input, | ||
915 | Data: a.Input.String(), | ||
916 | Attr: attr, | ||
917 | }) | ||
918 | p.oe.pop() | ||
919 | p.parseImpliedToken(EndTagToken, a.Label, a.Label.String()) | ||
920 | p.parseImpliedToken(StartTagToken, a.Hr, a.Hr.String()) | ||
921 | p.parseImpliedToken(EndTagToken, a.Form, a.Form.String()) | ||
922 | case a.Textarea: | ||
923 | p.addElement() | ||
924 | p.setOriginalIM() | ||
925 | p.framesetOK = false | ||
926 | p.im = textIM | ||
927 | case a.Xmp: | ||
928 | p.popUntil(buttonScope, a.P) | ||
929 | p.reconstructActiveFormattingElements() | ||
930 | p.framesetOK = false | ||
931 | p.addElement() | ||
932 | p.setOriginalIM() | ||
933 | p.im = textIM | ||
934 | case a.Iframe: | ||
935 | p.framesetOK = false | ||
936 | p.addElement() | ||
937 | p.setOriginalIM() | ||
938 | p.im = textIM | ||
939 | case a.Noembed, a.Noscript: | ||
940 | p.addElement() | ||
941 | p.setOriginalIM() | ||
942 | p.im = textIM | ||
943 | case a.Select: | ||
944 | p.reconstructActiveFormattingElements() | ||
945 | p.addElement() | ||
946 | p.framesetOK = false | ||
947 | p.im = inSelectIM | ||
948 | return true | ||
949 | case a.Optgroup, a.Option: | ||
950 | if p.top().DataAtom == a.Option { | ||
951 | p.oe.pop() | ||
952 | } | ||
953 | p.reconstructActiveFormattingElements() | ||
954 | p.addElement() | ||
955 | case a.Rp, a.Rt: | ||
956 | if p.elementInScope(defaultScope, a.Ruby) { | ||
957 | p.generateImpliedEndTags() | ||
958 | } | ||
959 | p.addElement() | ||
960 | case a.Math, a.Svg: | ||
961 | p.reconstructActiveFormattingElements() | ||
962 | if p.tok.DataAtom == a.Math { | ||
963 | adjustAttributeNames(p.tok.Attr, mathMLAttributeAdjustments) | ||
964 | } else { | ||
965 | adjustAttributeNames(p.tok.Attr, svgAttributeAdjustments) | ||
966 | } | ||
967 | adjustForeignAttributes(p.tok.Attr) | ||
968 | p.addElement() | ||
969 | p.top().Namespace = p.tok.Data | ||
970 | if p.hasSelfClosingToken { | ||
971 | p.oe.pop() | ||
972 | p.acknowledgeSelfClosingTag() | ||
973 | } | ||
974 | return true | ||
975 | case a.Caption, a.Col, a.Colgroup, a.Frame, a.Head, a.Tbody, a.Td, a.Tfoot, a.Th, a.Thead, a.Tr: | ||
976 | // Ignore the token. | ||
977 | default: | ||
978 | p.reconstructActiveFormattingElements() | ||
979 | p.addElement() | ||
980 | } | ||
981 | case EndTagToken: | ||
982 | switch p.tok.DataAtom { | ||
983 | case a.Body: | ||
984 | if p.elementInScope(defaultScope, a.Body) { | ||
985 | p.im = afterBodyIM | ||
986 | } | ||
987 | case a.Html: | ||
988 | if p.elementInScope(defaultScope, a.Body) { | ||
989 | p.parseImpliedToken(EndTagToken, a.Body, a.Body.String()) | ||
990 | return false | ||
991 | } | ||
992 | return true | ||
993 | case a.Address, a.Article, a.Aside, a.Blockquote, a.Button, a.Center, a.Details, a.Dir, a.Div, a.Dl, a.Fieldset, a.Figcaption, a.Figure, a.Footer, a.Header, a.Hgroup, a.Listing, a.Menu, a.Nav, a.Ol, a.Pre, a.Section, a.Summary, a.Ul: | ||
994 | p.popUntil(defaultScope, p.tok.DataAtom) | ||
995 | case a.Form: | ||
996 | node := p.form | ||
997 | p.form = nil | ||
998 | i := p.indexOfElementInScope(defaultScope, a.Form) | ||
999 | if node == nil || i == -1 || p.oe[i] != node { | ||
1000 | // Ignore the token. | ||
1001 | return true | ||
1002 | } | ||
1003 | p.generateImpliedEndTags() | ||
1004 | p.oe.remove(node) | ||
1005 | case a.P: | ||
1006 | if !p.elementInScope(buttonScope, a.P) { | ||
1007 | p.parseImpliedToken(StartTagToken, a.P, a.P.String()) | ||
1008 | } | ||
1009 | p.popUntil(buttonScope, a.P) | ||
1010 | case a.Li: | ||
1011 | p.popUntil(listItemScope, a.Li) | ||
1012 | case a.Dd, a.Dt: | ||
1013 | p.popUntil(defaultScope, p.tok.DataAtom) | ||
1014 | case a.H1, a.H2, a.H3, a.H4, a.H5, a.H6: | ||
1015 | p.popUntil(defaultScope, a.H1, a.H2, a.H3, a.H4, a.H5, a.H6) | ||
1016 | case a.A, a.B, a.Big, a.Code, a.Em, a.Font, a.I, a.Nobr, a.S, a.Small, a.Strike, a.Strong, a.Tt, a.U: | ||
1017 | p.inBodyEndTagFormatting(p.tok.DataAtom) | ||
1018 | case a.Applet, a.Marquee, a.Object: | ||
1019 | if p.popUntil(defaultScope, p.tok.DataAtom) { | ||
1020 | p.clearActiveFormattingElements() | ||
1021 | } | ||
1022 | case a.Br: | ||
1023 | p.tok.Type = StartTagToken | ||
1024 | return false | ||
1025 | default: | ||
1026 | p.inBodyEndTagOther(p.tok.DataAtom) | ||
1027 | } | ||
1028 | case CommentToken: | ||
1029 | p.addChild(&Node{ | ||
1030 | Type: CommentNode, | ||
1031 | Data: p.tok.Data, | ||
1032 | }) | ||
1033 | } | ||
1034 | |||
1035 | return true | ||
1036 | } | ||
1037 | |||
1038 | func (p *parser) inBodyEndTagFormatting(tagAtom a.Atom) { | ||
1039 | // This is the "adoption agency" algorithm, described at | ||
1040 | // https://html.spec.whatwg.org/multipage/syntax.html#adoptionAgency | ||
1041 | |||
1042 | // TODO: this is a fairly literal line-by-line translation of that algorithm. | ||
1043 | // Once the code successfully parses the comprehensive test suite, we should | ||
1044 | // refactor this code to be more idiomatic. | ||
1045 | |||
1046 | // Steps 1-4. The outer loop. | ||
1047 | for i := 0; i < 8; i++ { | ||
1048 | // Step 5. Find the formatting element. | ||
1049 | var formattingElement *Node | ||
1050 | for j := len(p.afe) - 1; j >= 0; j-- { | ||
1051 | if p.afe[j].Type == scopeMarkerNode { | ||
1052 | break | ||
1053 | } | ||
1054 | if p.afe[j].DataAtom == tagAtom { | ||
1055 | formattingElement = p.afe[j] | ||
1056 | break | ||
1057 | } | ||
1058 | } | ||
1059 | if formattingElement == nil { | ||
1060 | p.inBodyEndTagOther(tagAtom) | ||
1061 | return | ||
1062 | } | ||
1063 | feIndex := p.oe.index(formattingElement) | ||
1064 | if feIndex == -1 { | ||
1065 | p.afe.remove(formattingElement) | ||
1066 | return | ||
1067 | } | ||
1068 | if !p.elementInScope(defaultScope, tagAtom) { | ||
1069 | // Ignore the tag. | ||
1070 | return | ||
1071 | } | ||
1072 | |||
1073 | // Steps 9-10. Find the furthest block. | ||
1074 | var furthestBlock *Node | ||
1075 | for _, e := range p.oe[feIndex:] { | ||
1076 | if isSpecialElement(e) { | ||
1077 | furthestBlock = e | ||
1078 | break | ||
1079 | } | ||
1080 | } | ||
1081 | if furthestBlock == nil { | ||
1082 | e := p.oe.pop() | ||
1083 | for e != formattingElement { | ||
1084 | e = p.oe.pop() | ||
1085 | } | ||
1086 | p.afe.remove(e) | ||
1087 | return | ||
1088 | } | ||
1089 | |||
1090 | // Steps 11-12. Find the common ancestor and bookmark node. | ||
1091 | commonAncestor := p.oe[feIndex-1] | ||
1092 | bookmark := p.afe.index(formattingElement) | ||
1093 | |||
1094 | // Step 13. The inner loop. Find the lastNode to reparent. | ||
1095 | lastNode := furthestBlock | ||
1096 | node := furthestBlock | ||
1097 | x := p.oe.index(node) | ||
1098 | // Steps 13.1-13.2 | ||
1099 | for j := 0; j < 3; j++ { | ||
1100 | // Step 13.3. | ||
1101 | x-- | ||
1102 | node = p.oe[x] | ||
1103 | // Step 13.4 - 13.5. | ||
1104 | if p.afe.index(node) == -1 { | ||
1105 | p.oe.remove(node) | ||
1106 | continue | ||
1107 | } | ||
1108 | // Step 13.6. | ||
1109 | if node == formattingElement { | ||
1110 | break | ||
1111 | } | ||
1112 | // Step 13.7. | ||
1113 | clone := node.clone() | ||
1114 | p.afe[p.afe.index(node)] = clone | ||
1115 | p.oe[p.oe.index(node)] = clone | ||
1116 | node = clone | ||
1117 | // Step 13.8. | ||
1118 | if lastNode == furthestBlock { | ||
1119 | bookmark = p.afe.index(node) + 1 | ||
1120 | } | ||
1121 | // Step 13.9. | ||
1122 | if lastNode.Parent != nil { | ||
1123 | lastNode.Parent.RemoveChild(lastNode) | ||
1124 | } | ||
1125 | node.AppendChild(lastNode) | ||
1126 | // Step 13.10. | ||
1127 | lastNode = node | ||
1128 | } | ||
1129 | |||
1130 | // Step 14. Reparent lastNode to the common ancestor, | ||
1131 | // or for misnested table nodes, to the foster parent. | ||
1132 | if lastNode.Parent != nil { | ||
1133 | lastNode.Parent.RemoveChild(lastNode) | ||
1134 | } | ||
1135 | switch commonAncestor.DataAtom { | ||
1136 | case a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr: | ||
1137 | p.fosterParent(lastNode) | ||
1138 | default: | ||
1139 | commonAncestor.AppendChild(lastNode) | ||
1140 | } | ||
1141 | |||
1142 | // Steps 15-17. Reparent nodes from the furthest block's children | ||
1143 | // to a clone of the formatting element. | ||
1144 | clone := formattingElement.clone() | ||
1145 | reparentChildren(clone, furthestBlock) | ||
1146 | furthestBlock.AppendChild(clone) | ||
1147 | |||
1148 | // Step 18. Fix up the list of active formatting elements. | ||
1149 | if oldLoc := p.afe.index(formattingElement); oldLoc != -1 && oldLoc < bookmark { | ||
1150 | // Move the bookmark with the rest of the list. | ||
1151 | bookmark-- | ||
1152 | } | ||
1153 | p.afe.remove(formattingElement) | ||
1154 | p.afe.insert(bookmark, clone) | ||
1155 | |||
1156 | // Step 19. Fix up the stack of open elements. | ||
1157 | p.oe.remove(formattingElement) | ||
1158 | p.oe.insert(p.oe.index(furthestBlock)+1, clone) | ||
1159 | } | ||
1160 | } | ||
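
The adoption agency algorithm is what turns misnested formatting tags into a well-nested tree. A minimal sketch, assuming the upstream import path golang.org/x/net/html; the rendered output in the comment is the classic expected shape, shown approximately.

```go
package main

import (
	"log"
	"os"
	"strings"

	"golang.org/x/net/html" // upstream path of this vendored package
)

func main() {
	// Misnested formatting tags: </b> arrives while <i> is still open.
	// The adoption agency algorithm splits the <i> so the result nests cleanly.
	doc, err := html.Parse(strings.NewReader("<b>bold<i>both</b>italic</i>"))
	if err != nil {
		log.Fatal(err)
	}
	// Roughly: ...<body><b>bold<i>both</i></b><i>italic</i></body>...
	if err := html.Render(os.Stdout, doc); err != nil {
		log.Fatal(err)
	}
}
```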
1161 | |||
1162 | // inBodyEndTagOther performs the "any other end tag" algorithm for inBodyIM. | ||
1163 | // "Any other end tag" handling from 12.2.5.5 The rules for parsing tokens in foreign content | ||
1164 | // https://html.spec.whatwg.org/multipage/syntax.html#parsing-main-inforeign | ||
1165 | func (p *parser) inBodyEndTagOther(tagAtom a.Atom) { | ||
1166 | for i := len(p.oe) - 1; i >= 0; i-- { | ||
1167 | if p.oe[i].DataAtom == tagAtom { | ||
1168 | p.oe = p.oe[:i] | ||
1169 | break | ||
1170 | } | ||
1171 | if isSpecialElement(p.oe[i]) { | ||
1172 | break | ||
1173 | } | ||
1174 | } | ||
1175 | } | ||
1176 | |||
1177 | // Section 12.2.5.4.8. | ||
1178 | func textIM(p *parser) bool { | ||
1179 | switch p.tok.Type { | ||
1180 | case ErrorToken: | ||
1181 | p.oe.pop() | ||
1182 | case TextToken: | ||
1183 | d := p.tok.Data | ||
1184 | if n := p.oe.top(); n.DataAtom == a.Textarea && n.FirstChild == nil { | ||
1185 | // Ignore a newline at the start of a <textarea> block. | ||
1186 | if d != "" && d[0] == '\r' { | ||
1187 | d = d[1:] | ||
1188 | } | ||
1189 | if d != "" && d[0] == '\n' { | ||
1190 | d = d[1:] | ||
1191 | } | ||
1192 | } | ||
1193 | if d == "" { | ||
1194 | return true | ||
1195 | } | ||
1196 | p.addText(d) | ||
1197 | return true | ||
1198 | case EndTagToken: | ||
1199 | p.oe.pop() | ||
1200 | } | ||
1201 | p.im = p.originalIM | ||
1202 | p.originalIM = nil | ||
1203 | return p.tok.Type == EndTagToken | ||
1204 | } | ||
1205 | |||
1206 | // Section 12.2.5.4.9. | ||
1207 | func inTableIM(p *parser) bool { | ||
1208 | switch p.tok.Type { | ||
1209 | case ErrorToken: | ||
1210 | // Stop parsing. | ||
1211 | return true | ||
1212 | case TextToken: | ||
1213 | p.tok.Data = strings.Replace(p.tok.Data, "\x00", "", -1) | ||
1214 | switch p.oe.top().DataAtom { | ||
1215 | case a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr: | ||
1216 | if strings.Trim(p.tok.Data, whitespace) == "" { | ||
1217 | p.addText(p.tok.Data) | ||
1218 | return true | ||
1219 | } | ||
1220 | } | ||
1221 | case StartTagToken: | ||
1222 | switch p.tok.DataAtom { | ||
1223 | case a.Caption: | ||
1224 | p.clearStackToContext(tableScope) | ||
1225 | p.afe = append(p.afe, &scopeMarker) | ||
1226 | p.addElement() | ||
1227 | p.im = inCaptionIM | ||
1228 | return true | ||
1229 | case a.Colgroup: | ||
1230 | p.clearStackToContext(tableScope) | ||
1231 | p.addElement() | ||
1232 | p.im = inColumnGroupIM | ||
1233 | return true | ||
1234 | case a.Col: | ||
1235 | p.parseImpliedToken(StartTagToken, a.Colgroup, a.Colgroup.String()) | ||
1236 | return false | ||
1237 | case a.Tbody, a.Tfoot, a.Thead: | ||
1238 | p.clearStackToContext(tableScope) | ||
1239 | p.addElement() | ||
1240 | p.im = inTableBodyIM | ||
1241 | return true | ||
1242 | case a.Td, a.Th, a.Tr: | ||
1243 | p.parseImpliedToken(StartTagToken, a.Tbody, a.Tbody.String()) | ||
1244 | return false | ||
1245 | case a.Table: | ||
1246 | if p.popUntil(tableScope, a.Table) { | ||
1247 | p.resetInsertionMode() | ||
1248 | return false | ||
1249 | } | ||
1250 | // Ignore the token. | ||
1251 | return true | ||
1252 | case a.Style, a.Script: | ||
1253 | return inHeadIM(p) | ||
1254 | case a.Input: | ||
1255 | for _, t := range p.tok.Attr { | ||
1256 | if t.Key == "type" && strings.ToLower(t.Val) == "hidden" { | ||
1257 | p.addElement() | ||
1258 | p.oe.pop() | ||
1259 | return true | ||
1260 | } | ||
1261 | } | ||
1262 | // Otherwise drop down to the default action. | ||
1263 | case a.Form: | ||
1264 | if p.form != nil { | ||
1265 | // Ignore the token. | ||
1266 | return true | ||
1267 | } | ||
1268 | p.addElement() | ||
1269 | p.form = p.oe.pop() | ||
1270 | case a.Select: | ||
1271 | p.reconstructActiveFormattingElements() | ||
1272 | switch p.top().DataAtom { | ||
1273 | case a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr: | ||
1274 | p.fosterParenting = true | ||
1275 | } | ||
1276 | p.addElement() | ||
1277 | p.fosterParenting = false | ||
1278 | p.framesetOK = false | ||
1279 | p.im = inSelectInTableIM | ||
1280 | return true | ||
1281 | } | ||
1282 | case EndTagToken: | ||
1283 | switch p.tok.DataAtom { | ||
1284 | case a.Table: | ||
1285 | if p.popUntil(tableScope, a.Table) { | ||
1286 | p.resetInsertionMode() | ||
1287 | return true | ||
1288 | } | ||
1289 | // Ignore the token. | ||
1290 | return true | ||
1291 | case a.Body, a.Caption, a.Col, a.Colgroup, a.Html, a.Tbody, a.Td, a.Tfoot, a.Th, a.Thead, a.Tr: | ||
1292 | // Ignore the token. | ||
1293 | return true | ||
1294 | } | ||
1295 | case CommentToken: | ||
1296 | p.addChild(&Node{ | ||
1297 | Type: CommentNode, | ||
1298 | Data: p.tok.Data, | ||
1299 | }) | ||
1300 | return true | ||
1301 | case DoctypeToken: | ||
1302 | // Ignore the token. | ||
1303 | return true | ||
1304 | } | ||
1305 | |||
1306 | p.fosterParenting = true | ||
1307 | defer func() { p.fosterParenting = false }() | ||
1308 | |||
1309 | return inBodyIM(p) | ||
1310 | } | ||
1311 | |||
1312 | // Section 12.2.5.4.11. | ||
1313 | func inCaptionIM(p *parser) bool { | ||
1314 | switch p.tok.Type { | ||
1315 | case StartTagToken: | ||
1316 | switch p.tok.DataAtom { | ||
1317 | case a.Caption, a.Col, a.Colgroup, a.Tbody, a.Td, a.Tfoot, a.Thead, a.Tr: | ||
1318 | if p.popUntil(tableScope, a.Caption) { | ||
1319 | p.clearActiveFormattingElements() | ||
1320 | p.im = inTableIM | ||
1321 | return false | ||
1322 | } else { | ||
1323 | // Ignore the token. | ||
1324 | return true | ||
1325 | } | ||
1326 | case a.Select: | ||
1327 | p.reconstructActiveFormattingElements() | ||
1328 | p.addElement() | ||
1329 | p.framesetOK = false | ||
1330 | p.im = inSelectInTableIM | ||
1331 | return true | ||
1332 | } | ||
1333 | case EndTagToken: | ||
1334 | switch p.tok.DataAtom { | ||
1335 | case a.Caption: | ||
1336 | if p.popUntil(tableScope, a.Caption) { | ||
1337 | p.clearActiveFormattingElements() | ||
1338 | p.im = inTableIM | ||
1339 | } | ||
1340 | return true | ||
1341 | case a.Table: | ||
1342 | if p.popUntil(tableScope, a.Caption) { | ||
1343 | p.clearActiveFormattingElements() | ||
1344 | p.im = inTableIM | ||
1345 | return false | ||
1346 | } else { | ||
1347 | // Ignore the token. | ||
1348 | return true | ||
1349 | } | ||
1350 | case a.Body, a.Col, a.Colgroup, a.Html, a.Tbody, a.Td, a.Tfoot, a.Th, a.Thead, a.Tr: | ||
1351 | // Ignore the token. | ||
1352 | return true | ||
1353 | } | ||
1354 | } | ||
1355 | return inBodyIM(p) | ||
1356 | } | ||
1357 | |||
1358 | // Section 12.2.5.4.12. | ||
1359 | func inColumnGroupIM(p *parser) bool { | ||
1360 | switch p.tok.Type { | ||
1361 | case TextToken: | ||
1362 | s := strings.TrimLeft(p.tok.Data, whitespace) | ||
1363 | if len(s) < len(p.tok.Data) { | ||
1364 | // Add the initial whitespace to the current node. | ||
1365 | p.addText(p.tok.Data[:len(p.tok.Data)-len(s)]) | ||
1366 | if s == "" { | ||
1367 | return true | ||
1368 | } | ||
1369 | p.tok.Data = s | ||
1370 | } | ||
1371 | case CommentToken: | ||
1372 | p.addChild(&Node{ | ||
1373 | Type: CommentNode, | ||
1374 | Data: p.tok.Data, | ||
1375 | }) | ||
1376 | return true | ||
1377 | case DoctypeToken: | ||
1378 | // Ignore the token. | ||
1379 | return true | ||
1380 | case StartTagToken: | ||
1381 | switch p.tok.DataAtom { | ||
1382 | case a.Html: | ||
1383 | return inBodyIM(p) | ||
1384 | case a.Col: | ||
1385 | p.addElement() | ||
1386 | p.oe.pop() | ||
1387 | p.acknowledgeSelfClosingTag() | ||
1388 | return true | ||
1389 | } | ||
1390 | case EndTagToken: | ||
1391 | switch p.tok.DataAtom { | ||
1392 | case a.Colgroup: | ||
1393 | if p.oe.top().DataAtom != a.Html { | ||
1394 | p.oe.pop() | ||
1395 | p.im = inTableIM | ||
1396 | } | ||
1397 | return true | ||
1398 | case a.Col: | ||
1399 | // Ignore the token. | ||
1400 | return true | ||
1401 | } | ||
1402 | } | ||
1403 | if p.oe.top().DataAtom != a.Html { | ||
1404 | p.oe.pop() | ||
1405 | p.im = inTableIM | ||
1406 | return false | ||
1407 | } | ||
1408 | return true | ||
1409 | } | ||
1410 | |||
1411 | // Section 12.2.5.4.13. | ||
1412 | func inTableBodyIM(p *parser) bool { | ||
1413 | switch p.tok.Type { | ||
1414 | case StartTagToken: | ||
1415 | switch p.tok.DataAtom { | ||
1416 | case a.Tr: | ||
1417 | p.clearStackToContext(tableBodyScope) | ||
1418 | p.addElement() | ||
1419 | p.im = inRowIM | ||
1420 | return true | ||
1421 | case a.Td, a.Th: | ||
1422 | p.parseImpliedToken(StartTagToken, a.Tr, a.Tr.String()) | ||
1423 | return false | ||
1424 | case a.Caption, a.Col, a.Colgroup, a.Tbody, a.Tfoot, a.Thead: | ||
1425 | if p.popUntil(tableScope, a.Tbody, a.Thead, a.Tfoot) { | ||
1426 | p.im = inTableIM | ||
1427 | return false | ||
1428 | } | ||
1429 | // Ignore the token. | ||
1430 | return true | ||
1431 | } | ||
1432 | case EndTagToken: | ||
1433 | switch p.tok.DataAtom { | ||
1434 | case a.Tbody, a.Tfoot, a.Thead: | ||
1435 | if p.elementInScope(tableScope, p.tok.DataAtom) { | ||
1436 | p.clearStackToContext(tableBodyScope) | ||
1437 | p.oe.pop() | ||
1438 | p.im = inTableIM | ||
1439 | } | ||
1440 | return true | ||
1441 | case a.Table: | ||
1442 | if p.popUntil(tableScope, a.Tbody, a.Thead, a.Tfoot) { | ||
1443 | p.im = inTableIM | ||
1444 | return false | ||
1445 | } | ||
1446 | // Ignore the token. | ||
1447 | return true | ||
1448 | case a.Body, a.Caption, a.Col, a.Colgroup, a.Html, a.Td, a.Th, a.Tr: | ||
1449 | // Ignore the token. | ||
1450 | return true | ||
1451 | } | ||
1452 | case CommentToken: | ||
1453 | p.addChild(&Node{ | ||
1454 | Type: CommentNode, | ||
1455 | Data: p.tok.Data, | ||
1456 | }) | ||
1457 | return true | ||
1458 | } | ||
1459 | |||
1460 | return inTableIM(p) | ||
1461 | } | ||
1462 | |||
1463 | // Section 12.2.5.4.14. | ||
1464 | func inRowIM(p *parser) bool { | ||
1465 | switch p.tok.Type { | ||
1466 | case StartTagToken: | ||
1467 | switch p.tok.DataAtom { | ||
1468 | case a.Td, a.Th: | ||
1469 | p.clearStackToContext(tableRowScope) | ||
1470 | p.addElement() | ||
1471 | p.afe = append(p.afe, &scopeMarker) | ||
1472 | p.im = inCellIM | ||
1473 | return true | ||
1474 | case a.Caption, a.Col, a.Colgroup, a.Tbody, a.Tfoot, a.Thead, a.Tr: | ||
1475 | if p.popUntil(tableScope, a.Tr) { | ||
1476 | p.im = inTableBodyIM | ||
1477 | return false | ||
1478 | } | ||
1479 | // Ignore the token. | ||
1480 | return true | ||
1481 | } | ||
1482 | case EndTagToken: | ||
1483 | switch p.tok.DataAtom { | ||
1484 | case a.Tr: | ||
1485 | if p.popUntil(tableScope, a.Tr) { | ||
1486 | p.im = inTableBodyIM | ||
1487 | return true | ||
1488 | } | ||
1489 | // Ignore the token. | ||
1490 | return true | ||
1491 | case a.Table: | ||
1492 | if p.popUntil(tableScope, a.Tr) { | ||
1493 | p.im = inTableBodyIM | ||
1494 | return false | ||
1495 | } | ||
1496 | // Ignore the token. | ||
1497 | return true | ||
1498 | case a.Tbody, a.Tfoot, a.Thead: | ||
1499 | if p.elementInScope(tableScope, p.tok.DataAtom) { | ||
1500 | p.parseImpliedToken(EndTagToken, a.Tr, a.Tr.String()) | ||
1501 | return false | ||
1502 | } | ||
1503 | // Ignore the token. | ||
1504 | return true | ||
1505 | case a.Body, a.Caption, a.Col, a.Colgroup, a.Html, a.Td, a.Th: | ||
1506 | // Ignore the token. | ||
1507 | return true | ||
1508 | } | ||
1509 | } | ||
1510 | |||
1511 | return inTableIM(p) | ||
1512 | } | ||
1513 | |||
1514 | // Section 12.2.5.4.15. | ||
1515 | func inCellIM(p *parser) bool { | ||
1516 | switch p.tok.Type { | ||
1517 | case StartTagToken: | ||
1518 | switch p.tok.DataAtom { | ||
1519 | case a.Caption, a.Col, a.Colgroup, a.Tbody, a.Td, a.Tfoot, a.Th, a.Thead, a.Tr: | ||
1520 | if p.popUntil(tableScope, a.Td, a.Th) { | ||
1521 | // Close the cell and reprocess. | ||
1522 | p.clearActiveFormattingElements() | ||
1523 | p.im = inRowIM | ||
1524 | return false | ||
1525 | } | ||
1526 | // Ignore the token. | ||
1527 | return true | ||
1528 | case a.Select: | ||
1529 | p.reconstructActiveFormattingElements() | ||
1530 | p.addElement() | ||
1531 | p.framesetOK = false | ||
1532 | p.im = inSelectInTableIM | ||
1533 | return true | ||
1534 | } | ||
1535 | case EndTagToken: | ||
1536 | switch p.tok.DataAtom { | ||
1537 | case a.Td, a.Th: | ||
1538 | if !p.popUntil(tableScope, p.tok.DataAtom) { | ||
1539 | // Ignore the token. | ||
1540 | return true | ||
1541 | } | ||
1542 | p.clearActiveFormattingElements() | ||
1543 | p.im = inRowIM | ||
1544 | return true | ||
1545 | case a.Body, a.Caption, a.Col, a.Colgroup, a.Html: | ||
1546 | // Ignore the token. | ||
1547 | return true | ||
1548 | case a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr: | ||
1549 | if !p.elementInScope(tableScope, p.tok.DataAtom) { | ||
1550 | // Ignore the token. | ||
1551 | return true | ||
1552 | } | ||
1553 | // Close the cell and reprocess. | ||
1554 | p.popUntil(tableScope, a.Td, a.Th) | ||
1555 | p.clearActiveFormattingElements() | ||
1556 | p.im = inRowIM | ||
1557 | return false | ||
1558 | } | ||
1559 | } | ||
1560 | return inBodyIM(p) | ||
1561 | } | ||
1562 | |||
1563 | // Section 12.2.5.4.16. | ||
1564 | func inSelectIM(p *parser) bool { | ||
1565 | switch p.tok.Type { | ||
1566 | case ErrorToken: | ||
1567 | // Stop parsing. | ||
1568 | return true | ||
1569 | case TextToken: | ||
1570 | p.addText(strings.Replace(p.tok.Data, "\x00", "", -1)) | ||
1571 | case StartTagToken: | ||
1572 | switch p.tok.DataAtom { | ||
1573 | case a.Html: | ||
1574 | return inBodyIM(p) | ||
1575 | case a.Option: | ||
1576 | if p.top().DataAtom == a.Option { | ||
1577 | p.oe.pop() | ||
1578 | } | ||
1579 | p.addElement() | ||
1580 | case a.Optgroup: | ||
1581 | if p.top().DataAtom == a.Option { | ||
1582 | p.oe.pop() | ||
1583 | } | ||
1584 | if p.top().DataAtom == a.Optgroup { | ||
1585 | p.oe.pop() | ||
1586 | } | ||
1587 | p.addElement() | ||
1588 | case a.Select: | ||
1589 | p.tok.Type = EndTagToken | ||
1590 | return false | ||
1591 | case a.Input, a.Keygen, a.Textarea: | ||
1592 | if p.elementInScope(selectScope, a.Select) { | ||
1593 | p.parseImpliedToken(EndTagToken, a.Select, a.Select.String()) | ||
1594 | return false | ||
1595 | } | ||
1596 | // In order to properly ignore <textarea>, we need to change the tokenizer mode. | ||
1597 | p.tokenizer.NextIsNotRawText() | ||
1598 | // Ignore the token. | ||
1599 | return true | ||
1600 | case a.Script: | ||
1601 | return inHeadIM(p) | ||
1602 | } | ||
1603 | case EndTagToken: | ||
1604 | switch p.tok.DataAtom { | ||
1605 | case a.Option: | ||
1606 | if p.top().DataAtom == a.Option { | ||
1607 | p.oe.pop() | ||
1608 | } | ||
1609 | case a.Optgroup: | ||
1610 | i := len(p.oe) - 1 | ||
1611 | if p.oe[i].DataAtom == a.Option { | ||
1612 | i-- | ||
1613 | } | ||
1614 | if p.oe[i].DataAtom == a.Optgroup { | ||
1615 | p.oe = p.oe[:i] | ||
1616 | } | ||
1617 | case a.Select: | ||
1618 | if p.popUntil(selectScope, a.Select) { | ||
1619 | p.resetInsertionMode() | ||
1620 | } | ||
1621 | } | ||
1622 | case CommentToken: | ||
1623 | p.addChild(&Node{ | ||
1624 | Type: CommentNode, | ||
1625 | Data: p.tok.Data, | ||
1626 | }) | ||
1627 | case DoctypeToken: | ||
1628 | // Ignore the token. | ||
1629 | return true | ||
1630 | } | ||
1631 | |||
1632 | return true | ||
1633 | } | ||
1634 | |||
1635 | // Section 12.2.5.4.17. | ||
1636 | func inSelectInTableIM(p *parser) bool { | ||
1637 | switch p.tok.Type { | ||
1638 | case StartTagToken, EndTagToken: | ||
1639 | switch p.tok.DataAtom { | ||
1640 | case a.Caption, a.Table, a.Tbody, a.Tfoot, a.Thead, a.Tr, a.Td, a.Th: | ||
1641 | if p.tok.Type == StartTagToken || p.elementInScope(tableScope, p.tok.DataAtom) { | ||
1642 | p.parseImpliedToken(EndTagToken, a.Select, a.Select.String()) | ||
1643 | return false | ||
1644 | } else { | ||
1645 | // Ignore the token. | ||
1646 | return true | ||
1647 | } | ||
1648 | } | ||
1649 | } | ||
1650 | return inSelectIM(p) | ||
1651 | } | ||
1652 | |||
1653 | // Section 12.2.5.4.18. | ||
1654 | func afterBodyIM(p *parser) bool { | ||
1655 | switch p.tok.Type { | ||
1656 | case ErrorToken: | ||
1657 | // Stop parsing. | ||
1658 | return true | ||
1659 | case TextToken: | ||
1660 | s := strings.TrimLeft(p.tok.Data, whitespace) | ||
1661 | if len(s) == 0 { | ||
1662 | // It was all whitespace. | ||
1663 | return inBodyIM(p) | ||
1664 | } | ||
1665 | case StartTagToken: | ||
1666 | if p.tok.DataAtom == a.Html { | ||
1667 | return inBodyIM(p) | ||
1668 | } | ||
1669 | case EndTagToken: | ||
1670 | if p.tok.DataAtom == a.Html { | ||
1671 | if !p.fragment { | ||
1672 | p.im = afterAfterBodyIM | ||
1673 | } | ||
1674 | return true | ||
1675 | } | ||
1676 | case CommentToken: | ||
1677 | // The comment is attached to the <html> element. | ||
1678 | if len(p.oe) < 1 || p.oe[0].DataAtom != a.Html { | ||
1679 | panic("html: bad parser state: <html> element not found, in the after-body insertion mode") | ||
1680 | } | ||
1681 | p.oe[0].AppendChild(&Node{ | ||
1682 | Type: CommentNode, | ||
1683 | Data: p.tok.Data, | ||
1684 | }) | ||
1685 | return true | ||
1686 | } | ||
1687 | p.im = inBodyIM | ||
1688 | return false | ||
1689 | } | ||
1690 | |||
1691 | // Section 12.2.5.4.19. | ||
1692 | func inFramesetIM(p *parser) bool { | ||
1693 | switch p.tok.Type { | ||
1694 | case CommentToken: | ||
1695 | p.addChild(&Node{ | ||
1696 | Type: CommentNode, | ||
1697 | Data: p.tok.Data, | ||
1698 | }) | ||
1699 | case TextToken: | ||
1700 | // Ignore all text but whitespace. | ||
1701 | s := strings.Map(func(c rune) rune { | ||
1702 | switch c { | ||
1703 | case ' ', '\t', '\n', '\f', '\r': | ||
1704 | return c | ||
1705 | } | ||
1706 | return -1 | ||
1707 | }, p.tok.Data) | ||
1708 | if s != "" { | ||
1709 | p.addText(s) | ||
1710 | } | ||
1711 | case StartTagToken: | ||
1712 | switch p.tok.DataAtom { | ||
1713 | case a.Html: | ||
1714 | return inBodyIM(p) | ||
1715 | case a.Frameset: | ||
1716 | p.addElement() | ||
1717 | case a.Frame: | ||
1718 | p.addElement() | ||
1719 | p.oe.pop() | ||
1720 | p.acknowledgeSelfClosingTag() | ||
1721 | case a.Noframes: | ||
1722 | return inHeadIM(p) | ||
1723 | } | ||
1724 | case EndTagToken: | ||
1725 | switch p.tok.DataAtom { | ||
1726 | case a.Frameset: | ||
1727 | if p.oe.top().DataAtom != a.Html { | ||
1728 | p.oe.pop() | ||
1729 | if p.oe.top().DataAtom != a.Frameset { | ||
1730 | p.im = afterFramesetIM | ||
1731 | return true | ||
1732 | } | ||
1733 | } | ||
1734 | } | ||
1735 | default: | ||
1736 | // Ignore the token. | ||
1737 | } | ||
1738 | return true | ||
1739 | } | ||
1740 | |||
1741 | // Section 12.2.5.4.20. | ||
1742 | func afterFramesetIM(p *parser) bool { | ||
1743 | switch p.tok.Type { | ||
1744 | case CommentToken: | ||
1745 | p.addChild(&Node{ | ||
1746 | Type: CommentNode, | ||
1747 | Data: p.tok.Data, | ||
1748 | }) | ||
1749 | case TextToken: | ||
1750 | // Ignore all text but whitespace. | ||
1751 | s := strings.Map(func(c rune) rune { | ||
1752 | switch c { | ||
1753 | case ' ', '\t', '\n', '\f', '\r': | ||
1754 | return c | ||
1755 | } | ||
1756 | return -1 | ||
1757 | }, p.tok.Data) | ||
1758 | if s != "" { | ||
1759 | p.addText(s) | ||
1760 | } | ||
1761 | case StartTagToken: | ||
1762 | switch p.tok.DataAtom { | ||
1763 | case a.Html: | ||
1764 | return inBodyIM(p) | ||
1765 | case a.Noframes: | ||
1766 | return inHeadIM(p) | ||
1767 | } | ||
1768 | case EndTagToken: | ||
1769 | switch p.tok.DataAtom { | ||
1770 | case a.Html: | ||
1771 | p.im = afterAfterFramesetIM | ||
1772 | return true | ||
1773 | } | ||
1774 | default: | ||
1775 | // Ignore the token. | ||
1776 | } | ||
1777 | return true | ||
1778 | } | ||
1779 | |||
1780 | // Section 12.2.5.4.21. | ||
1781 | func afterAfterBodyIM(p *parser) bool { | ||
1782 | switch p.tok.Type { | ||
1783 | case ErrorToken: | ||
1784 | // Stop parsing. | ||
1785 | return true | ||
1786 | case TextToken: | ||
1787 | s := strings.TrimLeft(p.tok.Data, whitespace) | ||
1788 | if len(s) == 0 { | ||
1789 | // It was all whitespace. | ||
1790 | return inBodyIM(p) | ||
1791 | } | ||
1792 | case StartTagToken: | ||
1793 | if p.tok.DataAtom == a.Html { | ||
1794 | return inBodyIM(p) | ||
1795 | } | ||
1796 | case CommentToken: | ||
1797 | p.doc.AppendChild(&Node{ | ||
1798 | Type: CommentNode, | ||
1799 | Data: p.tok.Data, | ||
1800 | }) | ||
1801 | return true | ||
1802 | case DoctypeToken: | ||
1803 | return inBodyIM(p) | ||
1804 | } | ||
1805 | p.im = inBodyIM | ||
1806 | return false | ||
1807 | } | ||
1808 | |||
1809 | // Section 12.2.5.4.22. | ||
1810 | func afterAfterFramesetIM(p *parser) bool { | ||
1811 | switch p.tok.Type { | ||
1812 | case CommentToken: | ||
1813 | p.doc.AppendChild(&Node{ | ||
1814 | Type: CommentNode, | ||
1815 | Data: p.tok.Data, | ||
1816 | }) | ||
1817 | case TextToken: | ||
1818 | // Ignore all text but whitespace. | ||
1819 | s := strings.Map(func(c rune) rune { | ||
1820 | switch c { | ||
1821 | case ' ', '\t', '\n', '\f', '\r': | ||
1822 | return c | ||
1823 | } | ||
1824 | return -1 | ||
1825 | }, p.tok.Data) | ||
1826 | if s != "" { | ||
1827 | p.tok.Data = s | ||
1828 | return inBodyIM(p) | ||
1829 | } | ||
1830 | case StartTagToken: | ||
1831 | switch p.tok.DataAtom { | ||
1832 | case a.Html: | ||
1833 | return inBodyIM(p) | ||
1834 | case a.Noframes: | ||
1835 | return inHeadIM(p) | ||
1836 | } | ||
1837 | case DoctypeToken: | ||
1838 | return inBodyIM(p) | ||
1839 | default: | ||
1840 | // Ignore the token. | ||
1841 | } | ||
1842 | return true | ||
1843 | } | ||
1844 | |||
1845 | const whitespaceOrNUL = whitespace + "\x00" | ||
1846 | |||
1847 | // Section 12.2.5.5. | ||
1848 | func parseForeignContent(p *parser) bool { | ||
1849 | switch p.tok.Type { | ||
1850 | case TextToken: | ||
1851 | if p.framesetOK { | ||
1852 | p.framesetOK = strings.TrimLeft(p.tok.Data, whitespaceOrNUL) == "" | ||
1853 | } | ||
1854 | p.tok.Data = strings.Replace(p.tok.Data, "\x00", "\ufffd", -1) | ||
1855 | p.addText(p.tok.Data) | ||
1856 | case CommentToken: | ||
1857 | p.addChild(&Node{ | ||
1858 | Type: CommentNode, | ||
1859 | Data: p.tok.Data, | ||
1860 | }) | ||
1861 | case StartTagToken: | ||
1862 | b := breakout[p.tok.Data] | ||
1863 | if p.tok.DataAtom == a.Font { | ||
1864 | loop: | ||
1865 | for _, attr := range p.tok.Attr { | ||
1866 | switch attr.Key { | ||
1867 | case "color", "face", "size": | ||
1868 | b = true | ||
1869 | break loop | ||
1870 | } | ||
1871 | } | ||
1872 | } | ||
1873 | if b { | ||
1874 | for i := len(p.oe) - 1; i >= 0; i-- { | ||
1875 | n := p.oe[i] | ||
1876 | if n.Namespace == "" || htmlIntegrationPoint(n) || mathMLTextIntegrationPoint(n) { | ||
1877 | p.oe = p.oe[:i+1] | ||
1878 | break | ||
1879 | } | ||
1880 | } | ||
1881 | return false | ||
1882 | } | ||
1883 | switch p.top().Namespace { | ||
1884 | case "math": | ||
1885 | adjustAttributeNames(p.tok.Attr, mathMLAttributeAdjustments) | ||
1886 | case "svg": | ||
1887 | // Adjust SVG tag names. The tokenizer lower-cases tag names, but | ||
1888 | // SVG wants e.g. "foreignObject" with a capital second "O". | ||
1889 | if x := svgTagNameAdjustments[p.tok.Data]; x != "" { | ||
1890 | p.tok.DataAtom = a.Lookup([]byte(x)) | ||
1891 | p.tok.Data = x | ||
1892 | } | ||
1893 | adjustAttributeNames(p.tok.Attr, svgAttributeAdjustments) | ||
1894 | default: | ||
1895 | panic("html: bad parser state: unexpected namespace") | ||
1896 | } | ||
1897 | adjustForeignAttributes(p.tok.Attr) | ||
1898 | namespace := p.top().Namespace | ||
1899 | p.addElement() | ||
1900 | p.top().Namespace = namespace | ||
1901 | if namespace != "" { | ||
1902 | // Don't let the tokenizer go into raw text mode in foreign content | ||
1903 | // (e.g. in an SVG <title> tag). | ||
1904 | p.tokenizer.NextIsNotRawText() | ||
1905 | } | ||
1906 | if p.hasSelfClosingToken { | ||
1907 | p.oe.pop() | ||
1908 | p.acknowledgeSelfClosingTag() | ||
1909 | } | ||
1910 | case EndTagToken: | ||
1911 | for i := len(p.oe) - 1; i >= 0; i-- { | ||
1912 | if p.oe[i].Namespace == "" { | ||
1913 | return p.im(p) | ||
1914 | } | ||
1915 | if strings.EqualFold(p.oe[i].Data, p.tok.Data) { | ||
1916 | p.oe = p.oe[:i] | ||
1917 | break | ||
1918 | } | ||
1919 | } | ||
1920 | return true | ||
1921 | default: | ||
1922 | // Ignore the token. | ||
1923 | } | ||
1924 | return true | ||
1925 | } | ||
1926 | |||
1927 | // Section 12.2.5. | ||
1928 | func (p *parser) inForeignContent() bool { | ||
1929 | if len(p.oe) == 0 { | ||
1930 | return false | ||
1931 | } | ||
1932 | n := p.oe[len(p.oe)-1] | ||
1933 | if n.Namespace == "" { | ||
1934 | return false | ||
1935 | } | ||
1936 | if mathMLTextIntegrationPoint(n) { | ||
1937 | if p.tok.Type == StartTagToken && p.tok.DataAtom != a.Mglyph && p.tok.DataAtom != a.Malignmark { | ||
1938 | return false | ||
1939 | } | ||
1940 | if p.tok.Type == TextToken { | ||
1941 | return false | ||
1942 | } | ||
1943 | } | ||
1944 | if n.Namespace == "math" && n.DataAtom == a.AnnotationXml && p.tok.Type == StartTagToken && p.tok.DataAtom == a.Svg { | ||
1945 | return false | ||
1946 | } | ||
1947 | if htmlIntegrationPoint(n) && (p.tok.Type == StartTagToken || p.tok.Type == TextToken) { | ||
1948 | return false | ||
1949 | } | ||
1950 | if p.tok.Type == ErrorToken { | ||
1951 | return false | ||
1952 | } | ||
1953 | return true | ||
1954 | } | ||
1955 | |||
1956 | // parseImpliedToken parses a token as though it had appeared in the parser's | ||
1957 | // input. | ||
1958 | func (p *parser) parseImpliedToken(t TokenType, dataAtom a.Atom, data string) { | ||
1959 | realToken, selfClosing := p.tok, p.hasSelfClosingToken | ||
1960 | p.tok = Token{ | ||
1961 | Type: t, | ||
1962 | DataAtom: dataAtom, | ||
1963 | Data: data, | ||
1964 | } | ||
1965 | p.hasSelfClosingToken = false | ||
1966 | p.parseCurrentToken() | ||
1967 | p.tok, p.hasSelfClosingToken = realToken, selfClosing | ||
1968 | } | ||
1969 | |||
1970 | // parseCurrentToken runs the current token through the parsing routines | ||
1971 | // until it is consumed. | ||
1972 | func (p *parser) parseCurrentToken() { | ||
1973 | if p.tok.Type == SelfClosingTagToken { | ||
1974 | p.hasSelfClosingToken = true | ||
1975 | p.tok.Type = StartTagToken | ||
1976 | } | ||
1977 | |||
1978 | consumed := false | ||
1979 | for !consumed { | ||
1980 | if p.inForeignContent() { | ||
1981 | consumed = parseForeignContent(p) | ||
1982 | } else { | ||
1983 | consumed = p.im(p) | ||
1984 | } | ||
1985 | } | ||
1986 | |||
1987 | if p.hasSelfClosingToken { | ||
1988 | // This is a parse error, but ignore it. | ||
1989 | p.hasSelfClosingToken = false | ||
1990 | } | ||
1991 | } | ||
1992 | |||
1993 | func (p *parser) parse() error { | ||
1994 | // Iterate until EOF. Any other error will cause an early return. | ||
1995 | var err error | ||
1996 | for err != io.EOF { | ||
1997 | // CDATA sections are allowed only in foreign content. | ||
1998 | n := p.oe.top() | ||
1999 | p.tokenizer.AllowCDATA(n != nil && n.Namespace != "") | ||
2000 | // Read and parse the next token. | ||
2001 | p.tokenizer.Next() | ||
2002 | p.tok = p.tokenizer.Token() | ||
2003 | if p.tok.Type == ErrorToken { | ||
2004 | err = p.tokenizer.Err() | ||
2005 | if err != nil && err != io.EOF { | ||
2006 | return err | ||
2007 | } | ||
2008 | } | ||
2009 | p.parseCurrentToken() | ||
2010 | } | ||
2011 | return nil | ||
2012 | } | ||
2013 | |||
2014 | // Parse returns the parse tree for the HTML from the given Reader. | ||
2015 | // The input is assumed to be UTF-8 encoded. | ||
2016 | func Parse(r io.Reader) (*Node, error) { | ||
2017 | p := &parser{ | ||
2018 | tokenizer: NewTokenizer(r), | ||
2019 | doc: &Node{ | ||
2020 | Type: DocumentNode, | ||
2021 | }, | ||
2022 | scripting: true, | ||
2023 | framesetOK: true, | ||
2024 | im: initialIM, | ||
2025 | } | ||
2026 | err := p.parse() | ||
2027 | if err != nil { | ||
2028 | return nil, err | ||
2029 | } | ||
2030 | return p.doc, nil | ||
2031 | } | ||
2032 | |||
2033 | // ParseFragment parses a fragment of HTML and returns the nodes that were | ||
2034 | // found. If the fragment is the InnerHTML for an existing element, pass that | ||
2035 | // element in context. | ||
2036 | func ParseFragment(r io.Reader, context *Node) ([]*Node, error) { | ||
2037 | contextTag := "" | ||
2038 | if context != nil { | ||
2039 | if context.Type != ElementNode { | ||
2040 | return nil, errors.New("html: ParseFragment of non-element Node") | ||
2041 | } | ||
2042 | // The next check isn't just context.DataAtom.String() == context.Data because | ||
2043 | // it is valid to pass an element whose tag isn't a known atom. For example, | ||
2044 | // DataAtom == 0 and Data = "tagfromthefuture" is perfectly consistent. | ||
2045 | if context.DataAtom != a.Lookup([]byte(context.Data)) { | ||
2046 | return nil, fmt.Errorf("html: inconsistent Node: DataAtom=%q, Data=%q", context.DataAtom, context.Data) | ||
2047 | } | ||
2048 | contextTag = context.DataAtom.String() | ||
2049 | } | ||
2050 | p := &parser{ | ||
2051 | tokenizer: NewTokenizerFragment(r, contextTag), | ||
2052 | doc: &Node{ | ||
2053 | Type: DocumentNode, | ||
2054 | }, | ||
2055 | scripting: true, | ||
2056 | fragment: true, | ||
2057 | context: context, | ||
2058 | } | ||
2059 | |||
2060 | root := &Node{ | ||
2061 | Type: ElementNode, | ||
2062 | DataAtom: a.Html, | ||
2063 | Data: a.Html.String(), | ||
2064 | } | ||
2065 | p.doc.AppendChild(root) | ||
2066 | p.oe = nodeStack{root} | ||
2067 | p.resetInsertionMode() | ||
2068 | |||
2069 | for n := context; n != nil; n = n.Parent { | ||
2070 | if n.Type == ElementNode && n.DataAtom == a.Form { | ||
2071 | p.form = n | ||
2072 | break | ||
2073 | } | ||
2074 | } | ||
2075 | |||
2076 | err := p.parse() | ||
2077 | if err != nil { | ||
2078 | return nil, err | ||
2079 | } | ||
2080 | |||
2081 | parent := p.doc | ||
2082 | if context != nil { | ||
2083 | parent = root | ||
2084 | } | ||
2085 | |||
2086 | var result []*Node | ||
2087 | for c := parent.FirstChild; c != nil; { | ||
2088 | next := c.NextSibling | ||
2089 | parent.RemoveChild(c) | ||
2090 | result = append(result, c) | ||
2091 | c = next | ||
2092 | } | ||
2093 | return result, nil | ||
2094 | } | ||
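The Parse and ParseFragment entry points above are the usual way into this package. A minimal usage sketch follows; the input markup and the <ul> context node are illustrative assumptions, not part of the vendored code.

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/html"
	"golang.org/x/net/html/atom"
)

func main() {
	// Parse a whole document; the returned tree is rooted at a DocumentNode.
	doc, err := html.Parse(strings.NewReader("<p>Hello, <b>world</b></p>"))
	if err != nil {
		panic(err)
	}
	fmt.Println(doc.FirstChild.Data) // "html": the parser supplies the document skeleton

	// Parse an InnerHTML fragment as if it appeared inside a <ul> element.
	// The context node's DataAtom and Data must be consistent, per the check above.
	ctx := &html.Node{Type: html.ElementNode, DataAtom: atom.Ul, Data: "ul"}
	nodes, err := html.ParseFragment(strings.NewReader("<li>a<li>b"), ctx)
	if err != nil {
		panic(err)
	}
	for _, n := range nodes {
		fmt.Println(n.Data) // "li", "li"
	}
}
```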
diff --git a/vendor/golang.org/x/net/html/render.go b/vendor/golang.org/x/net/html/render.go new file mode 100644 index 0000000..d34564f --- /dev/null +++ b/vendor/golang.org/x/net/html/render.go | |||
@@ -0,0 +1,271 @@ | |||
1 | // Copyright 2011 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package html | ||
6 | |||
7 | import ( | ||
8 | "bufio" | ||
9 | "errors" | ||
10 | "fmt" | ||
11 | "io" | ||
12 | "strings" | ||
13 | ) | ||
14 | |||
15 | type writer interface { | ||
16 | io.Writer | ||
17 | io.ByteWriter | ||
18 | WriteString(string) (int, error) | ||
19 | } | ||
20 | |||
21 | // Render renders the parse tree n to the given writer. | ||
22 | // | ||
23 | // Rendering is done on a 'best effort' basis: calling Parse on the output of | ||
24 | // Render will always result in something similar to the original tree, but it | ||
25 | // is not necessarily an exact clone unless the original tree was 'well-formed'. | ||
26 | // 'Well-formed' is not easily specified; the HTML5 specification is | ||
27 | // complicated. | ||
28 | // | ||
29 | // Calling Parse on arbitrary input typically results in a 'well-formed' parse | ||
30 | // tree. However, it is possible for Parse to yield a 'badly-formed' parse tree. | ||
31 | // For example, in a 'well-formed' parse tree, no <a> element is a child of | ||
32 | // another <a> element: parsing "<a><a>" results in two sibling elements. | ||
33 | // Similarly, in a 'well-formed' parse tree, no <a> element is a child of a | ||
34 | // <table> element: parsing "<p><table><a>" results in a <p> with two sibling | ||
35 | // children; the <a> is reparented to the <table>'s parent. However, calling | ||
36 | // Parse on "<a><table><a>" does not return an error, but the result has an <a> | ||
37 | // element with an <a> child, and is therefore not 'well-formed'. | ||
38 | // | ||
39 | // Programmatically constructed trees are typically also 'well-formed', but it | ||
40 | // is possible to construct a tree that looks innocuous but, when rendered and | ||
41 | // re-parsed, results in a different tree. A simple example is that a solitary | ||
42 | // text node would become a tree containing <html>, <head> and <body> elements. | ||
43 | // Another example is that the programmatic equivalent of "a<head>b</head>c" | ||
44 | // becomes "<html><head><head/><body>abc</body></html>". | ||
45 | func Render(w io.Writer, n *Node) error { | ||
46 | if x, ok := w.(writer); ok { | ||
47 | return render(x, n) | ||
48 | } | ||
49 | buf := bufio.NewWriter(w) | ||
50 | if err := render(buf, n); err != nil { | ||
51 | return err | ||
52 | } | ||
53 | return buf.Flush() | ||
54 | } | ||
55 | |||
56 | // plaintextAbort is returned from render1 when a <plaintext> element | ||
57 | // has been rendered. No more end tags should be rendered after that. | ||
58 | var plaintextAbort = errors.New("html: internal error (plaintext abort)") | ||
59 | |||
60 | func render(w writer, n *Node) error { | ||
61 | err := render1(w, n) | ||
62 | if err == plaintextAbort { | ||
63 | err = nil | ||
64 | } | ||
65 | return err | ||
66 | } | ||
67 | |||
68 | func render1(w writer, n *Node) error { | ||
69 | // Render non-element nodes; these are the easy cases. | ||
70 | switch n.Type { | ||
71 | case ErrorNode: | ||
72 | return errors.New("html: cannot render an ErrorNode node") | ||
73 | case TextNode: | ||
74 | return escape(w, n.Data) | ||
75 | case DocumentNode: | ||
76 | for c := n.FirstChild; c != nil; c = c.NextSibling { | ||
77 | if err := render1(w, c); err != nil { | ||
78 | return err | ||
79 | } | ||
80 | } | ||
81 | return nil | ||
82 | case ElementNode: | ||
83 | // No-op. | ||
84 | case CommentNode: | ||
85 | if _, err := w.WriteString("<!--"); err != nil { | ||
86 | return err | ||
87 | } | ||
88 | if _, err := w.WriteString(n.Data); err != nil { | ||
89 | return err | ||
90 | } | ||
91 | if _, err := w.WriteString("-->"); err != nil { | ||
92 | return err | ||
93 | } | ||
94 | return nil | ||
95 | case DoctypeNode: | ||
96 | if _, err := w.WriteString("<!DOCTYPE "); err != nil { | ||
97 | return err | ||
98 | } | ||
99 | if _, err := w.WriteString(n.Data); err != nil { | ||
100 | return err | ||
101 | } | ||
102 | if n.Attr != nil { | ||
103 | var p, s string | ||
104 | for _, a := range n.Attr { | ||
105 | switch a.Key { | ||
106 | case "public": | ||
107 | p = a.Val | ||
108 | case "system": | ||
109 | s = a.Val | ||
110 | } | ||
111 | } | ||
112 | if p != "" { | ||
113 | if _, err := w.WriteString(" PUBLIC "); err != nil { | ||
114 | return err | ||
115 | } | ||
116 | if err := writeQuoted(w, p); err != nil { | ||
117 | return err | ||
118 | } | ||
119 | if s != "" { | ||
120 | if err := w.WriteByte(' '); err != nil { | ||
121 | return err | ||
122 | } | ||
123 | if err := writeQuoted(w, s); err != nil { | ||
124 | return err | ||
125 | } | ||
126 | } | ||
127 | } else if s != "" { | ||
128 | if _, err := w.WriteString(" SYSTEM "); err != nil { | ||
129 | return err | ||
130 | } | ||
131 | if err := writeQuoted(w, s); err != nil { | ||
132 | return err | ||
133 | } | ||
134 | } | ||
135 | } | ||
136 | return w.WriteByte('>') | ||
137 | default: | ||
138 | return errors.New("html: unknown node type") | ||
139 | } | ||
140 | |||
141 | // Render the <xxx> opening tag. | ||
142 | if err := w.WriteByte('<'); err != nil { | ||
143 | return err | ||
144 | } | ||
145 | if _, err := w.WriteString(n.Data); err != nil { | ||
146 | return err | ||
147 | } | ||
148 | for _, a := range n.Attr { | ||
149 | if err := w.WriteByte(' '); err != nil { | ||
150 | return err | ||
151 | } | ||
152 | if a.Namespace != "" { | ||
153 | if _, err := w.WriteString(a.Namespace); err != nil { | ||
154 | return err | ||
155 | } | ||
156 | if err := w.WriteByte(':'); err != nil { | ||
157 | return err | ||
158 | } | ||
159 | } | ||
160 | if _, err := w.WriteString(a.Key); err != nil { | ||
161 | return err | ||
162 | } | ||
163 | if _, err := w.WriteString(`="`); err != nil { | ||
164 | return err | ||
165 | } | ||
166 | if err := escape(w, a.Val); err != nil { | ||
167 | return err | ||
168 | } | ||
169 | if err := w.WriteByte('"'); err != nil { | ||
170 | return err | ||
171 | } | ||
172 | } | ||
173 | if voidElements[n.Data] { | ||
174 | if n.FirstChild != nil { | ||
175 | return fmt.Errorf("html: void element <%s> has child nodes", n.Data) | ||
176 | } | ||
177 | _, err := w.WriteString("/>") | ||
178 | return err | ||
179 | } | ||
180 | if err := w.WriteByte('>'); err != nil { | ||
181 | return err | ||
182 | } | ||
183 | |||
184 | // Add initial newline where there is danger of a newline being ignored. | ||
185 | if c := n.FirstChild; c != nil && c.Type == TextNode && strings.HasPrefix(c.Data, "\n") { | ||
186 | switch n.Data { | ||
187 | case "pre", "listing", "textarea": | ||
188 | if err := w.WriteByte('\n'); err != nil { | ||
189 | return err | ||
190 | } | ||
191 | } | ||
192 | } | ||
193 | |||
194 | // Render any child nodes. | ||
195 | switch n.Data { | ||
196 | case "iframe", "noembed", "noframes", "noscript", "plaintext", "script", "style", "xmp": | ||
197 | for c := n.FirstChild; c != nil; c = c.NextSibling { | ||
198 | if c.Type == TextNode { | ||
199 | if _, err := w.WriteString(c.Data); err != nil { | ||
200 | return err | ||
201 | } | ||
202 | } else { | ||
203 | if err := render1(w, c); err != nil { | ||
204 | return err | ||
205 | } | ||
206 | } | ||
207 | } | ||
208 | if n.Data == "plaintext" { | ||
209 | // Don't render anything else. <plaintext> must be the | ||
210 | // last element in the file, with no closing tag. | ||
211 | return plaintextAbort | ||
212 | } | ||
213 | default: | ||
214 | for c := n.FirstChild; c != nil; c = c.NextSibling { | ||
215 | if err := render1(w, c); err != nil { | ||
216 | return err | ||
217 | } | ||
218 | } | ||
219 | } | ||
220 | |||
221 | // Render the </xxx> closing tag. | ||
222 | if _, err := w.WriteString("</"); err != nil { | ||
223 | return err | ||
224 | } | ||
225 | if _, err := w.WriteString(n.Data); err != nil { | ||
226 | return err | ||
227 | } | ||
228 | return w.WriteByte('>') | ||
229 | } | ||
230 | |||
231 | // writeQuoted writes s to w surrounded by quotes. Normally it will use double | ||
232 | // quotes, but if s contains a double quote, it will use single quotes. | ||
233 | // It is used for writing the identifiers in a doctype declaration. | ||
234 | // In valid HTML, they can't contain both types of quotes. | ||
235 | func writeQuoted(w writer, s string) error { | ||
236 | var q byte = '"' | ||
237 | if strings.Contains(s, `"`) { | ||
238 | q = '\'' | ||
239 | } | ||
240 | if err := w.WriteByte(q); err != nil { | ||
241 | return err | ||
242 | } | ||
243 | if _, err := w.WriteString(s); err != nil { | ||
244 | return err | ||
245 | } | ||
246 | if err := w.WriteByte(q); err != nil { | ||
247 | return err | ||
248 | } | ||
249 | return nil | ||
250 | } | ||
251 | |||
252 | // Section 12.1.2, "Elements", gives this list of void elements. Void elements | ||
253 | // are those that can't have any contents. | ||
254 | var voidElements = map[string]bool{ | ||
255 | "area": true, | ||
256 | "base": true, | ||
257 | "br": true, | ||
258 | "col": true, | ||
259 | "command": true, | ||
260 | "embed": true, | ||
261 | "hr": true, | ||
262 | "img": true, | ||
263 | "input": true, | ||
264 | "keygen": true, | ||
265 | "link": true, | ||
266 | "meta": true, | ||
267 | "param": true, | ||
268 | "source": true, | ||
269 | "track": true, | ||
270 | "wbr": true, | ||
271 | } | ||
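The Render documentation at the top of this file stresses that rendering is best-effort: re-parsing the output yields a similar, but not necessarily identical, tree. A small round-trip sketch; the input markup is an illustrative assumption.

```go
package main

import (
	"bytes"
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

func main() {
	// Parse deliberately sloppy input, then render it back out.
	doc, err := html.Parse(strings.NewReader(`<p><table><a href="/x">link`))
	if err != nil {
		panic(err)
	}
	var buf bytes.Buffer
	if err := html.Render(&buf, doc); err != nil {
		panic(err)
	}
	// The output is a complete document: the parser has supplied the <html>,
	// <head> and <body> elements and reparented the <a> out of the <table>,
	// as described in the Render doc comment.
	fmt.Println(buf.String())
}
```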
diff --git a/vendor/golang.org/x/net/html/token.go b/vendor/golang.org/x/net/html/token.go new file mode 100644 index 0000000..893e272 --- /dev/null +++ b/vendor/golang.org/x/net/html/token.go | |||
@@ -0,0 +1,1219 @@ | |||
1 | // Copyright 2010 The Go Authors. All rights reserved. | ||
2 | // Use of this source code is governed by a BSD-style | ||
3 | // license that can be found in the LICENSE file. | ||
4 | |||
5 | package html | ||
6 | |||
7 | import ( | ||
8 | "bytes" | ||
9 | "errors" | ||
10 | "io" | ||
11 | "strconv" | ||
12 | "strings" | ||
13 | |||
14 | "golang.org/x/net/html/atom" | ||
15 | ) | ||
16 | |||
17 | // A TokenType is the type of a Token. | ||
18 | type TokenType uint32 | ||
19 | |||
20 | const ( | ||
21 | // ErrorToken means that an error occurred during tokenization. | ||
22 | ErrorToken TokenType = iota | ||
23 | // TextToken means a text node. | ||
24 | TextToken | ||
25 | // A StartTagToken looks like <a>. | ||
26 | StartTagToken | ||
27 | // An EndTagToken looks like </a>. | ||
28 | EndTagToken | ||
29 | // A SelfClosingTagToken tag looks like <br/>. | ||
30 | SelfClosingTagToken | ||
31 | // A CommentToken looks like <!--x-->. | ||
32 | CommentToken | ||
33 | // A DoctypeToken looks like <!DOCTYPE x> | ||
34 | DoctypeToken | ||
35 | ) | ||
36 | |||
37 | // ErrBufferExceeded means that the buffering limit was exceeded. | ||
38 | var ErrBufferExceeded = errors.New("max buffer exceeded") | ||
39 | |||
40 | // String returns a string representation of the TokenType. | ||
41 | func (t TokenType) String() string { | ||
42 | switch t { | ||
43 | case ErrorToken: | ||
44 | return "Error" | ||
45 | case TextToken: | ||
46 | return "Text" | ||
47 | case StartTagToken: | ||
48 | return "StartTag" | ||
49 | case EndTagToken: | ||
50 | return "EndTag" | ||
51 | case SelfClosingTagToken: | ||
52 | return "SelfClosingTag" | ||
53 | case CommentToken: | ||
54 | return "Comment" | ||
55 | case DoctypeToken: | ||
56 | return "Doctype" | ||
57 | } | ||
58 | return "Invalid(" + strconv.Itoa(int(t)) + ")" | ||
59 | } | ||
60 | |||
61 | // An Attribute is an attribute namespace-key-value triple. Namespace is | ||
62 | // non-empty for foreign attributes like xlink, Key is alphabetic (and hence | ||
63 | // does not contain escapable characters like '&', '<' or '>'), and Val is | ||
64 | // unescaped (it looks like "a<b" rather than "a&lt;b"). | ||
65 | // | ||
66 | // Namespace is only used by the parser, not the tokenizer. | ||
67 | type Attribute struct { | ||
68 | Namespace, Key, Val string | ||
69 | } | ||
70 | |||
71 | // A Token consists of a TokenType and some Data (tag name for start and end | ||
72 | // tags, content for text, comments and doctypes). A tag Token may also contain | ||
73 | // a slice of Attributes. Data is unescaped for all Tokens (it looks like "a<b" | ||
74 | // rather than "a&lt;b"). For tag Tokens, DataAtom is the atom for Data, or | ||
75 | // zero if Data is not a known tag name. | ||
76 | type Token struct { | ||
77 | Type TokenType | ||
78 | DataAtom atom.Atom | ||
79 | Data string | ||
80 | Attr []Attribute | ||
81 | } | ||
82 | |||
83 | // tagString returns a string representation of a tag Token's Data and Attr. | ||
84 | func (t Token) tagString() string { | ||
85 | if len(t.Attr) == 0 { | ||
86 | return t.Data | ||
87 | } | ||
88 | buf := bytes.NewBufferString(t.Data) | ||
89 | for _, a := range t.Attr { | ||
90 | buf.WriteByte(' ') | ||
91 | buf.WriteString(a.Key) | ||
92 | buf.WriteString(`="`) | ||
93 | escape(buf, a.Val) | ||
94 | buf.WriteByte('"') | ||
95 | } | ||
96 | return buf.String() | ||
97 | } | ||
98 | |||
99 | // String returns a string representation of the Token. | ||
100 | func (t Token) String() string { | ||
101 | switch t.Type { | ||
102 | case ErrorToken: | ||
103 | return "" | ||
104 | case TextToken: | ||
105 | return EscapeString(t.Data) | ||
106 | case StartTagToken: | ||
107 | return "<" + t.tagString() + ">" | ||
108 | case EndTagToken: | ||
109 | return "</" + t.tagString() + ">" | ||
110 | case SelfClosingTagToken: | ||
111 | return "<" + t.tagString() + "/>" | ||
112 | case CommentToken: | ||
113 | return "<!--" + t.Data + "-->" | ||
114 | case DoctypeToken: | ||
115 | return "<!DOCTYPE " + t.Data + ">" | ||
116 | } | ||
117 | return "Invalid(" + strconv.Itoa(int(t.Type)) + ")" | ||
118 | } | ||
119 | |||
120 | // span is a range of bytes in a Tokenizer's buffer. The start is inclusive, | ||
121 | // the end is exclusive. | ||
122 | type span struct { | ||
123 | start, end int | ||
124 | } | ||
125 | |||
126 | // A Tokenizer returns a stream of HTML Tokens. | ||
127 | type Tokenizer struct { | ||
128 | // r is the source of the HTML text. | ||
129 | r io.Reader | ||
130 | // tt is the TokenType of the current token. | ||
131 | tt TokenType | ||
132 | // err is the first error encountered during tokenization. It is possible | ||
133 | // for tt != Error && err != nil to hold: this means that Next returned a | ||
134 | // valid token but the subsequent Next call will return an error token. | ||
135 | // For example, if the HTML text input was just "plain", then the first | ||
136 | // Next call would set z.err to io.EOF but return a TextToken, and all | ||
137 | // subsequent Next calls would return an ErrorToken. | ||
138 | // err is never reset. Once it becomes non-nil, it stays non-nil. | ||
139 | err error | ||
140 | // readErr is the error returned by the io.Reader r. It is separate from | ||
141 | // err because it is valid for an io.Reader to return (n int, err1 error) | ||
142 | // such that n > 0 && err1 != nil, and callers should always process the | ||
143 | // n > 0 bytes before considering the error err1. | ||
144 | readErr error | ||
145 | // buf[raw.start:raw.end] holds the raw bytes of the current token. | ||
146 | // buf[raw.end:] is buffered input that will yield future tokens. | ||
147 | raw span | ||
148 | buf []byte | ||
149 | // maxBuf limits the data buffered in buf. A value of 0 means unlimited. | ||
150 | maxBuf int | ||
151 | // buf[data.start:data.end] holds the raw bytes of the current token's data: | ||
152 | // a text token's text, a tag token's tag name, etc. | ||
153 | data span | ||
154 | // pendingAttr is the attribute key and value currently being tokenized. | ||
155 | // When complete, pendingAttr is pushed onto attr. nAttrReturned is | ||
156 | // incremented on each call to TagAttr. | ||
157 | pendingAttr [2]span | ||
158 | attr [][2]span | ||
159 | nAttrReturned int | ||
160 | // rawTag is the "script" in "</script>" that closes the next token. If | ||
161 | // non-empty, the subsequent call to Next will return a raw or RCDATA text | ||
162 | // token: one that treats "<p>" as text instead of an element. | ||
163 | // rawTag's contents are lower-cased. | ||
164 | rawTag string | ||
165 | // textIsRaw is whether the current text token's data is not escaped. | ||
166 | textIsRaw bool | ||
167 | // convertNUL is whether NUL bytes in the current token's data should | ||
168 | // be converted into \ufffd replacement characters. | ||
169 | convertNUL bool | ||
170 | // allowCDATA is whether CDATA sections are allowed in the current context. | ||
171 | allowCDATA bool | ||
172 | } | ||
173 | |||
174 | // AllowCDATA sets whether or not the tokenizer recognizes <![CDATA[foo]]> as | ||
175 | // the text "foo". The default value is false, which means to recognize it as | ||
176 | // a bogus comment "<!-- [CDATA[foo]] -->" instead. | ||
177 | // | ||
178 | // Strictly speaking, an HTML5 compliant tokenizer should allow CDATA if and | ||
179 | // only if tokenizing foreign content, such as MathML and SVG. However, | ||
180 | // tracking foreign-contentness is difficult to do purely in the tokenizer, | ||
181 | // as opposed to the parser, due to HTML integration points: an <svg> element | ||
182 | // can contain a <foreignObject> that is foreign-to-SVG but not foreign-to- | ||
183 | // HTML. For strict compliance with the HTML5 tokenization algorithm, it is the | ||
184 | // responsibility of the user of a tokenizer to call AllowCDATA as appropriate. | ||
185 | // In practice, if using the tokenizer without caring whether MathML or SVG | ||
186 | // CDATA is text or comments, such as tokenizing HTML to find all the anchor | ||
187 | // text, it is acceptable to ignore this responsibility. | ||
188 | func (z *Tokenizer) AllowCDATA(allowCDATA bool) { | ||
189 | z.allowCDATA = allowCDATA | ||
190 | } | ||
191 | |||
192 | // NextIsNotRawText instructs the tokenizer that the next token should not be | ||
193 | // considered as 'raw text'. Some elements, such as script and title elements, | ||
194 | // normally require the next token after the opening tag to be 'raw text' that | ||
195 | // has no child elements. For example, tokenizing "<title>a<b>c</b>d</title>" | ||
196 | // yields a start tag token for "<title>", a text token for "a<b>c</b>d", and | ||
197 | // an end tag token for "</title>". There are no distinct start tag or end tag | ||
198 | // tokens for the "<b>" and "</b>". | ||
199 | // | ||
200 | // This tokenizer implementation will generally look for raw text at the right | ||
201 | // times. Strictly speaking, an HTML5 compliant tokenizer should not look for | ||
202 | // raw text if in foreign content: <title> generally needs raw text, but a | ||
203 | // <title> inside an <svg> does not. Another example is that a <textarea> | ||
204 | // generally needs raw text, but a <textarea> is not allowed as an immediate | ||
205 | // child of a <select>; in normal parsing, a <textarea> implies </select>, but | ||
206 | // one cannot close the implicit element when parsing a <select>'s InnerHTML. | ||
207 | // Similarly to AllowCDATA, tracking the correct moment to override raw-text- | ||
208 | // ness is difficult to do purely in the tokenizer, as opposed to the parser. | ||
209 | // For strict compliance with the HTML5 tokenization algorithm, it is the | ||
210 | // responsibility of the user of a tokenizer to call NextIsNotRawText as | ||
211 | // appropriate. In practice, like AllowCDATA, it is acceptable to ignore this | ||
212 | // responsibility for basic usage. | ||
213 | // | ||
214 | // Note that this 'raw text' concept is different from the one offered by the | ||
215 | // Tokenizer.Raw method. | ||
216 | func (z *Tokenizer) NextIsNotRawText() { | ||
217 | z.rawTag = "" | ||
218 | } | ||
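AllowCDATA and NextIsNotRawText shift some spec-compliance responsibility onto the caller. Below is a hedged sketch of a caller that simply opts into CDATA handling for foreign-content input; the input string is an assumption, and a strictly compliant caller would track foreign content the way the parser in parse.go does.

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

func main() {
	// Tokenize a fragment containing a CDATA section. CDATA is only meaningful
	// in foreign content (SVG/MathML); per the docs above it is the caller's
	// job to enable it, so this sketch simply switches it on.
	z := html.NewTokenizer(strings.NewReader("<svg><![CDATA[x < y]]></svg>"))
	z.AllowCDATA(true) // treat <![CDATA[...]]> as text rather than a bogus comment
	for {
		tt := z.Next()
		if tt == html.ErrorToken {
			break // io.EOF once the input is exhausted
		}
		if tt == html.TextToken {
			fmt.Printf("text: %q\n", z.Text())
		}
	}
}
```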
219 | |||
220 | // Err returns the error associated with the most recent ErrorToken token. | ||
221 | // This is typically io.EOF, meaning the end of tokenization. | ||
222 | func (z *Tokenizer) Err() error { | ||
223 | if z.tt != ErrorToken { | ||
224 | return nil | ||
225 | } | ||
226 | return z.err | ||
227 | } | ||
228 | |||
229 | // readByte returns the next byte from the input stream, doing a buffered read | ||
230 | // from z.r into z.buf if necessary. z.buf[z.raw.start:z.raw.end] remains a contiguous byte | ||
231 | // slice that holds all the bytes read so far for the current token. | ||
232 | // It sets z.err if the underlying reader returns an error. | ||
233 | // Pre-condition: z.err == nil. | ||
234 | func (z *Tokenizer) readByte() byte { | ||
235 | if z.raw.end >= len(z.buf) { | ||
236 | // Our buffer is exhausted and we have to read from z.r. Check if the | ||
237 | // previous read resulted in an error. | ||
238 | if z.readErr != nil { | ||
239 | z.err = z.readErr | ||
240 | return 0 | ||
241 | } | ||
242 | // We copy z.buf[z.raw.start:z.raw.end] to the beginning of z.buf. If the length | ||
243 | // z.raw.end - z.raw.start is more than half the capacity of z.buf, then we | ||
244 | // allocate a new buffer before the copy. | ||
245 | c := cap(z.buf) | ||
246 | d := z.raw.end - z.raw.start | ||
247 | var buf1 []byte | ||
248 | if 2*d > c { | ||
249 | buf1 = make([]byte, d, 2*c) | ||
250 | } else { | ||
251 | buf1 = z.buf[:d] | ||
252 | } | ||
253 | copy(buf1, z.buf[z.raw.start:z.raw.end]) | ||
254 | if x := z.raw.start; x != 0 { | ||
255 | // Adjust the data/attr spans to refer to the same contents after the copy. | ||
256 | z.data.start -= x | ||
257 | z.data.end -= x | ||
258 | z.pendingAttr[0].start -= x | ||
259 | z.pendingAttr[0].end -= x | ||
260 | z.pendingAttr[1].start -= x | ||
261 | z.pendingAttr[1].end -= x | ||
262 | for i := range z.attr { | ||
263 | z.attr[i][0].start -= x | ||
264 | z.attr[i][0].end -= x | ||
265 | z.attr[i][1].start -= x | ||
266 | z.attr[i][1].end -= x | ||
267 | } | ||
268 | } | ||
269 | z.raw.start, z.raw.end, z.buf = 0, d, buf1[:d] | ||
270 | // Now that we have copied the live bytes to the start of the buffer, | ||
271 | // we read from z.r into the remainder. | ||
272 | var n int | ||
273 | n, z.readErr = readAtLeastOneByte(z.r, buf1[d:cap(buf1)]) | ||
274 | if n == 0 { | ||
275 | z.err = z.readErr | ||
276 | return 0 | ||
277 | } | ||
278 | z.buf = buf1[:d+n] | ||
279 | } | ||
280 | x := z.buf[z.raw.end] | ||
281 | z.raw.end++ | ||
282 | if z.maxBuf > 0 && z.raw.end-z.raw.start >= z.maxBuf { | ||
283 | z.err = ErrBufferExceeded | ||
284 | return 0 | ||
285 | } | ||
286 | return x | ||
287 | } | ||
288 | |||
289 | // Buffered returns a slice containing data buffered but not yet tokenized. | ||
290 | func (z *Tokenizer) Buffered() []byte { | ||
291 | return z.buf[z.raw.end:] | ||
292 | } | ||
293 | |||
294 | // readAtLeastOneByte wraps an io.Reader so that reading cannot return (0, nil). | ||
295 | // It returns io.ErrNoProgress if the underlying r.Read method returns (0, nil) | ||
296 | // too many times in succession. | ||
297 | func readAtLeastOneByte(r io.Reader, b []byte) (int, error) { | ||
298 | for i := 0; i < 100; i++ { | ||
299 | n, err := r.Read(b) | ||
300 | if n != 0 || err != nil { | ||
301 | return n, err | ||
302 | } | ||
303 | } | ||
304 | return 0, io.ErrNoProgress | ||
305 | } | ||
306 | |||
307 | // skipWhiteSpace skips past any white space. | ||
308 | func (z *Tokenizer) skipWhiteSpace() { | ||
309 | if z.err != nil { | ||
310 | return | ||
311 | } | ||
312 | for { | ||
313 | c := z.readByte() | ||
314 | if z.err != nil { | ||
315 | return | ||
316 | } | ||
317 | switch c { | ||
318 | case ' ', '\n', '\r', '\t', '\f': | ||
319 | // No-op. | ||
320 | default: | ||
321 | z.raw.end-- | ||
322 | return | ||
323 | } | ||
324 | } | ||
325 | } | ||
326 | |||
327 | // readRawOrRCDATA reads until the next "</foo>", where "foo" is z.rawTag and | ||
328 | // is typically something like "script" or "textarea". | ||
329 | func (z *Tokenizer) readRawOrRCDATA() { | ||
330 | if z.rawTag == "script" { | ||
331 | z.readScript() | ||
332 | z.textIsRaw = true | ||
333 | z.rawTag = "" | ||
334 | return | ||
335 | } | ||
336 | loop: | ||
337 | for { | ||
338 | c := z.readByte() | ||
339 | if z.err != nil { | ||
340 | break loop | ||
341 | } | ||
342 | if c != '<' { | ||
343 | continue loop | ||
344 | } | ||
345 | c = z.readByte() | ||
346 | if z.err != nil { | ||
347 | break loop | ||
348 | } | ||
349 | if c != '/' { | ||
350 | continue loop | ||
351 | } | ||
352 | if z.readRawEndTag() || z.err != nil { | ||
353 | break loop | ||
354 | } | ||
355 | } | ||
356 | z.data.end = z.raw.end | ||
357 | // A textarea's or title's RCDATA can contain escaped entities. | ||
358 | z.textIsRaw = z.rawTag != "textarea" && z.rawTag != "title" | ||
359 | z.rawTag = "" | ||
360 | } | ||
361 | |||
362 | // readRawEndTag attempts to read a tag like "</foo>", where "foo" is z.rawTag. | ||
363 | // If it succeeds, it backs up the input position to reconsume the tag and | ||
364 | // returns true. Otherwise it returns false. The opening "</" has already been | ||
365 | // consumed. | ||
366 | func (z *Tokenizer) readRawEndTag() bool { | ||
367 | for i := 0; i < len(z.rawTag); i++ { | ||
368 | c := z.readByte() | ||
369 | if z.err != nil { | ||
370 | return false | ||
371 | } | ||
372 | if c != z.rawTag[i] && c != z.rawTag[i]-('a'-'A') { | ||
373 | z.raw.end-- | ||
374 | return false | ||
375 | } | ||
376 | } | ||
377 | c := z.readByte() | ||
378 | if z.err != nil { | ||
379 | return false | ||
380 | } | ||
381 | switch c { | ||
382 | case ' ', '\n', '\r', '\t', '\f', '/', '>': | ||
383 | // The 3 is 2 for the leading "</" plus 1 for the trailing character c. | ||
384 | z.raw.end -= 3 + len(z.rawTag) | ||
385 | return true | ||
386 | } | ||
387 | z.raw.end-- | ||
388 | return false | ||
389 | } | ||
390 | |||
391 | // readScript reads until the next </script> tag, following the byzantine | ||
392 | // rules for escaping/hiding the closing tag. | ||
393 | func (z *Tokenizer) readScript() { | ||
394 | defer func() { | ||
395 | z.data.end = z.raw.end | ||
396 | }() | ||
397 | var c byte | ||
398 | |||
399 | scriptData: | ||
400 | c = z.readByte() | ||
401 | if z.err != nil { | ||
402 | return | ||
403 | } | ||
404 | if c == '<' { | ||
405 | goto scriptDataLessThanSign | ||
406 | } | ||
407 | goto scriptData | ||
408 | |||
409 | scriptDataLessThanSign: | ||
410 | c = z.readByte() | ||
411 | if z.err != nil { | ||
412 | return | ||
413 | } | ||
414 | switch c { | ||
415 | case '/': | ||
416 | goto scriptDataEndTagOpen | ||
417 | case '!': | ||
418 | goto scriptDataEscapeStart | ||
419 | } | ||
420 | z.raw.end-- | ||
421 | goto scriptData | ||
422 | |||
423 | scriptDataEndTagOpen: | ||
424 | if z.readRawEndTag() || z.err != nil { | ||
425 | return | ||
426 | } | ||
427 | goto scriptData | ||
428 | |||
429 | scriptDataEscapeStart: | ||
430 | c = z.readByte() | ||
431 | if z.err != nil { | ||
432 | return | ||
433 | } | ||
434 | if c == '-' { | ||
435 | goto scriptDataEscapeStartDash | ||
436 | } | ||
437 | z.raw.end-- | ||
438 | goto scriptData | ||
439 | |||
440 | scriptDataEscapeStartDash: | ||
441 | c = z.readByte() | ||
442 | if z.err != nil { | ||
443 | return | ||
444 | } | ||
445 | if c == '-' { | ||
446 | goto scriptDataEscapedDashDash | ||
447 | } | ||
448 | z.raw.end-- | ||
449 | goto scriptData | ||
450 | |||
451 | scriptDataEscaped: | ||
452 | c = z.readByte() | ||
453 | if z.err != nil { | ||
454 | return | ||
455 | } | ||
456 | switch c { | ||
457 | case '-': | ||
458 | goto scriptDataEscapedDash | ||
459 | case '<': | ||
460 | goto scriptDataEscapedLessThanSign | ||
461 | } | ||
462 | goto scriptDataEscaped | ||
463 | |||
464 | scriptDataEscapedDash: | ||
465 | c = z.readByte() | ||
466 | if z.err != nil { | ||
467 | return | ||
468 | } | ||
469 | switch c { | ||
470 | case '-': | ||
471 | goto scriptDataEscapedDashDash | ||
472 | case '<': | ||
473 | goto scriptDataEscapedLessThanSign | ||
474 | } | ||
475 | goto scriptDataEscaped | ||
476 | |||
477 | scriptDataEscapedDashDash: | ||
478 | c = z.readByte() | ||
479 | if z.err != nil { | ||
480 | return | ||
481 | } | ||
482 | switch c { | ||
483 | case '-': | ||
484 | goto scriptDataEscapedDashDash | ||
485 | case '<': | ||
486 | goto scriptDataEscapedLessThanSign | ||
487 | case '>': | ||
488 | goto scriptData | ||
489 | } | ||
490 | goto scriptDataEscaped | ||
491 | |||
492 | scriptDataEscapedLessThanSign: | ||
493 | c = z.readByte() | ||
494 | if z.err != nil { | ||
495 | return | ||
496 | } | ||
497 | if c == '/' { | ||
498 | goto scriptDataEscapedEndTagOpen | ||
499 | } | ||
500 | if 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' { | ||
501 | goto scriptDataDoubleEscapeStart | ||
502 | } | ||
503 | z.raw.end-- | ||
504 | goto scriptData | ||
505 | |||
506 | scriptDataEscapedEndTagOpen: | ||
507 | if z.readRawEndTag() || z.err != nil { | ||
508 | return | ||
509 | } | ||
510 | goto scriptDataEscaped | ||
511 | |||
512 | scriptDataDoubleEscapeStart: | ||
513 | z.raw.end-- | ||
514 | for i := 0; i < len("script"); i++ { | ||
515 | c = z.readByte() | ||
516 | if z.err != nil { | ||
517 | return | ||
518 | } | ||
519 | if c != "script"[i] && c != "SCRIPT"[i] { | ||
520 | z.raw.end-- | ||
521 | goto scriptDataEscaped | ||
522 | } | ||
523 | } | ||
524 | c = z.readByte() | ||
525 | if z.err != nil { | ||
526 | return | ||
527 | } | ||
528 | switch c { | ||
529 | case ' ', '\n', '\r', '\t', '\f', '/', '>': | ||
530 | goto scriptDataDoubleEscaped | ||
531 | } | ||
532 | z.raw.end-- | ||
533 | goto scriptDataEscaped | ||
534 | |||
535 | scriptDataDoubleEscaped: | ||
536 | c = z.readByte() | ||
537 | if z.err != nil { | ||
538 | return | ||
539 | } | ||
540 | switch c { | ||
541 | case '-': | ||
542 | goto scriptDataDoubleEscapedDash | ||
543 | case '<': | ||
544 | goto scriptDataDoubleEscapedLessThanSign | ||
545 | } | ||
546 | goto scriptDataDoubleEscaped | ||
547 | |||
548 | scriptDataDoubleEscapedDash: | ||
549 | c = z.readByte() | ||
550 | if z.err != nil { | ||
551 | return | ||
552 | } | ||
553 | switch c { | ||
554 | case '-': | ||
555 | goto scriptDataDoubleEscapedDashDash | ||
556 | case '<': | ||
557 | goto scriptDataDoubleEscapedLessThanSign | ||
558 | } | ||
559 | goto scriptDataDoubleEscaped | ||
560 | |||
561 | scriptDataDoubleEscapedDashDash: | ||
562 | c = z.readByte() | ||
563 | if z.err != nil { | ||
564 | return | ||
565 | } | ||
566 | switch c { | ||
567 | case '-': | ||
568 | goto scriptDataDoubleEscapedDashDash | ||
569 | case '<': | ||
570 | goto scriptDataDoubleEscapedLessThanSign | ||
571 | case '>': | ||
572 | goto scriptData | ||
573 | } | ||
574 | goto scriptDataDoubleEscaped | ||
575 | |||
576 | scriptDataDoubleEscapedLessThanSign: | ||
577 | c = z.readByte() | ||
578 | if z.err != nil { | ||
579 | return | ||
580 | } | ||
581 | if c == '/' { | ||
582 | goto scriptDataDoubleEscapeEnd | ||
583 | } | ||
584 | z.raw.end-- | ||
585 | goto scriptDataDoubleEscaped | ||
586 | |||
587 | scriptDataDoubleEscapeEnd: | ||
588 | if z.readRawEndTag() { | ||
589 | z.raw.end += len("</script>") | ||
590 | goto scriptDataEscaped | ||
591 | } | ||
592 | if z.err != nil { | ||
593 | return | ||
594 | } | ||
595 | goto scriptDataDoubleEscaped | ||
596 | } | ||
597 | |||
598 | // readComment reads the next comment token starting with "<!--". The opening | ||
599 | // "<!--" has already been consumed. | ||
600 | func (z *Tokenizer) readComment() { | ||
601 | z.data.start = z.raw.end | ||
602 | defer func() { | ||
603 | if z.data.end < z.data.start { | ||
604 | // It's a comment with no data, like <!-->. | ||
605 | z.data.end = z.data.start | ||
606 | } | ||
607 | }() | ||
608 | for dashCount := 2; ; { | ||
609 | c := z.readByte() | ||
610 | if z.err != nil { | ||
611 | // Ignore up to two dashes at EOF. | ||
612 | if dashCount > 2 { | ||
613 | dashCount = 2 | ||
614 | } | ||
615 | z.data.end = z.raw.end - dashCount | ||
616 | return | ||
617 | } | ||
618 | switch c { | ||
619 | case '-': | ||
620 | dashCount++ | ||
621 | continue | ||
622 | case '>': | ||
623 | if dashCount >= 2 { | ||
624 | z.data.end = z.raw.end - len("-->") | ||
625 | return | ||
626 | } | ||
627 | case '!': | ||
628 | if dashCount >= 2 { | ||
629 | c = z.readByte() | ||
630 | if z.err != nil { | ||
631 | z.data.end = z.raw.end | ||
632 | return | ||
633 | } | ||
634 | if c == '>' { | ||
635 | z.data.end = z.raw.end - len("--!>") | ||
636 | return | ||
637 | } | ||
638 | } | ||
639 | } | ||
640 | dashCount = 0 | ||
641 | } | ||
642 | } | ||
643 | |||
644 | // readUntilCloseAngle reads until the next ">". | ||
645 | func (z *Tokenizer) readUntilCloseAngle() { | ||
646 | z.data.start = z.raw.end | ||
647 | for { | ||
648 | c := z.readByte() | ||
649 | if z.err != nil { | ||
650 | z.data.end = z.raw.end | ||
651 | return | ||
652 | } | ||
653 | if c == '>' { | ||
654 | z.data.end = z.raw.end - len(">") | ||
655 | return | ||
656 | } | ||
657 | } | ||
658 | } | ||
659 | |||
660 | // readMarkupDeclaration reads the next token starting with "<!". It might be | ||
661 | // a "<!--comment-->", a "<!DOCTYPE foo>", a "<![CDATA[section]]>" or | ||
662 | // "<!a bogus comment". The opening "<!" has already been consumed. | ||
663 | func (z *Tokenizer) readMarkupDeclaration() TokenType { | ||
664 | z.data.start = z.raw.end | ||
665 | var c [2]byte | ||
666 | for i := 0; i < 2; i++ { | ||
667 | c[i] = z.readByte() | ||
668 | if z.err != nil { | ||
669 | z.data.end = z.raw.end | ||
670 | return CommentToken | ||
671 | } | ||
672 | } | ||
673 | if c[0] == '-' && c[1] == '-' { | ||
674 | z.readComment() | ||
675 | return CommentToken | ||
676 | } | ||
677 | z.raw.end -= 2 | ||
678 | if z.readDoctype() { | ||
679 | return DoctypeToken | ||
680 | } | ||
681 | if z.allowCDATA && z.readCDATA() { | ||
682 | z.convertNUL = true | ||
683 | return TextToken | ||
684 | } | ||
685 | // It's a bogus comment. | ||
686 | z.readUntilCloseAngle() | ||
687 | return CommentToken | ||
688 | } | ||
689 | |||
690 | // readDoctype attempts to read a doctype declaration and returns true if | ||
691 | // successful. The opening "<!" has already been consumed. | ||
692 | func (z *Tokenizer) readDoctype() bool { | ||
693 | const s = "DOCTYPE" | ||
694 | for i := 0; i < len(s); i++ { | ||
695 | c := z.readByte() | ||
696 | if z.err != nil { | ||
697 | z.data.end = z.raw.end | ||
698 | return false | ||
699 | } | ||
700 | if c != s[i] && c != s[i]+('a'-'A') { | ||
701 | // Back up to read the fragment of "DOCTYPE" again. | ||
702 | z.raw.end = z.data.start | ||
703 | return false | ||
704 | } | ||
705 | } | ||
706 | if z.skipWhiteSpace(); z.err != nil { | ||
707 | z.data.start = z.raw.end | ||
708 | z.data.end = z.raw.end | ||
709 | return true | ||
710 | } | ||
711 | z.readUntilCloseAngle() | ||
712 | return true | ||
713 | } | ||
714 | |||
715 | // readCDATA attempts to read a CDATA section and returns true if | ||
716 | // successful. The opening "<!" has already been consumed. | ||
717 | func (z *Tokenizer) readCDATA() bool { | ||
718 | const s = "[CDATA[" | ||
719 | for i := 0; i < len(s); i++ { | ||
720 | c := z.readByte() | ||
721 | if z.err != nil { | ||
722 | z.data.end = z.raw.end | ||
723 | return false | ||
724 | } | ||
725 | if c != s[i] { | ||
726 | // Back up to read the fragment of "[CDATA[" again. | ||
727 | z.raw.end = z.data.start | ||
728 | return false | ||
729 | } | ||
730 | } | ||
731 | z.data.start = z.raw.end | ||
732 | brackets := 0 | ||
733 | for { | ||
734 | c := z.readByte() | ||
735 | if z.err != nil { | ||
736 | z.data.end = z.raw.end | ||
737 | return true | ||
738 | } | ||
739 | switch c { | ||
740 | case ']': | ||
741 | brackets++ | ||
742 | case '>': | ||
743 | if brackets >= 2 { | ||
744 | z.data.end = z.raw.end - len("]]>") | ||
745 | return true | ||
746 | } | ||
747 | brackets = 0 | ||
748 | default: | ||
749 | brackets = 0 | ||
750 | } | ||
751 | } | ||
752 | } | ||
753 | |||
754 | // startTagIn returns whether the start tag in z.buf[z.data.start:z.data.end] | ||
755 | // case-insensitively matches any element of ss. | ||
756 | func (z *Tokenizer) startTagIn(ss ...string) bool { | ||
757 | loop: | ||
758 | for _, s := range ss { | ||
759 | if z.data.end-z.data.start != len(s) { | ||
760 | continue loop | ||
761 | } | ||
762 | for i := 0; i < len(s); i++ { | ||
763 | c := z.buf[z.data.start+i] | ||
764 | if 'A' <= c && c <= 'Z' { | ||
765 | c += 'a' - 'A' | ||
766 | } | ||
767 | if c != s[i] { | ||
768 | continue loop | ||
769 | } | ||
770 | } | ||
771 | return true | ||
772 | } | ||
773 | return false | ||
774 | } | ||
775 | |||
776 | // readStartTag reads the next start tag token. The opening "<a" has already | ||
777 | // been consumed, where 'a' means anything in [A-Za-z]. | ||
778 | func (z *Tokenizer) readStartTag() TokenType { | ||
779 | z.readTag(true) | ||
780 | if z.err != nil { | ||
781 | return ErrorToken | ||
782 | } | ||
783 | // Several tags flag the tokenizer's next token as raw. | ||
784 | c, raw := z.buf[z.data.start], false | ||
785 | if 'A' <= c && c <= 'Z' { | ||
786 | c += 'a' - 'A' | ||
787 | } | ||
788 | switch c { | ||
789 | case 'i': | ||
790 | raw = z.startTagIn("iframe") | ||
791 | case 'n': | ||
792 | raw = z.startTagIn("noembed", "noframes", "noscript") | ||
793 | case 'p': | ||
794 | raw = z.startTagIn("plaintext") | ||
795 | case 's': | ||
796 | raw = z.startTagIn("script", "style") | ||
797 | case 't': | ||
798 | raw = z.startTagIn("textarea", "title") | ||
799 | case 'x': | ||
800 | raw = z.startTagIn("xmp") | ||
801 | } | ||
802 | if raw { | ||
803 | z.rawTag = strings.ToLower(string(z.buf[z.data.start:z.data.end])) | ||
804 | } | ||
805 | // Look for a self-closing token like "<br/>". | ||
806 | if z.err == nil && z.buf[z.raw.end-2] == '/' { | ||
807 | return SelfClosingTagToken | ||
808 | } | ||
809 | return StartTagToken | ||
810 | } | ||
811 | |||
812 | // readTag reads the next tag token and its attributes. If saveAttr, those | ||
813 | // attributes are saved in z.attr, otherwise z.attr is set to an empty slice. | ||
814 | // The opening "<a" or "</a" has already been consumed, where 'a' means anything | ||
815 | // in [A-Za-z]. | ||
816 | func (z *Tokenizer) readTag(saveAttr bool) { | ||
817 | z.attr = z.attr[:0] | ||
818 | z.nAttrReturned = 0 | ||
819 | // Read the tag name and attribute key/value pairs. | ||
820 | z.readTagName() | ||
821 | if z.skipWhiteSpace(); z.err != nil { | ||
822 | return | ||
823 | } | ||
824 | for { | ||
825 | c := z.readByte() | ||
826 | if z.err != nil || c == '>' { | ||
827 | break | ||
828 | } | ||
829 | z.raw.end-- | ||
830 | z.readTagAttrKey() | ||
831 | z.readTagAttrVal() | ||
832 | // Save pendingAttr if saveAttr and that attribute has a non-empty key. | ||
833 | if saveAttr && z.pendingAttr[0].start != z.pendingAttr[0].end { | ||
834 | z.attr = append(z.attr, z.pendingAttr) | ||
835 | } | ||
836 | if z.skipWhiteSpace(); z.err != nil { | ||
837 | break | ||
838 | } | ||
839 | } | ||
840 | } | ||
841 | |||
842 | // readTagName sets z.data to the "div" in "<div k=v>". The reader (z.raw.end) | ||
843 | // is positioned such that the first byte of the tag name (the "d" in "<div") | ||
844 | // has already been consumed. | ||
845 | func (z *Tokenizer) readTagName() { | ||
846 | z.data.start = z.raw.end - 1 | ||
847 | for { | ||
848 | c := z.readByte() | ||
849 | if z.err != nil { | ||
850 | z.data.end = z.raw.end | ||
851 | return | ||
852 | } | ||
853 | switch c { | ||
854 | case ' ', '\n', '\r', '\t', '\f': | ||
855 | z.data.end = z.raw.end - 1 | ||
856 | return | ||
857 | case '/', '>': | ||
858 | z.raw.end-- | ||
859 | z.data.end = z.raw.end | ||
860 | return | ||
861 | } | ||
862 | } | ||
863 | } | ||
864 | |||
865 | // readTagAttrKey sets z.pendingAttr[0] to the "k" in "<div k=v>". | ||
866 | // Precondition: z.err == nil. | ||
867 | func (z *Tokenizer) readTagAttrKey() { | ||
868 | z.pendingAttr[0].start = z.raw.end | ||
869 | for { | ||
870 | c := z.readByte() | ||
871 | if z.err != nil { | ||
872 | z.pendingAttr[0].end = z.raw.end | ||
873 | return | ||
874 | } | ||
875 | switch c { | ||
876 | case ' ', '\n', '\r', '\t', '\f', '/': | ||
877 | z.pendingAttr[0].end = z.raw.end - 1 | ||
878 | return | ||
879 | case '=', '>': | ||
880 | z.raw.end-- | ||
881 | z.pendingAttr[0].end = z.raw.end | ||
882 | return | ||
883 | } | ||
884 | } | ||
885 | } | ||
886 | |||
887 | // readTagAttrVal sets z.pendingAttr[1] to the "v" in "<div k=v>". | ||
888 | func (z *Tokenizer) readTagAttrVal() { | ||
889 | z.pendingAttr[1].start = z.raw.end | ||
890 | z.pendingAttr[1].end = z.raw.end | ||
891 | if z.skipWhiteSpace(); z.err != nil { | ||
892 | return | ||
893 | } | ||
894 | c := z.readByte() | ||
895 | if z.err != nil { | ||
896 | return | ||
897 | } | ||
898 | if c != '=' { | ||
899 | z.raw.end-- | ||
900 | return | ||
901 | } | ||
902 | if z.skipWhiteSpace(); z.err != nil { | ||
903 | return | ||
904 | } | ||
905 | quote := z.readByte() | ||
906 | if z.err != nil { | ||
907 | return | ||
908 | } | ||
909 | switch quote { | ||
910 | case '>': | ||
911 | z.raw.end-- | ||
912 | return | ||
913 | |||
914 | case '\'', '"': | ||
915 | z.pendingAttr[1].start = z.raw.end | ||
916 | for { | ||
917 | c := z.readByte() | ||
918 | if z.err != nil { | ||
919 | z.pendingAttr[1].end = z.raw.end | ||
920 | return | ||
921 | } | ||
922 | if c == quote { | ||
923 | z.pendingAttr[1].end = z.raw.end - 1 | ||
924 | return | ||
925 | } | ||
926 | } | ||
927 | |||
928 | default: | ||
929 | z.pendingAttr[1].start = z.raw.end - 1 | ||
930 | for { | ||
931 | c := z.readByte() | ||
932 | if z.err != nil { | ||
933 | z.pendingAttr[1].end = z.raw.end | ||
934 | return | ||
935 | } | ||
936 | switch c { | ||
937 | case ' ', '\n', '\r', '\t', '\f': | ||
938 | z.pendingAttr[1].end = z.raw.end - 1 | ||
939 | return | ||
940 | case '>': | ||
941 | z.raw.end-- | ||
942 | z.pendingAttr[1].end = z.raw.end | ||
943 | return | ||
944 | } | ||
945 | } | ||
946 | } | ||
947 | } | ||
948 | |||
949 | // Next scans the next token and returns its type. | ||
950 | func (z *Tokenizer) Next() TokenType { | ||
951 | z.raw.start = z.raw.end | ||
952 | z.data.start = z.raw.end | ||
953 | z.data.end = z.raw.end | ||
954 | if z.err != nil { | ||
955 | z.tt = ErrorToken | ||
956 | return z.tt | ||
957 | } | ||
958 | if z.rawTag != "" { | ||
959 | if z.rawTag == "plaintext" { | ||
960 | // Read everything up to EOF. | ||
961 | for z.err == nil { | ||
962 | z.readByte() | ||
963 | } | ||
964 | z.data.end = z.raw.end | ||
965 | z.textIsRaw = true | ||
966 | } else { | ||
967 | z.readRawOrRCDATA() | ||
968 | } | ||
969 | if z.data.end > z.data.start { | ||
970 | z.tt = TextToken | ||
971 | z.convertNUL = true | ||
972 | return z.tt | ||
973 | } | ||
974 | } | ||
975 | z.textIsRaw = false | ||
976 | z.convertNUL = false | ||
977 | |||
978 | loop: | ||
979 | for { | ||
980 | c := z.readByte() | ||
981 | if z.err != nil { | ||
982 | break loop | ||
983 | } | ||
984 | if c != '<' { | ||
985 | continue loop | ||
986 | } | ||
987 | |||
988 | // Check if the '<' we have just read is part of a tag, comment | ||
989 | // or doctype. If not, it's part of the accumulated text token. | ||
990 | c = z.readByte() | ||
991 | if z.err != nil { | ||
992 | break loop | ||
993 | } | ||
994 | var tokenType TokenType | ||
995 | switch { | ||
996 | case 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z': | ||
997 | tokenType = StartTagToken | ||
998 | case c == '/': | ||
999 | tokenType = EndTagToken | ||
1000 | case c == '!' || c == '?': | ||
1001 | // We use CommentToken to mean any of "<!--actual comments-->", | ||
1002 | // "<!DOCTYPE declarations>" and "<?xml processing instructions?>". | ||
1003 | tokenType = CommentToken | ||
1004 | default: | ||
1005 | // Reconsume the current character. | ||
1006 | z.raw.end-- | ||
1007 | continue | ||
1008 | } | ||
1009 | |||
1010 | // We have a non-text token, but we might have accumulated some text | ||
1011 | // before that. If so, we return the text first, and return the non- | ||
1012 | // text token on the subsequent call to Next. | ||
1013 | if x := z.raw.end - len("<a"); z.raw.start < x { | ||
1014 | z.raw.end = x | ||
1015 | z.data.end = x | ||
1016 | z.tt = TextToken | ||
1017 | return z.tt | ||
1018 | } | ||
1019 | switch tokenType { | ||
1020 | case StartTagToken: | ||
1021 | z.tt = z.readStartTag() | ||
1022 | return z.tt | ||
1023 | case EndTagToken: | ||
1024 | c = z.readByte() | ||
1025 | if z.err != nil { | ||
1026 | break loop | ||
1027 | } | ||
1028 | if c == '>' { | ||
1029 | // "</>" does not generate a token at all. Generate an empty comment | ||
1030 | // to allow passthrough clients to pick up the data using Raw. | ||
1031 | // Reset the tokenizer state and start again. | ||
1032 | z.tt = CommentToken | ||
1033 | return z.tt | ||
1034 | } | ||
1035 | if 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' { | ||
1036 | z.readTag(false) | ||
1037 | if z.err != nil { | ||
1038 | z.tt = ErrorToken | ||
1039 | } else { | ||
1040 | z.tt = EndTagToken | ||
1041 | } | ||
1042 | return z.tt | ||
1043 | } | ||
1044 | z.raw.end-- | ||
1045 | z.readUntilCloseAngle() | ||
1046 | z.tt = CommentToken | ||
1047 | return z.tt | ||
1048 | case CommentToken: | ||
1049 | if c == '!' { | ||
1050 | z.tt = z.readMarkupDeclaration() | ||
1051 | return z.tt | ||
1052 | } | ||
1053 | z.raw.end-- | ||
1054 | z.readUntilCloseAngle() | ||
1055 | z.tt = CommentToken | ||
1056 | return z.tt | ||
1057 | } | ||
1058 | } | ||
1059 | if z.raw.start < z.raw.end { | ||
1060 | z.data.end = z.raw.end | ||
1061 | z.tt = TextToken | ||
1062 | return z.tt | ||
1063 | } | ||
1064 | z.tt = ErrorToken | ||
1065 | return z.tt | ||
1066 | } | ||
1067 | |||
1068 | // Raw returns the unmodified text of the current token. Calling Next, Token, | ||
1069 | // Text, TagName or TagAttr may change the contents of the returned slice. | ||
1070 | func (z *Tokenizer) Raw() []byte { | ||
1071 | return z.buf[z.raw.start:z.raw.end] | ||
1072 | } | ||
1073 | |||
1074 | // convertNewlines converts "\r" and "\r\n" in s to "\n". | ||
1075 | // The conversion happens in place, but the resulting slice may be shorter. | ||
1076 | func convertNewlines(s []byte) []byte { | ||
1077 | for i, c := range s { | ||
1078 | if c != '\r' { | ||
1079 | continue | ||
1080 | } | ||
1081 | |||
1082 | src := i + 1 | ||
1083 | if src >= len(s) || s[src] != '\n' { | ||
1084 | s[i] = '\n' | ||
1085 | continue | ||
1086 | } | ||
1087 | |||
1088 | dst := i | ||
1089 | for src < len(s) { | ||
1090 | if s[src] == '\r' { | ||
1091 | if src+1 < len(s) && s[src+1] == '\n' { | ||
1092 | src++ | ||
1093 | } | ||
1094 | s[dst] = '\n' | ||
1095 | } else { | ||
1096 | s[dst] = s[src] | ||
1097 | } | ||
1098 | src++ | ||
1099 | dst++ | ||
1100 | } | ||
1101 | return s[:dst] | ||
1102 | } | ||
1103 | return s | ||
1104 | } | ||
1105 | |||
1106 | var ( | ||
1107 | nul = []byte("\x00") | ||
1108 | replacement = []byte("\ufffd") | ||
1109 | ) | ||
1110 | |||
1111 | // Text returns the unescaped text of a text, comment or doctype token. The | ||
1112 | // contents of the returned slice may change on the next call to Next. | ||
1113 | func (z *Tokenizer) Text() []byte { | ||
1114 | switch z.tt { | ||
1115 | case TextToken, CommentToken, DoctypeToken: | ||
1116 | s := z.buf[z.data.start:z.data.end] | ||
1117 | z.data.start = z.raw.end | ||
1118 | z.data.end = z.raw.end | ||
1119 | s = convertNewlines(s) | ||
1120 | if (z.convertNUL || z.tt == CommentToken) && bytes.Contains(s, nul) { | ||
1121 | s = bytes.Replace(s, nul, replacement, -1) | ||
1122 | } | ||
1123 | if !z.textIsRaw { | ||
1124 | s = unescape(s, false) | ||
1125 | } | ||
1126 | return s | ||
1127 | } | ||
1128 | return nil | ||
1129 | } | ||
1130 | |||
1131 | // TagName returns the lower-cased name of a tag token (the `img` out of | ||
1132 | // `<IMG SRC="foo">`) and whether the tag has attributes. | ||
1133 | // The contents of the returned slice may change on the next call to Next. | ||
1134 | func (z *Tokenizer) TagName() (name []byte, hasAttr bool) { | ||
1135 | if z.data.start < z.data.end { | ||
1136 | switch z.tt { | ||
1137 | case StartTagToken, EndTagToken, SelfClosingTagToken: | ||
1138 | s := z.buf[z.data.start:z.data.end] | ||
1139 | z.data.start = z.raw.end | ||
1140 | z.data.end = z.raw.end | ||
1141 | return lower(s), z.nAttrReturned < len(z.attr) | ||
1142 | } | ||
1143 | } | ||
1144 | return nil, false | ||
1145 | } | ||
1146 | |||
1147 | // TagAttr returns the lower-cased key and unescaped value of the next unparsed | ||
1148 | // attribute for the current tag token and whether there are more attributes. | ||
1149 | // The contents of the returned slices may change on the next call to Next. | ||
1150 | func (z *Tokenizer) TagAttr() (key, val []byte, moreAttr bool) { | ||
1151 | if z.nAttrReturned < len(z.attr) { | ||
1152 | switch z.tt { | ||
1153 | case StartTagToken, SelfClosingTagToken: | ||
1154 | x := z.attr[z.nAttrReturned] | ||
1155 | z.nAttrReturned++ | ||
1156 | key = z.buf[x[0].start:x[0].end] | ||
1157 | val = z.buf[x[1].start:x[1].end] | ||
1158 | return lower(key), unescape(convertNewlines(val), true), z.nAttrReturned < len(z.attr) | ||
1159 | } | ||
1160 | } | ||
1161 | return nil, nil, false | ||
1162 | } | ||
1163 | |||
1164 | // Token returns the next Token. The result's Data and Attr values remain valid | ||
1165 | // after subsequent Next calls. | ||
1166 | func (z *Tokenizer) Token() Token { | ||
1167 | t := Token{Type: z.tt} | ||
1168 | switch z.tt { | ||
1169 | case TextToken, CommentToken, DoctypeToken: | ||
1170 | t.Data = string(z.Text()) | ||
1171 | case StartTagToken, SelfClosingTagToken, EndTagToken: | ||
1172 | name, moreAttr := z.TagName() | ||
1173 | for moreAttr { | ||
1174 | var key, val []byte | ||
1175 | key, val, moreAttr = z.TagAttr() | ||
1176 | t.Attr = append(t.Attr, Attribute{"", atom.String(key), string(val)}) | ||
1177 | } | ||
1178 | if a := atom.Lookup(name); a != 0 { | ||
1179 | t.DataAtom, t.Data = a, a.String() | ||
1180 | } else { | ||
1181 | t.DataAtom, t.Data = 0, string(name) | ||
1182 | } | ||
1183 | } | ||
1184 | return t | ||
1185 | } | ||
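TagName and TagAttr let a caller read tag data without building a Token value for every element. A sketch of that lower-level style, extracting href values; the helper name collectLinks and the sample input are assumptions.

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

// collectLinks scans s for <a href="..."> values using the allocation-light
// TagName/TagAttr accessors instead of Token.
func collectLinks(s string) []string {
	var links []string
	z := html.NewTokenizer(strings.NewReader(s))
	for {
		tt := z.Next()
		if tt == html.ErrorToken {
			return links
		}
		if tt != html.StartTagToken && tt != html.SelfClosingTagToken {
			continue
		}
		name, hasAttr := z.TagName()
		if string(name) != "a" {
			continue
		}
		for hasAttr {
			var key, val []byte
			key, val, hasAttr = z.TagAttr()
			if string(key) == "href" {
				// Copy the value: the returned slice may change on the next Next call.
				links = append(links, string(val))
			}
		}
	}
}

func main() {
	fmt.Println(collectLinks(`<p><a href="/one">1</a> <a href="/two">2</a></p>`))
}
```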
1186 | |||
1187 | // SetMaxBuf sets a limit on the amount of data buffered during tokenization. | ||
1188 | // A value of 0 means unlimited. | ||
1189 | func (z *Tokenizer) SetMaxBuf(n int) { | ||
1190 | z.maxBuf = n | ||
1191 | } | ||
1192 | |||
1193 | // NewTokenizer returns a new HTML Tokenizer for the given Reader. | ||
1194 | // The input is assumed to be UTF-8 encoded. | ||
1195 | func NewTokenizer(r io.Reader) *Tokenizer { | ||
1196 | return NewTokenizerFragment(r, "") | ||
1197 | } | ||
1198 | |||
1199 | // NewTokenizerFragment returns a new HTML Tokenizer for the given Reader, for | ||
1200 | // tokenizing an existing element's InnerHTML fragment. contextTag is that | ||
1201 | // element's tag, such as "div" or "iframe". | ||
1202 | // | ||
1203 | // For example, how the InnerHTML "a<b" is tokenized depends on whether it is | ||
1204 | // for a <p> tag or a <script> tag. | ||
1205 | // | ||
1206 | // The input is assumed to be UTF-8 encoded. | ||
1207 | func NewTokenizerFragment(r io.Reader, contextTag string) *Tokenizer { | ||
1208 | z := &Tokenizer{ | ||
1209 | r: r, | ||
1210 | buf: make([]byte, 0, 4096), | ||
1211 | } | ||
1212 | if contextTag != "" { | ||
1213 | switch s := strings.ToLower(contextTag); s { | ||
1214 | case "iframe", "noembed", "noframes", "noscript", "plaintext", "script", "style", "title", "textarea", "xmp": | ||
1215 | z.rawTag = s | ||
1216 | } | ||
1217 | } | ||
1218 | return z | ||
1219 | } | ||
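
The doc comments in the vendored tokenizer above describe a zero-copy API: TagName and TagAttr return slices into the tokenizer's internal buffer, so their contents are only valid until the next call to Next. Below is a minimal sketch of driving that API, assuming the vendored golang.org/x/net/html package; the example input string is an arbitrary illustration and is not part of this diff.

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

func main() {
	// Hypothetical input, chosen to match the `<IMG SRC="foo">` example
	// in the TagName doc comment above.
	z := html.NewTokenizer(strings.NewReader(`<IMG SRC="foo" alt="bar">text`))
	for {
		tt := z.Next()
		if tt == html.ErrorToken {
			// z.Err() is io.EOF at end of input.
			return
		}
		if tt == html.StartTagToken || tt == html.SelfClosingTagToken {
			name, hasAttr := z.TagName() // lower-cased tag name, e.g. "img"
			fmt.Printf("tag: %s\n", name)
			for hasAttr {
				var key, val []byte
				key, val, hasAttr = z.TagAttr() // key lower-cased, val unescaped
				fmt.Printf("  %s=%q\n", key, val)
			}
		}
	}
}
```

Because the returned slices alias the tokenizer's buffer, copy them out (or use Token(), sketched next) if they need to outlive the current token.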
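
Token(), by contrast, copies the data, so its Data and Attr fields remain valid after later Next calls. A second sketch, again assuming the same vendored package, combines Token() with NewTokenizerFragment and SetMaxBuf; the "script" context tag and the 1 MB cap are illustrative choices, not values taken from this diff.

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/net/html"
)

func main() {
	// Per the NewTokenizerFragment doc comment, "a<b" inside a <script>
	// element is raw text rather than the start of a tag.
	z := html.NewTokenizerFragment(strings.NewReader("a<b"), "script")
	z.SetMaxBuf(1 << 20) // cap per-token buffering at ~1 MB; 0 would mean unlimited

	for {
		if z.Next() == html.ErrorToken {
			return
		}
		t := z.Token() // Data/Attr stay valid after subsequent Next calls
		fmt.Printf("%v: %q\n", t.Type, t.Data)
	}
}
```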
diff --git a/vendor/vendor.json b/vendor/vendor.json index 9a3f454..8b44fbd 100644 --- a/vendor/vendor.json +++ b/vendor/vendor.json | |||
@@ -1,6 +1,6 @@ | |||
1 | { | 1 | { |
2 | "comment": "", | 2 | "comment": "", |
3 | "ignore": "appengine test github.com/hashicorp/nomad/", | 3 | "ignore": "appengine test github.com/hashicorp/nomad/ github.com/hashicorp/terraform/backend", |
4 | "package": [ | 4 | "package": [ |
5 | { | 5 | { |
6 | "checksumSHA1": "MV5JueYPwNkLZ+KNqmDcNDhsKi4=", | 6 | "checksumSHA1": "MV5JueYPwNkLZ+KNqmDcNDhsKi4=", |
@@ -227,6 +227,12 @@ | |||
227 | "revisionTime": "2014-04-22T17:41:19Z" | 227 | "revisionTime": "2014-04-22T17:41:19Z" |
228 | }, | 228 | }, |
229 | { | 229 | { |
230 | "checksumSHA1": "OT4XN9z5k69e2RsMSpwW74B+yk4=", | ||
231 | "path": "github.com/blang/semver", | ||
232 | "revision": "2ee87856327ba09384cabd113bc6b5d174e9ec0f", | ||
233 | "revisionTime": "2017-07-27T06:48:18Z" | ||
234 | }, | ||
235 | { | ||
230 | "checksumSHA1": "dvabztWVQX8f6oMLRyv4dLH+TGY=", | 236 | "checksumSHA1": "dvabztWVQX8f6oMLRyv4dLH+TGY=", |
231 | "path": "github.com/davecgh/go-spew/spew", | 237 | "path": "github.com/davecgh/go-spew/spew", |
232 | "revision": "346938d642f2ec3594ed81d874461961cd0faa76", | 238 | "revision": "346938d642f2ec3594ed81d874461961cd0faa76", |
@@ -252,6 +258,12 @@ | |||
252 | "revision": "7554cd9344cec97297fa6649b055a8c98c2a1e55" | 258 | "revision": "7554cd9344cec97297fa6649b055a8c98c2a1e55" |
253 | }, | 259 | }, |
254 | { | 260 | { |
261 | "checksumSHA1": "b8F628srIitj5p7Y130xc9k0QWs=", | ||
262 | "path": "github.com/hashicorp/go-cleanhttp", | ||
263 | "revision": "3573b8b52aa7b37b9358d966a898feb387f62437", | ||
264 | "revisionTime": "2017-02-11T01:34:15Z" | ||
265 | }, | ||
266 | { | ||
255 | "checksumSHA1": "nsL2kI426RMuq1jw15e7igFqdIY=", | 267 | "checksumSHA1": "nsL2kI426RMuq1jw15e7igFqdIY=", |
256 | "path": "github.com/hashicorp/go-getter", | 268 | "path": "github.com/hashicorp/go-getter", |
257 | "revision": "c3d66e76678dce180a7b452653472f949aedfbcd", | 269 | "revision": "c3d66e76678dce180a7b452653472f949aedfbcd", |
@@ -370,116 +382,130 @@ | |||
370 | "revisionTime": "2015-06-09T07:04:31Z" | 382 | "revisionTime": "2015-06-09T07:04:31Z" |
371 | }, | 383 | }, |
372 | { | 384 | { |
373 | "checksumSHA1": "BcxYPk5ME2ZyrHS1yK7gK9mzS1A=", | 385 | "checksumSHA1": "KPrCMDPNcLmO7K6xPcJSl86LwPk=", |
374 | "path": "github.com/hashicorp/terraform/config", | 386 | "path": "github.com/hashicorp/terraform/config", |
375 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 387 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
376 | "revisionTime": "2017-06-08T00:14:54Z", | 388 | "revisionTime": "2017-08-02T18:39:14Z", |
377 | "version": "v0.9.8", | 389 | "version": "v0.10.0", |
378 | "versionExact": "v0.9.8" | 390 | "versionExact": "v0.10.0" |
379 | }, | 391 | }, |
380 | { | 392 | { |
381 | "checksumSHA1": "YiREjXkb7CDMZuUmkPGK0yySe8A=", | 393 | "checksumSHA1": "uPCJ6seQo9kvoNSfwNWKX9KzVMk=", |
382 | "path": "github.com/hashicorp/terraform/config/module", | 394 | "path": "github.com/hashicorp/terraform/config/module", |
383 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 395 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
384 | "revisionTime": "2017-06-08T00:14:54Z", | 396 | "revisionTime": "2017-08-02T18:39:14Z", |
385 | "version": "v0.9.8", | 397 | "version": "v0.10.0", |
386 | "versionExact": "v0.9.8" | 398 | "versionExact": "v0.10.0" |
387 | }, | 399 | }, |
388 | { | 400 | { |
389 | "checksumSHA1": "w+l+UGTmwYNJ+L0p2vTd6+yqjok=", | 401 | "checksumSHA1": "w+l+UGTmwYNJ+L0p2vTd6+yqjok=", |
390 | "path": "github.com/hashicorp/terraform/dag", | 402 | "path": "github.com/hashicorp/terraform/dag", |
391 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 403 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
392 | "revisionTime": "2017-06-08T00:14:54Z", | 404 | "revisionTime": "2017-08-02T18:39:14Z", |
393 | "version": "v0.9.8", | 405 | "version": "v0.10.0", |
394 | "versionExact": "v0.9.8" | 406 | "versionExact": "v0.10.0" |
395 | }, | 407 | }, |
396 | { | 408 | { |
397 | "checksumSHA1": "p4y7tbu9KD/3cKQKe92I3DyjgRc=", | 409 | "checksumSHA1": "P8gNPDuOzmiK4Lz9xG7OBy4Rlm8=", |
398 | "path": "github.com/hashicorp/terraform/flatmap", | 410 | "path": "github.com/hashicorp/terraform/flatmap", |
399 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 411 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
400 | "revisionTime": "2017-06-08T00:14:54Z", | 412 | "revisionTime": "2017-08-02T18:39:14Z", |
401 | "version": "v0.9.8", | 413 | "version": "v0.10.0", |
402 | "versionExact": "v0.9.8" | 414 | "versionExact": "v0.10.0" |
403 | }, | 415 | }, |
404 | { | 416 | { |
405 | "checksumSHA1": "uT6Q9RdSRAkDjyUgQlJ2XKJRab4=", | 417 | "checksumSHA1": "uT6Q9RdSRAkDjyUgQlJ2XKJRab4=", |
406 | "path": "github.com/hashicorp/terraform/helper/config", | 418 | "path": "github.com/hashicorp/terraform/helper/config", |
407 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 419 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
408 | "revisionTime": "2017-06-08T00:14:54Z", | 420 | "revisionTime": "2017-08-02T18:39:14Z", |
409 | "version": "v0.9.8", | 421 | "version": "v0.10.0", |
410 | "versionExact": "v0.9.8" | 422 | "versionExact": "v0.10.0" |
411 | }, | 423 | }, |
412 | { | 424 | { |
413 | "checksumSHA1": "Vbo55GDzPgG/L/+W2pcvDhxrPZc=", | 425 | "checksumSHA1": "Vbo55GDzPgG/L/+W2pcvDhxrPZc=", |
414 | "path": "github.com/hashicorp/terraform/helper/experiment", | 426 | "path": "github.com/hashicorp/terraform/helper/experiment", |
415 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 427 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
416 | "revisionTime": "2017-06-08T00:14:54Z", | 428 | "revisionTime": "2017-08-02T18:39:14Z", |
417 | "version": "v0.9.8", | 429 | "version": "v0.10.0", |
418 | "versionExact": "v0.9.8" | 430 | "versionExact": "v0.10.0" |
419 | }, | 431 | }, |
420 | { | 432 | { |
421 | "checksumSHA1": "BmIPKTr0zDutSJdyq7pYXrK1I3E=", | 433 | "checksumSHA1": "BmIPKTr0zDutSJdyq7pYXrK1I3E=", |
422 | "path": "github.com/hashicorp/terraform/helper/hashcode", | 434 | "path": "github.com/hashicorp/terraform/helper/hashcode", |
423 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 435 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
424 | "revisionTime": "2017-06-08T00:14:54Z", | 436 | "revisionTime": "2017-08-02T18:39:14Z", |
425 | "version": "v0.9.8", | 437 | "version": "v0.10.0", |
426 | "versionExact": "v0.9.8" | 438 | "versionExact": "v0.10.0" |
427 | }, | 439 | }, |
428 | { | 440 | { |
429 | "checksumSHA1": "B267stWNQd0/pBTXHfI/tJsxzfc=", | 441 | "checksumSHA1": "B267stWNQd0/pBTXHfI/tJsxzfc=", |
430 | "path": "github.com/hashicorp/terraform/helper/hilmapstructure", | 442 | "path": "github.com/hashicorp/terraform/helper/hilmapstructure", |
431 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 443 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
432 | "revisionTime": "2017-06-08T00:14:54Z", | 444 | "revisionTime": "2017-08-02T18:39:14Z", |
433 | "version": "v0.9.8", | 445 | "version": "v0.10.0", |
434 | "versionExact": "v0.9.8" | 446 | "versionExact": "v0.10.0" |
435 | }, | 447 | }, |
436 | { | 448 | { |
437 | "checksumSHA1": "2wJa9F3BGlbe2DNqH5lb5POayRI=", | 449 | "checksumSHA1": "2wJa9F3BGlbe2DNqH5lb5POayRI=", |
438 | "path": "github.com/hashicorp/terraform/helper/logging", | 450 | "path": "github.com/hashicorp/terraform/helper/logging", |
439 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 451 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
440 | "revisionTime": "2017-06-08T00:14:54Z", | 452 | "revisionTime": "2017-08-02T18:39:14Z", |
441 | "version": "v0.9.8", | 453 | "version": "v0.10.0", |
442 | "versionExact": "v0.9.8" | 454 | "versionExact": "v0.10.0" |
443 | }, | 455 | }, |
444 | { | 456 | { |
445 | "checksumSHA1": "8VL90fHe5YRasHcZwv2q2qms/Jo=", | 457 | "checksumSHA1": "dhU2woQaSEI2OnbYLdkHxf7/nu8=", |
446 | "path": "github.com/hashicorp/terraform/helper/resource", | 458 | "path": "github.com/hashicorp/terraform/helper/resource", |
447 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 459 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
448 | "revisionTime": "2017-06-08T00:14:54Z", | 460 | "revisionTime": "2017-08-02T18:39:14Z", |
449 | "version": "v0.9.8", | 461 | "version": "v0.10.0", |
450 | "versionExact": "v0.9.8" | 462 | "versionExact": "v0.10.0" |
451 | }, | 463 | }, |
452 | { | 464 | { |
453 | "checksumSHA1": "bgaeB6ivKIK5H+7JCsp7w8aAdAg=", | 465 | "checksumSHA1": "0smlb90amL15c/6nWtW4DV6Lqh8=", |
454 | "path": "github.com/hashicorp/terraform/helper/schema", | 466 | "path": "github.com/hashicorp/terraform/helper/schema", |
455 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 467 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
456 | "revisionTime": "2017-06-08T00:14:54Z", | 468 | "revisionTime": "2017-08-02T18:39:14Z", |
457 | "version": "v0.9.8", | 469 | "version": "v0.10.0", |
458 | "versionExact": "v0.9.8" | 470 | "versionExact": "v0.10.0" |
459 | }, | 471 | }, |
460 | { | 472 | { |
461 | "checksumSHA1": "oLui7dYxhzfAczwwdNZDm4tzHtk=", | 473 | "checksumSHA1": "1yCGh/Wl4H4ODBBRmIRFcV025b0=", |
462 | "path": "github.com/hashicorp/terraform/helper/shadow", | 474 | "path": "github.com/hashicorp/terraform/helper/shadow", |
463 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 475 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
464 | "revisionTime": "2017-06-08T00:14:54Z", | 476 | "revisionTime": "2017-08-02T18:39:14Z", |
465 | "version": "v0.9.8", | 477 | "version": "v0.10.0", |
466 | "versionExact": "v0.9.8" | 478 | "versionExact": "v0.10.0" |
467 | }, | 479 | }, |
468 | { | 480 | { |
469 | "checksumSHA1": "6AA7ZAzswfl7SOzleP6e6he0lq4=", | 481 | "checksumSHA1": "yFWmdS6yEJZpRJzUqd/mULqCYGk=", |
482 | "path": "github.com/hashicorp/terraform/moduledeps", | ||
483 | "revision": "5bcc1bae5925f44208a83279b6d4d250da01597b", | ||
484 | "revisionTime": "2017-08-09T21:54:59Z" | ||
485 | }, | ||
486 | { | ||
487 | "checksumSHA1": "4ODNVUds3lyBf7gV02X1EeYR4GA=", | ||
470 | "path": "github.com/hashicorp/terraform/plugin", | 488 | "path": "github.com/hashicorp/terraform/plugin", |
471 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 489 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
472 | "revisionTime": "2017-06-08T00:14:54Z", | 490 | "revisionTime": "2017-08-02T18:39:14Z", |
473 | "version": "v0.9.8", | 491 | "version": "v0.10.0", |
474 | "versionExact": "v0.9.8" | 492 | "versionExact": "v0.10.0" |
475 | }, | 493 | }, |
476 | { | 494 | { |
477 | "checksumSHA1": "GfGSXndpVIh9sSeNf+b1TjxBEpQ=", | 495 | "checksumSHA1": "mujz3BDg1X82ynvJncCFUT6/7XI=", |
496 | "path": "github.com/hashicorp/terraform/plugin/discovery", | ||
497 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", | ||
498 | "revisionTime": "2017-08-02T18:39:14Z", | ||
499 | "version": "v0.10.0", | ||
500 | "versionExact": "v0.10.0" | ||
501 | }, | ||
502 | { | ||
503 | "checksumSHA1": "ksfNQjZs/6llziARojABd6iuvdw=", | ||
478 | "path": "github.com/hashicorp/terraform/terraform", | 504 | "path": "github.com/hashicorp/terraform/terraform", |
479 | "revision": "8d560482c34e865458fd884cb0790b4f73f09ad1", | 505 | "revision": "2041053ee9444fa8175a298093b55a89586a1823", |
480 | "revisionTime": "2017-06-08T00:14:54Z", | 506 | "revisionTime": "2017-08-02T18:39:14Z", |
481 | "version": "v0.9.8", | 507 | "version": "v0.10.0", |
482 | "versionExact": "v0.9.8" | 508 | "versionExact": "v0.10.0" |
483 | }, | 509 | }, |
484 | { | 510 | { |
485 | "checksumSHA1": "ZhK6IO2XN81Y+3RAjTcVm1Ic7oU=", | 511 | "checksumSHA1": "ZhK6IO2XN81Y+3RAjTcVm1Ic7oU=", |
@@ -548,6 +574,60 @@ | |||
548 | "revisionTime": "2016-10-31T15:37:30Z" | 574 | "revisionTime": "2016-10-31T15:37:30Z" |
549 | }, | 575 | }, |
550 | { | 576 | { |
577 | "checksumSHA1": "TT1rac6kpQp2vz24m5yDGUNQ/QQ=", | ||
578 | "path": "golang.org/x/crypto/cast5", | ||
579 | "revision": "b176d7def5d71bdd214203491f89843ed217f420", | ||
580 | "revisionTime": "2017-07-23T04:49:35Z" | ||
581 | }, | ||
582 | { | ||
583 | "checksumSHA1": "IIhFTrLlmlc6lEFSitqi4aw2lw0=", | ||
584 | "path": "golang.org/x/crypto/openpgp", | ||
585 | "revision": "b176d7def5d71bdd214203491f89843ed217f420", | ||
586 | "revisionTime": "2017-07-23T04:49:35Z" | ||
587 | }, | ||
588 | { | ||
589 | "checksumSHA1": "olOKkhrdkYQHZ0lf1orrFQPQrv4=", | ||
590 | "path": "golang.org/x/crypto/openpgp/armor", | ||
591 | "revision": "b176d7def5d71bdd214203491f89843ed217f420", | ||
592 | "revisionTime": "2017-07-23T04:49:35Z" | ||
593 | }, | ||
594 | { | ||
595 | "checksumSHA1": "eo/KtdjieJQXH7Qy+faXFcF70ME=", | ||
596 | "path": "golang.org/x/crypto/openpgp/elgamal", | ||
597 | "revision": "b176d7def5d71bdd214203491f89843ed217f420", | ||
598 | "revisionTime": "2017-07-23T04:49:35Z" | ||
599 | }, | ||
600 | { | ||
601 | "checksumSHA1": "rlxVSaGgqdAgwblsErxTxIfuGfg=", | ||
602 | "path": "golang.org/x/crypto/openpgp/errors", | ||
603 | "revision": "b176d7def5d71bdd214203491f89843ed217f420", | ||
604 | "revisionTime": "2017-07-23T04:49:35Z" | ||
605 | }, | ||
606 | { | ||
607 | "checksumSHA1": "Pq88+Dgh04UdXWZN6P+bLgYnbRc=", | ||
608 | "path": "golang.org/x/crypto/openpgp/packet", | ||
609 | "revision": "b176d7def5d71bdd214203491f89843ed217f420", | ||
610 | "revisionTime": "2017-07-23T04:49:35Z" | ||
611 | }, | ||
612 | { | ||
613 | "checksumSHA1": "s2qT4UwvzBSkzXuiuMkowif1Olw=", | ||
614 | "path": "golang.org/x/crypto/openpgp/s2k", | ||
615 | "revision": "b176d7def5d71bdd214203491f89843ed217f420", | ||
616 | "revisionTime": "2017-07-23T04:49:35Z" | ||
617 | }, | ||
618 | { | ||
619 | "checksumSHA1": "vqc3a+oTUGX8PmD0TS+qQ7gmN8I=", | ||
620 | "path": "golang.org/x/net/html", | ||
621 | "revision": "1c05540f6879653db88113bc4a2b70aec4bd491f", | ||
622 | "revisionTime": "2017-08-04T00:04:37Z" | ||
623 | }, | ||
624 | { | ||
625 | "checksumSHA1": "z79z5msRzgU48FCZxSuxfU8b4rs=", | ||
626 | "path": "golang.org/x/net/html/atom", | ||
627 | "revision": "1c05540f6879653db88113bc4a2b70aec4bd491f", | ||
628 | "revisionTime": "2017-08-04T00:04:37Z" | ||
629 | }, | ||
630 | { | ||
551 | "checksumSHA1": "wICWAGQfZcHD2y0dHesz9R2YSiw=", | 631 | "checksumSHA1": "wICWAGQfZcHD2y0dHesz9R2YSiw=", |
552 | "path": "k8s.io/kubernetes/pkg/apimachinery", | 632 | "path": "k8s.io/kubernetes/pkg/apimachinery", |
553 | "revision": "b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", | 633 | "revision": "b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", |