
Commit 7e89039

adding discord notes from Alex on how we might be able to leverage CloudKit (#112)
* adding discord notes from Alex on how we might be able to leverage CloudKit fairly directly
* minor doc updates
1 parent a2cda8b commit 7e89039


2 files changed: +27 -5 lines changed


Sources/Automerge/Automerge.docc/Curation/Document.md

Lines changed: 9 additions & 5 deletions
@@ -8,11 +8,6 @@
 - ``init(_:logLevel:)``
 - ``LogVerbosity``
 
-### Transfering Documents
-
-- ``Document/transferRepresentation``
-- ``Automerge/UniformTypeIdentifiers/UTType/automerge``
-
 ### Inspecting Documents
 
 - ``actor``

@@ -119,3 +114,12 @@
 - ``generateSyncMessage(state:)``
 - ``receiveSyncMessage(state:message:)``
 - ``receiveSyncMessageWithPatches(state:message:)``
+
+### Observing Documents
+
+- ``objectWillChange``
+
+### Transfering Documents
+
+- ``Document/transferRepresentation``
+- ``Automerge/UniformTypeIdentifiers/UTType/automerge``

notes/CloudKitIntegration.md

Lines changed: 18 additions & 0 deletions
# Automerge integration with CloudKit
## alexg — Feb 14, 2024 at 1:25 PM
I have had some thoughts about how CloudKit could be used for application sync.
I think that if we use effectively the same scheme we use for managing concurrent changes to storage in automerge-repo, it should be possible.
The requirements of the storage layer are that it provide a key/value interface with byte arrays as values and a range query.
I believe both of these requirements are satisfied by CloudKit.
The way this works is that every time a change is made to a document you write the new change to a key of the form `<document ID>/incremental/<change hash>`.
You can then load the document by querying all the keys that begin with `<document ID>/incremental/`, concatenating the bytes of all those changes, and loading them into an automerge document.
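As a minimal sketch of this key scheme, using an in-memory dictionary in place of the CloudKit-backed store (all names here, such as `KeyValueStore`, `writeIncremental`, and `loadIncrementalBytes`, are illustrative assumptions, not Automerge or CloudKit API):

```swift
// An in-memory stand-in for the storage layer the notes describe:
// a key/value interface with byte-array values plus a prefix (range) query.
struct KeyValueStore {
    var entries: [String: [UInt8]] = [:]

    mutating func put(_ key: String, _ value: [UInt8]) {
        entries[key] = value
    }

    // The "range query" requirement: every value whose key shares a prefix,
    // returned in sorted-key order so the result is deterministic.
    func values(withPrefix prefix: String) -> [[UInt8]] {
        entries.keys.sorted()
            .filter { $0.hasPrefix(prefix) }
            .map { entries[$0]! }
    }
}

// Write each change under `<document ID>/incremental/<change hash>` ...
func writeIncremental(_ store: inout KeyValueStore,
                      docId: String, changeHash: String, change: [UInt8]) {
    store.put("\(docId)/incremental/\(changeHash)", change)
}

// ... and load by concatenating every change under that prefix. A real
// integration would hand these bytes to Automerge's document loading.
func loadIncrementalBytes(_ store: KeyValueStore, docId: String) -> [UInt8] {
    store.values(withPrefix: "\(docId)/incremental/").flatMap { $0 }
}
```

In a real integration each entry would be a CloudKit record rather than a dictionary key, but the prefix-query shape stays the same.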
However, this would not take advantage of the compaction which save() provides.
To compact, we first save the document and write the output to `<document ID>/snapshot/<hash of the heads of the document>`, then delete all the keys that were used when loading the document.
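A rough sketch of that compaction step, again over a plain dictionary standing in for the CloudKit store (`compact`, `savedBytes`, and `headsHash` are assumed stand-ins for the output of save() and the hash of the document's heads):

```swift
// Stand-in store, pre-seeded with two incremental changes for "doc1".
var store: [String: [UInt8]] = [
    "doc1/incremental/aaa": [1, 2],
    "doc1/incremental/bbb": [3],
]

func compact(docId: String,
             savedBytes: [UInt8],    // output of save() on the loaded document
             headsHash: String,      // hash of the document's heads
             loadedKeys: [String]) { // the keys we read while loading
    // 1. Write the compacted snapshot first ...
    store["\(docId)/snapshot/\(headsHash)"] = savedBytes
    // 2. ... then delete only the keys we already loaded, so a change written
    //    concurrently by another process is never deleted before it has been
    //    folded into some snapshot.
    for key in loadedKeys {
        store.removeValue(forKey: key)
    }
}
```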
This is safe in the face of concurrent compacting operations because:

a) we only delete changes we have already written out, so no data is lost, and
b) if two processes are racing to snapshot, then they are either compacting the same data, in which case they will write the same bytes to the same key, or they are compacting different data, in which case the key they write to will be different.
Loading now becomes querying for all keys beginning with `<document ID>/`, concatenating them, and loading the result.
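Assuming, as the note does, that the concatenated bytes of snapshots and incremental changes can be loaded together, the combined load path over the same kind of in-memory stand-in might look like:

```swift
// After a compaction the store can hold both snapshot chunks and incremental
// changes; loading concatenates everything under the document's prefix.
// Sorting the keys only makes the byte order deterministic; the relative
// order of snapshot and change bytes is not significant to the loader.
let store: [String: [UInt8]] = [
    "doc1/snapshot/h1": [9, 9],
    "doc1/incremental/ccc": [4, 5],
    "doc2/incremental/zzz": [7], // another document, must be excluded
]

func loadBytes(docId: String) -> [UInt8] {
    store.keys.sorted()
        .filter { $0.hasPrefix("\(docId)/") }
        .flatMap { store[$0]! }
}
```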
