FPHam committed on
Commit 8b873ed · verified · 1 Parent(s): be8a236

Upload 3 files

Files changed (3)
  1. last.json +79 -0
  2. script.py +1692 -0
  3. state.json +15 -0
last.json ADDED
@@ -0,0 +1,79 @@
1
+ [
2
+ {
3
+ "outline": "Beat 1",
4
+ "outline_title": "Intro",
5
+ "prompt": "Write a paragraph explaining what story beats are",
6
+ "scenetext_v1": "In the world of fiction writing, a story beat refers to a plot point or action that propels the narrative forward and affects the overall story arc. Beats serve as the building blocks of a plot, breaking down the story into distinct moments of development and change. These beats can range from small, subtle shifts in character relationships to major, game-changing events that alter the course of the entire story.",
7
+ "scenetext_v2": "Story beats are the building blocks of a narrative, representing individual moments or scenes that, when linked together, form the larger story. \nBeats can be thought of as paragraphs, scenes, or even smaller moments within a scene, such as a character's inner monologue or a descriptive passage setting the atmosphere. By focusing on one beat at a time, writers can craft a rich, nuanced story that flows logically from one moment to the next, with each beat informing and influencing those that follow. ",
8
+ "scenetext_v3": "",
9
+ "version": "v2",
10
+ "notes": "",
11
+ "is_summary": false
12
+ },
13
+ {
14
+ "outline": "Beat 2",
15
+ "outline_title": "Guide",
16
+ "prompt": "Write a simple guide how to use StoryCrafter.",
17
+ "scenetext_v1": "**Welcome to StoryCrafter: A Guide to Crafting Your Story**\n\nStoryCrafter is a tool designed to help you write and generate stories beat by beat. With its intuitive interface and innovative features, you'll be able to craft compelling narratives that capture your readers' imaginations. Here's a step-by-step guide to get you started:\n\n1. **Add a New Beat**: Begin by creating a new beat, which is a short passage or paragraph that forms a key part of your story. This can be a scene, a character introduction, a plot twist, or any other important event that drives your narrative forward.\n\nNote: Beats are numbered automatically. You can rearrange beats and delete them, but their number will not change. The number is more like an ID of the beat than anything else so the software can locate it and not the order. \n\n2. **Write Prompt**: Write your beat prompt in the designated prompt area, bringing your idea to life with vivid descriptions, engaging dialogue, and well-crafted prose. Focus on getting the essence of the scene down.\n\n3. **Generate Text**: Once you've written your prompt, press Generate. This will generate your beat text and make it part of your ongoing narrative.\n\n4. **Review and Edit**: Take a moment to review your beat, making any necessary edits to ensure it flows smoothly and effectively conveys the intended message. Doing edits early will also help further text generations to write in the same style and context. You can of course edit your beat at any time, and the changes will be reflected in your story.\n\n5. **Multiple Versions**: StoryCrafter allows you to create multiple versions of each beat, giving you the flexibility to explore different approaches and styles. Try out different tones, perspectives, or plot directions, and see which one works best for your story. \nJust Select v1, v2 or v3 version and generate or write there. The selected version will be also the one in the full story.",
18
+ "scenetext_v2": "",
19
+ "scenetext_v3": "",
20
+ "version": "v1",
21
+ "notes": "",
22
+ "is_summary": false
23
+ },
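(Editor's illustration, not part of the uploaded files: each beat described above is stored in last.json as a small record. A minimal sketch of such a record and of reading its currently selected version; the helpers `new_beat` and `active_text` are invented for this example.)

```python
# Sketch of the beat record used by StoryCrafter's last.json (illustrative only).

def new_beat(number: int, title: str, prompt: str) -> dict:
    """Create an empty beat; the number is a stable ID, not a position."""
    return {
        "outline": f"Beat {number}",          # ID used to locate the beat
        "outline_title": title,               # display title, freely editable
        "prompt": prompt,                     # instruction used to generate the text
        "scenetext_v1": "", "scenetext_v2": "", "scenetext_v3": "",
        "version": "v1",                      # which version is active in the full story
        "notes": "",                          # Future Cues for the beats that follow
        "is_summary": False,
    }

def active_text(beat: dict) -> str:
    """Return the text of the currently selected version."""
    return beat["scenetext_" + beat["version"]]

beat = new_beat(1, "Intro", "Write a paragraph explaining what story beats are")
beat["scenetext_v1"] = "Story beats are the building blocks of a narrative..."
print(active_text(beat))
```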
24
+ {
25
+ "outline": "Beat 3",
26
+ "outline_title": "Future Cues",
27
+ "prompt": "Write a short guide how future cues work.",
28
+ "scenetext_v1": "**Unlocking the Power of Future Cues in StoryCrafter**\n\nFuture Cues are a powerful feature in StoryCrafter that allow you to shape the narrative trajectory of your story, ensuring consistency and coherence as you build upon your beats. Here's a concise guide on how to harness the potential of Future Cues:\n\n1. **Understanding Future Cues**: Future Cues are directives that apply to the beats that follow the current one. They help in maintaining continuity and can be used to introduce significant changes or reminders that should be considered in subsequent beats.\n\n2. **Adding Future Cues**: When you're adding or editing a beat, you can specify Future Cues. This could be anything from a change in a character's appearance, a shift in location, or any other detail that will be relevant to the story moving forward.\n\n3. **Examples of Future Cues**:\n - Character Developments: \"From this point on, refer to John as having a beard.\"\n - Setting Changes: \"All future beats take place in the summer season.\"\n - Plot Twists: \"Remember, the main character has amnesia and won't recall events before this beat.\"\n\n4. **Using Future Cues in Beats**: As you generate new beats, StoryCrafter will take into account the Future Cues from previous beats. This means you can seamlessly continue your story, incorporating the changes and reminders you've set up.\n\n5. **Flexibility and Control**: The beauty of Future Cues lies in their flexibility. You can add, modify, or remove them as your story evolves, giving you complete control over the narrative's direction.\n\nBy incorporating Future Cues into your storytelling process, you can craft a narrative that's not only engaging but also rich in detail and consistency. This feature allows you to plan ahead, ensuring that your story unfolds in a way that's both surprising and logical.",
29
+ "scenetext_v2": "",
30
+ "scenetext_v3": "",
31
+ "version": "v1",
32
+ "notes": "",
33
+ "is_summary": false
34
+ },
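(Editor's illustration: a rough sketch of how a Future Cue can be carried into later beats. In this extension the cue is stored in a beat's notes field and injected as a "Note: ..." line when the following beats are generated, as script.py below does; `context_for` is an invented helper.)

```python
# Illustrative only: collect earlier beats plus any future cues they declared.

beats = [
    {"outline": "Beat 1", "text": "Anna walks into the salon.", "notes": ""},
    {"outline": "Beat 2", "text": "She leaves with a whole new look.",
     "notes": "From this point on, refer to Anna as having short pink hair."},
    {"outline": "Beat 3", "text": "", "notes": ""},
]

def context_for(beats, current_outline):
    """Build the context for the beat being generated from everything before it."""
    parts = []
    for beat in beats:
        if beat["outline"] == current_outline:
            break                                    # only beats before the current one
        parts.append(beat["text"])
        if beat["notes"]:
            parts.append("Note: " + beat["notes"])   # the cue applies from here on
    return "\n\n".join(parts)

print(context_for(beats, "Beat 3"))
```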
35
+ {
36
+ "outline": "Beat 4",
37
+ "outline_title": "Modes",
38
+ "prompt": "Write a paragraph describing Instruct Mode text generation (it uses instruct template of the model) and Narrative mode (which works like a notebook in LLM without using chat template).",
39
+ "scenetext_v1": "StoryCrafter offers two distinct modes for generating and crafting your story:\nInstruct Mode and Narrative Mode.\n\nInstruct Mode leverages the model's instruct template, allowing you to provide specific directives on how to generate text. For instance, you can instruct the model to \"Write a paragraph, describing the Anna's house in details.\"\n\nOn the other hand, Narrative Mode operates more like a traditional notebook, without the constraints of a chat template. This mode allows for a more organic writing experience, as the model generates text based on the context and writing style of your previous beats. You can still use the prompt to direct the text generation.\n\nBy switching between these two modes, you can harness the full potential of StoryCrafter, using Instruct Mode for targeted text generation and Narrative Mode for a more intuitive and creative writing experience.",
40
+ "scenetext_v2": "",
41
+ "scenetext_v3": "",
42
+ "version": "v1",
43
+ "notes": "",
44
+ "is_summary": false
45
+ },
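(Editor's illustration: a loose sketch of how the two modes shape what the model receives. Instruct mode turns prior beats into prompt/response turns that are rendered with the model's instruct template; Narrative mode concatenates the story text notebook-style and appends the prompt as an editor's note, mirroring generate_reply_wrapperMY and generate_reply_wrapperMY_NP in script.py below. Both helpers here are invented for the example.)

```python
# Illustrative only: two ways a beat prompt can reach the model.

def instruct_mode_messages(history_pairs, system, prompt):
    """Instruct mode: prior beats become user/assistant turns; the caller then
    renders this list with the model's instruction template."""
    messages = [{"role": "system", "content": system}]
    for user_msg, assistant_msg in history_pairs:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": prompt})
    return messages

def narrative_mode_prompt(system, story_so_far, prompt):
    """Narrative mode: plain text continuation, no chat template."""
    text = system + "\n\n" + story_so_far + "\n"
    if prompt:
        text += f"(Editor's Note: Continue writing the story using the following direction. {prompt})\n"
    return text
```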
46
+ {
47
+ "outline": "Beat 5",
48
+ "outline_title": "Final Draft",
49
+ "prompt": "In this chapter talk about the final draft - the full text tab. The full text is always dynamically generated from the beats (and their selected version) and the edits",
50
+ "scenetext_v1": "**Finalizing Your Masterpiece: The Full Text Tab**\n\nAs you've been crafting your story beat by beat, the Full Text tab has been waiting in the wings, ready to bring your entire narrative together. This tab is where the magic happens, as it dynamically generates the complete text of your story based on the beats you've created, the versions you've selected, and the edits you've made. Every time you make a change to a beat or switch between versions, the Full Text tab updates automatically, reflecting the current state of your story.\n\nThis tab is your chance to review your story in its complete form, ensuring that the pacing, plot, and character development all come together as intended.",
51
+ "scenetext_v2": "",
52
+ "scenetext_v3": "",
53
+ "version": "v1",
54
+ "notes": "",
55
+ "is_summary": false
56
+ },
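(Editor's illustration: the "always dynamically generated" full text is simply the selected version of every beat joined together, which is what generate_combined_text() in script.py below does. A standalone sketch; `combined_text` is an invented helper and the file path assumes the last.json uploaded here.)

```python
import json

def combined_text(beats):
    """Join the currently selected version of each beat with blank lines."""
    parts = (beat["scenetext_" + beat["version"]] for beat in beats)
    return "\n\n".join(p for p in parts if p).strip()

# Rebuild the full draft from the beat file; edits and version switches are
# picked up automatically because nothing is cached.
with open("last.json", "r", encoding="utf-8") as f:
    print(combined_text(json.load(f))[:200])
```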
57
+ {
58
+ "outline": "Beat 6",
59
+ "outline_title": "Lore book",
60
+ "prompt": "Explain how lore book works (use ideas from SillyTavern - basically a keyword will trigger a memory)",
61
+ "scenetext_v1": "**Unveiling the Lore Book: A Dynamic Story Companion**\n\nWithin StoryCrafter, the Lore Book acts as a treasured companion, holding the memories and lore of your story. This tool operates on a keyword-based system, where specific words or phrases in the prompt can trigger the recall of previously established facts, characters, settings, or events. \n\nHere's how it works:\n\nAs you write the prompt, the Lore Book associates keywords with memories. These keywords can be characters' names, locations, magical items, or any other significant element within your story.\n\nFor instance, if you mention a character's name in the prompt, the Lore Book can provide context, helping you maintain consistency.\n\nIt can also suggest connections between different elements of your story, helping you to weave a richer, more complex narrative. \n\nLorebook format:\nkeyword:\nmemory\n\nYou can use multiple keywords separated by commas (for example, a character's first and last name - either one will trigger the memory)\n\nrimmer, arnold:\nArnold Judas Rimmer - A hologram of a deceased crew member, painfully neurotic, insufferably pompous, and obsessed with climbing the ranks of the Space Corps despite being utterly incompetent. Known for his pedantic obsession with Space Corps directives and his strained relationship with Lister.",
62
+ "scenetext_v2": "",
63
+ "scenetext_v3": "",
64
+ "version": "v1",
65
+ "notes": "",
66
+ "is_summary": false
67
+ },
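(Editor's illustration: the keyword trigger comes down to a case-insensitive substring check of each keyword against the prompt, the same idea as the dynamic_lore matching in script.py below; `triggered_memories` is an invented helper and the sample entry is shortened from the beat above.)

```python
# Illustrative keyword-triggered memory lookup.

lorebook = [
    {"keywords": "rimmer,arnold",
     "memory": "Arnold Judas Rimmer - A hologram of a deceased crew member, "
               "painfully neurotic and obsessed with Space Corps directives."},
]

def triggered_memories(prompt, entries):
    """Return every memory whose keywords appear in the prompt (case-insensitive)."""
    prompt_lower = prompt.lower()
    hits = []
    for entry in entries:
        for keyword in entry["keywords"].lower().split(","):
            if keyword.strip() and keyword.strip() in prompt_lower:
                hits.append(entry["memory"])
                break                       # one match per entry is enough
    return hits

print(triggered_memories("Write a scene where Rimmer quotes a Space Corps directive", lorebook))
```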
68
+ {
69
+ "outline": "Beat 7",
70
+ "outline_title": "Story Memory",
71
+ "prompt": "Rewrite the following text:\nSettings - in the Settings tab (below the prompt) you can set how many beats from the last will be included in the prompt text - using too many will slow down the process. If you need more consistency, you might ask for a summary and then add the summary to the World (in the Lore book tab)",
72
+ "scenetext_v1": "**Tuning Your Story's Memory: The Settings Tab**\n\nLocated conveniently below the prompt area, the Settings tab offers a crucial tool for managing the scope of your story's recall. Here, you can determine how many previous beats will be included in the prompt text, effectively controlling the amount of context that influences the generation of new beats.\n\nIt's essential to strike a balance, as including too many beats can significantly slow down the storytelling process. If you find yourself needing more consistency or a broader understanding of your narrative's progression, consider an alternative approach. You can request a summary of your story thus far, and then add this summary to the World section within the Lore Book tab.\n\nBy doing so, you're not only ensuring that your story's core elements are preserved but also creating a rich tapestry of lore that can be drawn upon as you continue to craft your narrative. \n\nFP Note: :LOL, llama totally talks like ChatGPT... \"rich tapestry\".... guess what they used as training data, hahahaha",
73
+ "scenetext_v2": "",
74
+ "scenetext_v3": "",
75
+ "version": "v1",
76
+ "notes": "",
77
+ "is_summary": false
78
+ }
79
+ ]
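(Editor's illustration for Beat 7 above: the history setting simply caps how many of the most recent beats are fed back as context, roughly what generate_combined_text_until_current_with_history() in script.py below does; `recent_context` is an invented helper.)

```python
# Illustrative only: keep at most the last `max_last` beats before the current one.

def recent_context(beats, current_outline, max_last):
    window = []
    for beat in beats:
        if beat["outline"] == current_outline:
            break                                    # stop at the beat being generated
        window.append(beat["scenetext_" + beat["version"]])
        if len(window) > max_last:
            window.pop(0)                            # drop the oldest beat over the limit
    return "\n\n".join(window)

# e.g. recent_context(beats, "Beat 7", max_last=5) with beats loaded from last.json
```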
script.py ADDED
@@ -0,0 +1,1692 @@
1
+ import gradio as gr
2
+ import modules.shared as shared
3
+ from pathlib import Path
4
+ import re
5
+ import json
6
+ from functools import partial
7
+ from modules.text_generation import stop_everything_event
8
+ from modules import chat
9
+ from modules import ui as main_ui
10
+ from modules.utils import gradio
11
+ from modules.extensions import apply_extensions
12
+ import random
13
+
14
+ right_symbol = '\U000027A1'
15
+ left_symbol = '\U00002B05'
16
+ refresh_symbol = '\U0001f504' # 🔄
17
+
18
+ def atoi(text):
19
+ return int(text) if text.isdigit() else text.lower()
20
+
21
+ def natural_keys(text):
22
+ return [atoi(c) for c in re.split(r'(\d+)', text)]
23
+
24
+ def get_file_path(filename):
25
+ return "extensions/StoryCrafter/"+filename
26
+
27
+ last_save = get_file_path("last.json")
28
+ save_proj_path = get_file_path("Projects")
29
+ save_proj_path_txt = get_file_path("Text")
30
+ state_save = get_file_path("state.json")
31
+
32
+ params = {
33
+ "display_name": "StoryCrafter",
34
+ "is_tab": True,
35
+ "selectA": [0,0],
36
+ 'projectname':"temp_project",
37
+ 'auto_clear': True,
38
+ 'include_history': True,
39
+ 'include_history_nr':5,
40
+ 'lorebook':'',
41
+ 'system':'You are an experienced fiction writer. Develop the plot slowly. Describe all actions in full, elaborate and vivid detail.',
42
+ 'world':''
43
+
44
+ }
45
+
46
+
47
+
48
+ help_str = """
49
+ **Help**
50
+
51
+ This is for writing and generating stories beat by beat (short passages, scenes, paragraphs). At each generation, all the previously written/edited beats are dynamically inserted into the LLM as memory. You can edit the beats at any time, because both the final text and the text the LLM sees are regenerated from the beats each time.
52
+
53
+ Versions
54
+
55
+ Each beat can also have multiple versions and you can then choose which version to include in the final text.
56
+
57
+ Cross-variation in the prompt: in Instruct mode you can put [V1], [V2] or [V3] in the prompt and the text from that version will be inserted. This way you can ask the model to rewrite the text without copying it into the prompt.
58
+ For example: "Rewrite the following text using first person POV: [V1]" or "Summarize the following text: [V2]"
59
+
60
+ Future Cues
61
+
62
+ Each beat can also have Future Cues - unlike the Prompt, which gives directions for the currently generated text, Future Cues apply to the text that will be generated after the current one, further down the page. Here you can specify changes and twists that are valid AFTER this beat.
63
+ For example, if in this block of text the character changes their hairstyle, in the Future Cues you might specify: from this point on, refer to Anna as having short pink hair.
64
+ """
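(Editor's illustration: the [V1]/[V2]/[V3] cross-variation described in the help text above is a plain string substitution applied to the prompt before generation, as the replace_placeholder calls further down do; `expand_version_placeholders` is an invented helper.)

```python
# Illustrative only: swap version placeholders in a prompt for the beat's text.

def expand_version_placeholders(prompt: str, beat: dict) -> str:
    for tag, key in (("[V1]", "scenetext_v1"), ("[V2]", "scenetext_v2"), ("[V3]", "scenetext_v3")):
        if tag in prompt:
            prompt = prompt.replace(tag, beat.get(key, ""))
    return prompt

beat = {"scenetext_v1": "The rain hammered the tin roof all night.",
        "scenetext_v2": "", "scenetext_v3": ""}
print(expand_version_placeholders("Rewrite the following text using first person POV: [V1]", beat))
```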
65
+
66
+
67
+ # Define the global data_structure
68
+
69
+ selected_item = "Beat 1"
70
+ selected_item_title = "Beat 1"
71
+ selected_item_prompt = "Write a paragraph where ..."
72
+ selected_item_scenetext = ""
73
+ selected_scene_version = "v1"
74
+ selected_item_notes = ""
75
+ full_text_until = ""
76
+ full_text = ""
77
+
78
+ #load from lorebook
79
+ dynamic_lore = []
80
+ dynamic_lore_changed = False
81
+
82
+ data_structure = [{"outline": selected_item, "outline_title": selected_item_title, "prompt": selected_item_prompt, "scenetext_v1": selected_item_scenetext,"scenetext_v2": "","scenetext_v3": "", "version": selected_scene_version,"notes":selected_item_notes, "is_summary": False}]
83
+
84
+ def does_outline_exist(outline_name):
85
+ global data_structure
86
+ return any(item["outline"] == outline_name for item in data_structure)
87
+
88
+ def get_first_outline_name():
89
+ global data_structure
90
+ if data_structure:
91
+ return data_structure[0]["outline"]
92
+ else:
93
+ return "" # Return an empty string if data_structure is empty
94
+
95
+ def get_first_outline_name_title(default_title):
96
+ global data_structure
97
+ if data_structure and len(data_structure) > 0:
98
+ return data_structure[0].get("outline_title", default_title)
99
+ else:
100
+ return default_title
101
+
102
+
103
+ def get_data_by_outline(outline_title):
104
+ global data_structure
105
+ for item in data_structure:
106
+ if item["outline"] == outline_title:
107
+ key = "scenetext_"+item["version"]
108
+ return item["prompt"], item[key], item["version"], item["notes"]
109
+ return None, None, None, None # Return None for each field if the outline_title is not found
110
+
111
+ def get_title_by_outline(outline_title):
112
+ global data_structure
113
+ def_out = outline_title
114
+ for item in data_structure:
115
+ if item["outline"] == outline_title:
116
+ return item.get("outline_title", def_out)
117
+ return def_out # Fall back to the outline name if the outline_title is not found
118
+
119
+
120
+
121
+ def delete_item_by_outline(outline_title):
122
+ global data_structure
123
+ global selected_item
124
+ next_selected_item = ""
125
+ for item in data_structure:
126
+ if item["outline"] == outline_title:
127
+ data_structure.remove(item)
128
+ selected_item = next_selected_item
129
+ if selected_item=="" and len(data_structure)>0:
130
+ selected_item = data_structure[0]["outline"]
131
+
132
+ return True # Item deleted successfully
133
+ next_selected_item = item["outline"]
134
+ return False # Item not found
135
+
136
+ def generate_unique_outline_name_old(scene_string):
137
+ global data_structure
138
+ # Initialize a counter to create unique names
139
+ counter = 1
140
+ while True:
141
+ outline_title = f"{scene_string} {counter}"
142
+ # Check if the generated name is already in use
143
+ if not any(item["outline"] == outline_title for item in data_structure):
144
+ return outline_title
145
+ counter += 1
146
+
147
+
148
+ def generate_unique_outline_name(base_name):
149
+ global data_structure
150
+
151
+ # Initialize the max number as 0
152
+ max_number = 0
153
+
154
+ # Iterate through all items in data_structure
155
+ for item in data_structure:
156
+ try:
157
+ # Extract the number from the end of the outline (e.g., "Beat 23")
158
+ number = int(item["outline"].split()[-1])
159
+ max_number = max(max_number, number)
160
+ except ValueError:
161
+ # Ignore outlines that don't end with a number
162
+ continue
163
+
164
+ # Generate a new unique outline name
165
+ new_outline_name = f"{base_name} {max_number + 1}"
166
+ return new_outline_name
167
+
168
+ def add_item(outline_title, prompt_string, scene_string):
169
+ global data_structure
170
+ global selected_item
171
+ global selected_item_title
172
+ global selected_item_prompt
173
+ global selected_item_scenetext
174
+ global selected_scene_version
175
+ global selected_item_notes
176
+ outline_name2 = outline_title
177
+
178
+
179
+ new_item = {"outline": outline_title, "outline_title": outline_name2, "prompt": prompt_string, "scenetext_v1": scene_string,"scenetext_v2": "","scenetext_v3": "", "version": "v1", "notes": "", "is_summary": False}
180
+
181
+ selected_item = outline_title
182
+ selected_item_title = outline_name2
183
+ selected_item_prompt = prompt_string
184
+ selected_item_scenetext = scene_string
185
+ selected_scene_version = new_item["version"]
186
+ selected_item_notes = ""
187
+
188
+ data_structure.append(new_item)
189
+
190
+
191
+ def add_item_auto(scene_prefix, prompt_string, scene_text):
192
+ global data_structure
193
+ global selected_item
194
+ global selected_item_title
195
+ global selected_item_prompt
196
+ global selected_item_scenetext
197
+ global selected_scene_version
198
+ global selected_item_notes
199
+ # Check if data_structure has any data
200
+ if len(data_structure)>0:
201
+ # Get the last item in data_structure
202
+ last_item = data_structure[-1]
203
+
204
+ # Check if the last item has "prompt" == '' and "scenetext_v1" == ''
205
+ if last_item["prompt"] == '' and last_item["scenetext_v1"] == '':
206
+ # Overwrite the last item with new values
207
+ last_item["prompt"] = prompt_string
208
+ last_item["scenetext_v1"] = scene_text
209
+ last_item["scenetext_v2"] = ""
210
+ last_item["scenetext_v3"] = ""
211
+ last_item["is_summary"] = False
212
+ last_item["version"] = "v1"
213
+ last_item["notes"] = ""
214
+
215
+ # Update selected_item, selected_item_prompt, and selected_item_scenetext
216
+ selected_item = last_item["outline"]
217
+ selected_item_title = last_item["outline_title"]
218
+ selected_item_prompt = last_item["prompt"]
219
+ selected_item_scenetext = last_item["scenetext_v1"]
220
+ selected_scene_version = last_item["version"]
221
+ selected_item_notes = last_item["notes"]
222
+ # Update data_structure with the modified last_item
223
+ data_structure[-1] = last_item
224
+ return # Exit the function without adding a new item
225
+
226
+
227
+ outline_title = generate_unique_outline_name(scene_prefix)
228
+ outline_name2 = outline_title
229
+
230
+ new_item = {"outline": outline_title, "outline_title": outline_name2, "prompt": prompt_string, "scenetext_v1": scene_text,"scenetext_v2": "","scenetext_v3": "", "version": "v1", "notes":"", "is_summary": False}
231
+
232
+ selected_item = outline_title
233
+ selected_item_title = outline_name2
234
+ selected_item_prompt = prompt_string
235
+ selected_item_scenetext = scene_text
236
+ selected_scene_version = new_item["version"]
237
+ selected_item_notes = ""
238
+
239
+
240
+ data_structure.append(new_item)
241
+
242
+
243
+ def set_version_by_outline(outline_title, scene_version):
244
+ global data_structure
245
+ for item in data_structure:
246
+ if item["outline"] == outline_title:
247
+ item["version"] = scene_version
248
+ return True # Item updated successfully
249
+ return False # Item not found
250
+
251
+ def update_item_by_outline(outline_title, scene_version, new_prompt, new_scene_text):
252
+ global data_structure
253
+ for item in data_structure:
254
+ if item["outline"] == outline_title:
255
+ item["prompt"] = new_prompt
256
+ item["version"] = scene_version
257
+ key = "scenetext_"+item["version"]
258
+ item[key] = new_scene_text
259
+ return True # Item updated successfully
260
+ return False # Item not found
261
+
262
+ def update_item_title_by_outline(outline_name, new_outline_title):
263
+ global data_structure
264
+ for item in data_structure:
265
+ if item["outline"] == outline_name:
266
+ item["outline_title"] = new_outline_title
267
+ return True # Item updated successfully
268
+ return False # Item not found
269
+
270
+ def update_prompt_by_outline(outline_title, new_prompt):
271
+ global data_structure
272
+ for item in data_structure:
273
+ if item["outline"] == outline_title:
274
+ item["prompt"] = new_prompt
275
+ return True # Item updated successfully
276
+ return False # Item not found
277
+
278
+ def update_scenetext_by_outline(outline_title, new_scene_text):
279
+ global data_structure
280
+ for item in data_structure:
281
+ if item["outline"] == outline_title:
282
+ key = "scenetext_"+item["version"]
283
+ item[key] = new_scene_text
284
+ return True # Item updated successfully
285
+ return False # Item not found
286
+
287
+ def update_notes_by_outline(outline_title, new_notes_text):
288
+ global data_structure
289
+ for item in data_structure:
+     if item["outline"] == outline_title:
+         item['notes'] = new_notes_text
+         return True # Item updated successfully
292
+ return False # Item not found
293
+
294
+
295
+ def generate_combined_text():
296
+ global data_structure
297
+ global full_text
298
+ full_text = ""
299
+ for item in data_structure:
300
+ key = "scenetext_"+item["version"]
301
+ full_text += item[key]+'\n\n'
302
+
303
+ #full_text = '\n\n'.join(item["scenetext"] for item in data_structure)
304
+ full_text = full_text.strip()
305
+ return full_text
306
+
307
+ # used in generate
308
+ def generate_combined_text_until_current_with_history(max_last):
309
+ global data_structure
310
+ global selected_item
311
+ outline_title = selected_item
312
+ count_before_outline = 0
313
+ temp_hist = []
314
+
315
+ if max_last > 0:
316
+ for item in data_structure:
317
+ if item["outline"] == outline_title:
318
+ break # Stop when the specified outline_title is reached
319
+
320
+ # Check if we've reached the limit of history_number
321
+ if count_before_outline < max_last:
322
+ key = "scenetext_"+item["version"]
323
+ temp_hist.append(item[key])
324
+ else:
325
+ # If we've reached the limit, remove the oldest entry
326
+ temp_hist.pop(0)
327
+ key = "scenetext_"+item["version"]
328
+ temp_hist.append(item[key])
329
+
330
+ if item["notes"]!='':
331
+ note_txt = "Note: "+item["notes"]
332
+ temp_hist.append(note_txt)
333
+
334
+ count_before_outline += 1
335
+
336
+
337
+ combined_text = ""
338
+ for item_txt in temp_hist:
339
+ combined_text += item_txt + '\n\n'
340
+ text_until = combined_text.rstrip('\n\n') # Remove trailing newline if any
341
+
342
+ return text_until
343
+
344
+ # used for preview
345
+ def generate_combined_text_until_current():
346
+ global data_structure
347
+ global selected_item
348
+ global full_text_until
349
+ combined_text = ""
350
+ outline_title = selected_item
351
+ for item in data_structure:
352
+ if item["outline"] == outline_title:
353
+ break # Stop when the specified outline_title is reached
354
+ key = "scenetext_"+item["version"]
355
+ combined_text += item[key] + '\n\n'
356
+ full_text_until = combined_text.rstrip('\n\n') # Remove trailing newline if any
357
+
358
+ if full_text_until =='':
359
+ full_text_until = '[Beginning]'
360
+ return full_text_until
361
+
362
+
363
+ def move_item_up(outline_title):
364
+ global data_structure
365
+ for i in range(len(data_structure)):
366
+ if data_structure[i]["outline"] == outline_title and i > 0:
367
+ # Swap the item with the preceding one
368
+ data_structure[i], data_structure[i - 1] = data_structure[i - 1], data_structure[i]
369
+ return True # Item moved up successfully
370
+ return False # Item not found or already at the top
371
+
372
+ def move_item_down(outline_title):
373
+ global data_structure
374
+ for i in range(len(data_structure) - 1):
375
+ if data_structure[i]["outline"] == outline_title and i < len(data_structure) - 1:
376
+ # Swap the item with the following one
377
+ data_structure[i], data_structure[i + 1] = data_structure[i + 1], data_structure[i]
378
+ return True # Item moved down successfully
379
+ return False # Item not found or already at the bottom
380
+
381
+
382
+ class ToolButton(gr.Button, gr.components.FormComponent):
383
+ """Small button with single emoji as text, fits inside gradio forms"""
384
+
385
+ def __init__(self, **kwargs):
386
+ super().__init__(variant="tool", **kwargs)
387
+
388
+ def get_block_name(self):
389
+ return "button"
390
+
391
+
392
+ def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_class):
393
+ def refresh():
394
+ refresh_method()
395
+ args = refreshed_args() if callable(refreshed_args) else refreshed_args
396
+
397
+ for k, v in args.items():
398
+ setattr(refresh_component, k, v)
399
+
400
+ return gr.update(**(args or {}))
401
+
402
+ refresh_button = ToolButton(value=refresh_symbol, elem_classes=elem_class)
403
+ refresh_button.click(
404
+ fn=refresh,
405
+ inputs=[],
406
+ outputs=[refresh_component]
407
+ )
408
+ return refresh_button
409
+
410
+
411
+ def read_file_to_string(file_path):
412
+ data = ''
413
+ try:
414
+ with open(file_path, 'r') as file:
415
+ data = file.read()
416
+ except FileNotFoundError:
417
+ data = ''
418
+
419
+ return data
420
+
421
+ # lore format
422
+ # keyword, keyword: Lore text
423
+ def parse_dynamic_lore(lore_string):
424
+ memories = []
425
+ entries = lore_string.strip().split('\n\n') # Split the input string into entries separated by blank lines
426
+
427
+ for entry in entries:
428
+ lines = entry.strip().split('\n') # Split each entry into lines
429
+ if len(lines) < 2: # Ensure there are at least two lines (keywords and memory text)
430
+ continue
431
+
432
+ keywords_part = lines[0].strip() # First line contains keywords
433
+ memory_text = ' '.join(line.strip() for line in lines[1:]) # Combine the rest as memory text
434
+
435
+ keywords = [kw.strip().lower() for kw in keywords_part.split(',')] # Process keywords
436
+
437
+ # Append the parsed data to the memories list
438
+ memories.append({
439
+ 'keywords': ','.join(keywords), # Join keywords with commas
440
+ 'memory': memory_text
441
+ })
442
+
443
+ return memories
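(Editor's illustration: a usage sketch of parse_dynamic_lore above, assuming the function is in scope; note that a trailing colon on the keyword line, as written in the lorebook format examples, stays attached to the last keyword.)

```python
sample = """rimmer, arnold:
Arnold Judas Rimmer - A hologram of a deceased crew member."""

print(parse_dynamic_lore(sample))
# [{'keywords': 'rimmer,arnold:', 'memory': 'Arnold Judas Rimmer - A hologram of a deceased crew member.'}]
```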
444
+
445
+
446
+
447
+ def atoi(text):
448
+ return int(text) if text.isdigit() else text.lower()
449
+
450
+ def save_string_to_file(file_path, string):
451
+ try:
452
+ with open(file_path, 'w') as file:
453
+ file.write(string)
454
+ print("String saved to file successfully.")
455
+ except Exception as e:
456
+ print("Error occurred while saving string to file:", str(e))
457
+
458
+ #last_save
459
+ def save_to_json(path_to_file):
460
+ global data_structure
461
+ try:
462
+ with open(Path(path_to_file), 'w') as json_file:
463
+ json.dump(data_structure, json_file, indent=2)
464
+ return True
465
+ except:
466
+ print(f"Saving to {path_to_file} failed")
467
+ return False # File not found or invalid JSON
468
+
469
+ def load_from_json(path_to_file):
470
+ global data_structure
471
+ global selected_item
472
+ global selected_item_title
473
+ global selected_item_prompt
474
+ global selected_item_scenetext
475
+ global selected_scene_version
476
+ global selected_item_notes
477
+ global full_text_until
478
+ global full_text
479
+
480
+ print(f"Loading project: {path_to_file}")
481
+ try:
482
+ with open(Path(path_to_file), 'r') as json_file:
483
+ data_structure.clear() # Clear existing data
484
+ data_structure.extend(json.load(json_file))
485
+
486
+ # Ensure all entries in data_structure have the required keys
487
+ default_values = {
488
+ "outline": "Beat 1",
489
+ "outline_title": "Untitled",
490
+ "prompt": "",
491
+ "scenetext_v1": "",
492
+ "scenetext_v2": "",
493
+ "scenetext_v3": "",
494
+ "version":"v1",
495
+ "notes": "",
496
+ "is_summary": False
497
+ }
498
+ for entry in data_structure:
499
+ for key, default in default_values.items():
500
+ if key not in entry:
501
+ entry[key] = default
502
+
503
+
504
+
505
+ generate_combined_text()
506
+ selected_item = get_first_outline_name()
507
+ selected_item_title = get_first_outline_name_title(selected_item)
508
+ generate_combined_text_until_current()
509
+ selected_item_prompt,selected_item_scenetext, selected_scene_version, selected_item_notes = get_data_by_outline(selected_item)
510
+
511
+
512
+ return True # Loading successful
513
+ except (FileNotFoundError, json.JSONDecodeError):
514
+ return False # File not found or invalid JSON
515
+
516
+ def save_state():
517
+ global params
518
+ global state_save
519
+
520
+ try:
521
+ with open(Path(state_save), 'w') as json_file:
522
+ json.dump(params, json_file, indent=4)
523
+ except:
524
+ print("Can't save last state..")
525
+
526
+ def load_state():
527
+ global params
528
+ global state_save
529
+ global dynamic_lore_changed
530
+
531
+ try:
532
+ with open(Path(state_save), 'r') as json_file:
533
+ new_params = json.load(json_file)
534
+ dynamic_lore_changed = True
535
+ for item in new_params:
536
+ params[item] = new_params[item]
537
+ except:
538
+ pass
539
+
540
+
541
+ def save_proj_state(path_to_file):
542
+ global params
543
+
544
+ try:
545
+ with open(Path(path_to_file), 'w') as json_file:
546
+ json.dump(params, json_file, indent=4)
547
+ except:
548
+ print("Can't save last state..")
549
+
550
+ def load_proj_state(path_to_file):
551
+ global params
552
+ global dynamic_lore_changed
553
+ try:
554
+ with open(Path(path_to_file), 'r') as json_file:
555
+ new_params = json.load(json_file)
556
+ dynamic_lore_changed = True
557
+ for item in new_params:
558
+ params[item] = new_params[item]
559
+ except:
560
+ pass
561
+
562
+
563
+
564
+ last_history_visible = []
565
+ last_history_internal = []
566
+ last_undo = ""
567
+
568
+
569
+
570
+ def get_scene_list():
571
+ global data_structure
572
+ return [item["outline"] for item in data_structure]
573
+
574
+
575
+ #def generate_reply_wrapperMY(question, textBoxB, context_replace, extra_context, extra_prefix, state, quick_instruction, _continue=False, _genwithResponse = False, _continue_sel = False, _postfix = '', _addstop = []):
576
+
577
+ def replace_placeholder(text, placeholder, replacement):
578
+ return text.replace(placeholder, replacement)
579
+
580
+ # Generates a response in chat mode, focusing on turn-based interactions.
581
+ # Uses a structured history (last_history) to provide context from previous exchanges.
582
+ # This function is intended for chat-like interactions where the model responds to individual prompts.
583
+ # Contrast with generate_reply_wrapperMY_NP, which focuses on continuous narrative generation.
584
+ # Uses chat.generate_chat_prompt for prompt construction and chat.generate_reply with is_chat=True for generation.
585
+ # Handles streaming and interruption logic.
586
+
587
+ def generate_reply_wrapperMY(text_prompt, existing_text_in_output, state, _continue=False):
588
+
589
+ global params
590
+ global last_history_visible
591
+ global last_history_internal
592
+ global last_undo
593
+ global last_save
594
+ global selected_item
595
+ global selected_item_prompt
596
+ global selected_item_scenetext
597
+ global selected_scene_version
598
+ global full_text_until
599
+ global full_text
600
+ global data_structure
601
+ global dynamic_lore_changed
602
+ global dynamic_lore
603
+
604
+ selF = params['selectA'][0]
605
+ selT = params['selectA'][1]
606
+
607
+ params['selectA'] = [0,0]
608
+
609
+ new_version = True
610
+ if 'turn_template' in state:
611
+ new_version = False
612
+
613
+ visible_text = None
614
+
615
+ if "[V1]" in text_prompt or "[V2]" in text_prompt or "[V3]" in text_prompt:
616
+ for item in data_structure:
617
+ if item["outline"] == selected_item:
618
+ if "[V1]" in text_prompt:
619
+ text_prompt = replace_placeholder(text_prompt, "[V1]", item['scenetext_v1'])
620
+ if "[V2]" in text_prompt:
621
+ text_prompt = replace_placeholder(text_prompt, "[V2]", item['scenetext_v2'])
622
+ if "[V3]" in text_prompt:
623
+ text_prompt = replace_placeholder(text_prompt, "[V3]", item['scenetext_v3'])
624
+ break
625
+
626
+
627
+
628
+ user_prompt = text_prompt
629
+
630
+ text_to_keep = ""
631
+
632
+ if dynamic_lore_changed==True:
633
+ dynamic_lore = parse_dynamic_lore(params['lorebook'])
634
+ dynamic_lore_changed = False
635
+
636
+ generate_combined_text()
637
+
638
+ if new_version:
639
+ if state['instruction_template_str']=='':
640
+ print("Instruction template is empty! Select Instruct template in tab [Parameters] - [Instruction Template]")
641
+ text_to_keep = existing_text_in_output + "\n Instruction template is empty! Select Instruct template in tab [Parameters] - [Instruction template]"
642
+ yield text_to_keep, full_text
643
+ return
644
+ else:
645
+ if state['turn_template']=='':
646
+ print("Instruction template is empty! Select Instruct template in tab [Parameters] - [Instruction Template]")
647
+ text_to_keep = existing_text_in_output + "\n Instruction template is empty! Select Instruct template in tab [Parameters] - [Instruction template]"
648
+ yield text_to_keep, full_text
649
+ return
650
+
651
+
652
+
653
+ state['mode'] = 'instruct'
654
+
655
+ _iswriting = "..."
656
+
657
+ #context = state['context']
658
+
659
+ if new_version:
660
+ context_instruct = state['custom_system_message']
661
+ contest_instruct_bk = context_instruct
662
+
663
+ #state['custom_system_message'] = context_instruct
664
+
665
+ else:
666
+ context_instruct = state['context_instruct']
667
+ contest_instruct_bk = context_instruct
668
+
669
+ #state['context_instruct'] = context_instruct
670
+
671
+
672
+ state = apply_extensions('state', state)
673
+ if shared.model_name == 'None' or shared.model is None:
674
+ print("No model is loaded! Select one in the Model tab.")
675
+ yield text_to_keep, full_text
676
+ return
677
+
678
+ output = {'visible': [], 'internal': []}
679
+ output['internal'].append(['', ''])
680
+ output['visible'].append(['', ''])
681
+
682
+ last_history = {'visible': [], 'internal': []}
683
+
684
+ # fill history with previous text
685
+ outline_title = selected_item
686
+ count_before_outline = 0
687
+
688
+ if params['include_history_nr'] > 0 and params['include_history']:
689
+ for item in data_structure:
690
+ if item["outline"] == outline_title:
691
+ break # Stop when the specified outline_title is reached
692
+
693
+ hist_prompt = item["prompt"]
694
+ key = "scenetext_"+item["version"]
695
+ hist_response = item[key]
696
+
697
+ hist_notes = item["notes"]
698
+ # Check if we've reached the limit of history_number
699
+ if count_before_outline < params['include_history_nr']:
700
+ last_history['internal'].append([hist_prompt, hist_response])
701
+ last_history['visible'].append([hist_prompt, hist_response])
702
+ else:
703
+ # If we've reached the limit, remove the oldest entry
704
+ last_history['internal'].pop(0)
705
+ last_history['visible'].pop(0)
706
+ last_history['internal'].append([hist_prompt, hist_response])
707
+ last_history['visible'].append([hist_prompt, hist_response])
708
+
709
+ if hist_notes!='':
710
+ note_text = 'Note: '+hist_notes
711
+ note_response = "(Understood. I’ll keep this note in mind as I write further.)"
712
+ last_history['internal'].append([note_text, note_response])
713
+ last_history['visible'].append([note_text, note_response])
714
+
715
+
716
+ count_before_outline += 1
717
+
718
+
719
+ #for item in data_structure:
720
+ # if item["outline"] == outline_title:
721
+ # break # Stop when the specified outline_title is reached
722
+
723
+ # hist_prompt = item["prompt"]
724
+ # hist_response = item["scenetext"]
725
+ # last_history['internal'].append([hist_prompt, hist_response])
726
+ # last_history['visible'].append([hist_prompt, hist_response])
727
+
728
+
729
+
730
+
731
+ # simple
732
+ #story_so_far = generate_combined_text_until_current()
733
+ #if story_so_far!="":
734
+ # hist_response = "Thank you, I will remember that."
735
+ # hist_prompt = "Here is the story so far:\n"+story_so_far
736
+ # last_history['internal'].append([hist_prompt, hist_response])
737
+ # last_history['visible'].append([hist_prompt, hist_response])
738
+
739
+ stopping_strings = chat.get_stopping_strings(state)
740
+
741
+ is_stream = state['stream']
742
+
743
+ # Prepare the input
744
+ if not _continue:
745
+ visible_text = user_prompt
746
+
747
+ # Apply extensions
748
+ user_prompt, visible_text = apply_extensions('chat_input', user_prompt, visible_text, state)
749
+ user_prompt = apply_extensions('input', user_prompt, state, is_chat=True)
750
+
751
+ outtext = _iswriting
752
+ yield outtext, full_text
753
+
754
+ else:
755
+ visible_text = user_prompt
756
+
757
+ if _continue:
758
+ text_to_keep = existing_text_in_output
759
+ # continue sel can span across squiglies
760
+
761
+ # fill history for generate_chat_prompt
762
+ #user_msg, assistant_msg
763
+ last_history['internal'].append([user_prompt, existing_text_in_output])
764
+ last_history['visible'].append([user_prompt, existing_text_in_output])
765
+
766
+ outtext = text_to_keep + _iswriting
767
+ yield outtext, full_text
768
+
769
+
770
+ # Generate the prompt
771
+ kwargs = {
772
+ '_continue': _continue,
773
+ 'history': last_history,
774
+ }
775
+
776
+
777
+ system_message = contest_instruct_bk
778
+ world_msg = ''
779
+ lore_msg = ''
780
+
781
+ if params['system']!='':
782
+ system_message = params['system']
783
+ system_message = system_message.rstrip('\n')
784
+
785
+ if params['world']!='':
786
+ world_msg = "\n\n"+params['world']
787
+
788
+ #add dynamic lore from prompt
789
+ if dynamic_lore:
790
+ user_input_lower = text_prompt.lower()
791
+ for dyn_mem_item in dynamic_lore:
792
+ # Check to see if keywords are present.
793
+ keywords = dyn_mem_item["keywords"].lower().split(",")
794
+
795
+ for keyword in keywords:
796
+ keywordsimp = keyword.strip()
797
+ if keywordsimp!='' and keywordsimp in user_input_lower:
798
+ # keyword is present in user_input
799
+ lore_msg += "\n\n"+ dyn_mem_item["memory"]
800
+
801
+
802
+
803
+ if new_version:
804
+ state['custom_system_message'] = system_message+world_msg+lore_msg
805
+ else:
806
+ state['context_instruct'] = system_message+world_msg+lore_msg
807
+
808
+
809
+ #prompt = apply_extensions('custom_generate_chat_prompt', question, state, **kwargs)
810
+
811
+
812
+ prompt = chat.generate_chat_prompt(user_prompt, state, **kwargs)
813
+
814
+ #put it back, just in case
815
+ if new_version:
816
+ state['custom_system_message'] = contest_instruct_bk
817
+ else:
818
+ state['context_instruct'] = contest_instruct_bk
819
+
820
+ # Generate
821
+ reply = None
822
+ for j, reply in enumerate(chat.generate_reply(prompt, state, stopping_strings=stopping_strings, is_chat=True)):
823
+
824
+ visible_reply = reply #re.sub("(<USER>|<user>|{{user}})", state['name1'], reply)
825
+
826
+ if shared.stop_everything:
827
+ output['visible'][-1][1] = apply_extensions('output', output['visible'][-1][1], state, is_chat=True)
828
+
829
+ output_text = output['visible'][-1][1]
830
+ print("--Interrupted--")
831
+ update_item_by_outline(selected_item, selected_scene_version, text_prompt, text_to_keep + output_text)
832
+ generate_combined_text()
833
+ save_to_json(last_save)
834
+
835
+ yield text_to_keep + output_text, full_text
836
+
837
+ return
838
+
839
+ if _continue:
840
+ output['internal'][-1] = [user_prompt, reply]
841
+ output['visible'][-1] = [visible_text, visible_reply]
842
+ if is_stream:
843
+ output_text = output['visible'][-1][1]
844
+ update_item_by_outline(selected_item, selected_scene_version, text_prompt, text_to_keep + output_text)
845
+ yield text_to_keep + output_text, full_text
846
+ elif not (j == 0 and visible_reply.strip() == ''):
847
+ output['internal'][-1] = [user_prompt, reply.lstrip(' ')]
848
+ output['visible'][-1] = [visible_text, visible_reply.lstrip(' ')]
849
+
850
+ if is_stream:
851
+ output_text = output['visible'][-1][1]
852
+ update_item_by_outline(selected_item, selected_scene_version, text_prompt, text_to_keep + output_text)
853
+ yield text_to_keep + output_text, full_text
854
+
855
+ output['visible'][-1][1] = apply_extensions('output', output['visible'][-1][1], state, is_chat=True)
856
+
857
+ output_text = output['visible'][-1][1]
858
+
859
+ # not really used for anything
860
+ last_history_visible = output['visible'][-1]
861
+ last_history_internal = output['internal'][-1]
862
+
863
+ update_item_by_outline(selected_item, selected_scene_version, text_prompt, text_to_keep + output_text)
864
+ generate_combined_text()
865
+ save_to_json(last_save)
866
+ save_state()
867
+
868
+ yield text_to_keep + output_text, full_text
869
+
870
+ # Generates a response in narrative mode, focusing on continuous text generation.
871
+ # Combines all previous block text (using generate_combined_text_until_current_with_history) into a single context.
872
+ # This function is designed for generating longer, more narrative-driven text, as opposed to the turn-based interactions of generate_reply_wrapperMY.
873
+ # Uses chat.generate_reply with is_chat=False for generation.
874
+
875
+ def generate_reply_wrapperMY_NP(text_prompt, existing_text_in_output, state, _continue=False):
876
+
877
+ global params
878
+ global last_history_visible
879
+ global last_history_internal
880
+ global last_undo
881
+ global last_save
882
+ global selected_item
883
+ global selected_item_prompt
884
+ global selected_item_scenetext
885
+ global selected_scene_version
886
+ global full_text_until
887
+ global full_text
888
+ global data_structure
889
+ global dynamic_lore_changed
890
+ global dynamic_lore
891
+
892
+
893
+ selF = params['selectA'][0]
894
+ selT = params['selectA'][1]
895
+
896
+ params['selectA'] = [0,0]
897
+
898
+ new_version = True
899
+ if 'turn_template' in state:
900
+ new_version = False
901
+
902
+ visible_text = None
903
+
904
+ user_prompt = text_prompt
905
+
906
+ text_to_keep = ""
907
+
908
+ if dynamic_lore_changed==True:
909
+ dynamic_lore = parse_dynamic_lore(params['lorebook'])
910
+ dynamic_lore_changed = False
911
+
912
+
913
+ generate_combined_text()
914
+
915
+ if new_version:
916
+ if state['instruction_template_str']=='':
917
+ print("Instruction template is empty! Select Instruct template in tab [Parameters] - [Instruction Template]")
918
+ text_to_keep = existing_text_in_output + "\n Instruction template is empty! Select Instruct template in tab [Parameters] - [Instruction template]"
919
+ yield text_to_keep, full_text
920
+ return
921
+ else:
922
+ if state['turn_template']=='':
923
+ print("Instruction template is empty! Select Instruct template in tab [Parameters] - [Instruction Template]")
924
+ text_to_keep = existing_text_in_output + "\n Instruction template is empty! Select Instruct template in tab [Parameters] - [Instruction template]"
925
+ yield text_to_keep, full_text
926
+ return
927
+
928
+
929
+
930
+ state['mode'] = 'instruct'
931
+
932
+ _iswriting = "..."
933
+
934
+ #context = state['context']
935
+
936
+ if new_version:
937
+ context_instruct = state['custom_system_message']
938
+ contest_instruct_bk = context_instruct
939
+
940
+ #state['custom_system_message'] = context_instruct
941
+
942
+ else:
943
+ context_instruct = state['context_instruct']
944
+ contest_instruct_bk = context_instruct
945
+
946
+ #state['context_instruct'] = context_instruct
947
+
948
+
949
+ state = apply_extensions('state', state)
950
+ if shared.model_name == 'None' or shared.model is None:
951
+ print("No model is loaded! Select one in the Model tab.")
952
+ yield text_to_keep, full_text
953
+ return
954
+
955
+ output = {'visible': [], 'internal': []}
956
+ output['internal'].append(['', ''])
957
+ output['visible'].append(['', ''])
958
+
959
+ last_history = {'visible': [], 'internal': []}
960
+
961
+ # fill history with previous text
962
+ if params['include_history_nr']>0 and params['include_history']:
963
+ story_so_far = generate_combined_text_until_current_with_history(params['include_history_nr'])
964
+ else:
965
+ story_so_far = ''
966
+
967
+ stopping_strings = chat.get_stopping_strings(state)
968
+
969
+ is_stream = state['stream']
970
+
971
+ # Prepare the input
972
+ if not _continue:
973
+ visible_text = user_prompt
974
+
975
+ outtext = _iswriting
976
+ yield outtext, full_text
977
+
978
+ else:
979
+ visible_text = user_prompt
980
+
981
+ if _continue:
982
+ text_to_keep = existing_text_in_output+'\n'
983
+ # continue sel can span across squiglies
984
+ story_so_far = story_so_far +"\n"+ existing_text_in_output
985
+ outtext = text_to_keep + _iswriting
986
+ yield outtext, full_text
987
+
988
+
989
+ # Generate the prompt
990
+ kwargs = {
991
+ '_continue': _continue,
992
+ 'history': last_history,
993
+ }
994
+
995
+ #prompt = apply_extensions('custom_generate_chat_prompt', question, state, **kwargs)
996
+
997
+ system_message = contest_instruct_bk
998
+ world_msg = ''
999
+ lore_msg = ''
1000
+
1001
+ if params['system']!='':
1002
+ system_message = params['system']
1003
+ system_message = system_message.rstrip('\n')
1004
+
1005
+ if params['world']!='':
1006
+ world_msg = "\n\n"+params['world']+"\n\n"
1007
+
1008
+ #add dynamic lore from prompt
1009
+ if dynamic_lore:
1010
+ user_input_lower = text_prompt.lower()
1011
+ for dyn_mem_item in dynamic_lore:
1012
+ # Check to see if keywords are present.
1013
+ keywords = dyn_mem_item["keywords"].lower().split(",")
1014
+
1015
+ for keyword in keywords:
1016
+ keywordsimp = keyword.strip()
1017
+ if keywordsimp!='' and keywordsimp in user_input_lower:
1018
+ # keyword is present in user_input
1019
+ lore_msg += "\n\n"+ dyn_mem_item["memory"]
1020
+
1021
+
1022
+ prompt = system_message + world_msg + lore_msg
1023
+ prompt = prompt+ story_so_far+"\n"
1024
+ if text_prompt!='':
1025
+ prompt = prompt + "(Editor's Note: Continue writing the story using the following direction. "+ text_prompt+")\n"
1026
+
1027
+ #put it back, just in case
1028
+ if new_version:
1029
+ state['custom_system_message'] = contest_instruct_bk
1030
+ else:
1031
+ state['context_instruct'] = contest_instruct_bk
1032
+
1033
+ # Generate
1034
+ reply = None
1035
+ for j, reply in enumerate(chat.generate_reply(prompt, state, stopping_strings=stopping_strings, is_chat=False)):
1036
+
1037
+ #visible_reply = re.sub("(<USER>|<user>|{{user}})", state['name1'], reply)
1038
+ visible_reply = reply
1039
+
1040
+ if shared.stop_everything:
1041
+ output['visible'][-1][1] = apply_extensions('output', output['visible'][-1][1], state, is_chat=False)
1042
+
1043
+ output_text = output['visible'][-1][1]
1044
+ print("--Interrupted--")
1045
+ update_item_by_outline(selected_item, selected_scene_version,text_prompt, text_to_keep + output_text)
1046
+ generate_combined_text()
1047
+ save_to_json(last_save)
1048
+
1049
+ yield text_to_keep + output_text, full_text
1050
+
1051
+ return
1052
+
1053
+ if _continue:
1054
+ output['internal'][-1] = [user_prompt, reply]
1055
+ output['visible'][-1] = [visible_text, visible_reply]
1056
+ if is_stream:
1057
+ output_text = output['visible'][-1][1]
1058
+ update_item_by_outline(selected_item, selected_scene_version,text_prompt, text_to_keep + output_text)
1059
+ yield text_to_keep + output_text, full_text
1060
+ elif not (j == 0 and visible_reply.strip() == ''):
1061
+ output['internal'][-1] = [user_prompt, reply.lstrip(' ')]
1062
+ output['visible'][-1] = [visible_text, visible_reply.lstrip(' ')]
1063
+
1064
+ if is_stream:
1065
+ output_text = output['visible'][-1][1]
1066
+ update_item_by_outline(selected_item, selected_scene_version, text_prompt, text_to_keep + output_text)
1067
+ yield text_to_keep + output_text, full_text
1068
+
1069
+ output['visible'][-1][1] = apply_extensions('output', output['visible'][-1][1], state, is_chat=False)
1070
+
1071
+ output_text = output['visible'][-1][1]
1072
+
1073
+ # not really used for anything
1074
+ last_history_visible = output['visible'][-1]
1075
+ last_history_internal = output['internal'][-1]
1076
+
1077
+ update_item_by_outline(selected_item, selected_scene_version, text_prompt, text_to_keep + output_text)
1078
+ generate_combined_text()
1079
+ save_to_json(last_save)
1080
+ save_state()
1081
+
1082
+ yield text_to_keep + output_text, full_text
1083
+
1084
+ def custom_css():
1085
+ return """
1086
+ .preview-text textarea {
1087
+ background-color: #071407 !important;
1088
+ --input-text-size: 16px !important;
1089
+ color: #4dc66a !important;
1090
+ --body-text-color: #4dc66a !important;
1091
+ font-family: monospace
1092
+
1093
+ }
1094
+ .scene-text textarea {
1095
+ background-color: #301919 !important;
1096
+ color: #f19999 !important;
1097
+ --body-text-color: #f19999 !important;
1098
+ font-family: monospace
1099
+
1100
+ }
1101
+ .scene-text2 textarea {
1102
+ background-color: #192930 !important;
1103
+ color: #99CCFF !important;
1104
+ --body-text-color: #99CCFF !important;
1105
+ font-family: monospace
1106
+
1107
+ }
1108
+ """
1109
+
1110
+ def custom_js():
1111
+ java = '''
1112
+ const blockwriterElement = document.querySelector('#textbox-blockwriter textarea');
1113
+ let blockwriterScrolled = false;
1114
+
1115
+ blockwriterElement.addEventListener('scroll', function() {
1116
+ let diff = blockwriterElement.scrollHeight - blockwriterElement.clientHeight;
1117
+ if(Math.abs(blockwriterElement.scrollTop - diff) <= 1 || diff == 0) {
1118
+ blockwriterScrolled = false;
1119
+ } else {
1120
+ blockwriterScrolled = true;
1121
+ }
1122
+ });
1123
+
1124
+ const blockwriterObserver = new MutationObserver(function(mutations) {
1125
+ mutations.forEach(function(mutation) {
1126
+ if(!blockwriterScrolled) {
1127
+ blockwriterElement.scrollTop = blockwriterElement.scrollHeight;
1128
+ }
1129
+ });
1130
+ });
1131
+
1132
+ blockwriterObserver.observe(blockwriterElement.parentNode.parentNode.parentNode, { childList: true, characterData: true, subtree: true });
1133
+
1134
+ '''
1135
+ return java
1136
+
1137
+
1138
+ def create_action_button(button_label, main_function, update_function, outputs, variant = 'primary'):
1139
+
1140
+ _ishow = False
1141
+
1142
+ def show():
1143
+ nonlocal _ishow
1144
+ if _ishow:
1145
+ _ishow = False
1146
+ return gr.Button.update(visible=False),gr.Button.update(visible=False)
1147
+ else:
1148
+ _ishow = True
1149
+ return gr.Button.update(visible=True),gr.Button.update(visible=True)
1150
+
1151
+ def hide():
1152
+ nonlocal _ishow
1153
+ _ishow = False
1154
+ return gr.Button.update(visible=False),gr.Button.update(visible=False)
1155
+
1156
+ def process():
1157
+ nonlocal _ishow
1158
+ _ishow = False
1159
+ main_function()
1160
+ return gr.Button.update(visible=False),gr.Button.update(visible=False)
1161
+
1162
+ _intMain = gr.Button(button_label, interactive=True, variant = variant)
1163
+ with gr.Row():
1164
+ _intAction = gr.Button(value= 'Continue?',variant="primary",visible=False,interactive=True)
1165
+ _intCancel = gr.Button(value='Cancel',visible=False,interactive=True)
1166
+
1167
+
1168
+ _intMain.click(show,None,[_intAction,_intCancel])
1169
+ _intCancel.click(hide,None,[_intAction,_intCancel])
1170
+ _intAction.click(process,None,[_intAction,_intCancel]).then(update_function,None,outputs)
1171
+
1172
+ return _intMain
1173
+
1174
+ def create_save_button(button_label, save_method, defaultname_variable, default_key, save_method_inputs = None, variant = 'secondary'):
1175
+
1176
+ def show():
1177
+ defname = defaultname_variable[default_key] if defaultname_variable is not None else default_key
1178
+ return gr.Textbox.update(value = defname, interactive= True, visible=True),gr.Button.update(visible=True),gr.Button.update(visible=True),gr.Button.update(visible=False)
1179
+
1180
+ def hide():
1181
+ return gr.Textbox.update(visible=False),gr.Button.update(visible=False),gr.Button.update(visible=False),gr.Button.update(visible=True)
1182
+
1183
+ _intMain = gr.Button(button_label, interactive=True,variant=variant)
1184
+ _edit_name = gr.Textbox(value='',lines=1,max_lines=1,visible=False, label='Name',interactive=True)
1185
+ with gr.Row():
1186
+ _intAction = gr.Button(value=button_label,variant="primary",visible=False,interactive=True)
1187
+ _intCancel = gr.Button(value='Cancel',visible=False,interactive=True)
1188
+
1189
+ _intMain.click(show,None,[_edit_name,_intAction,_intCancel,_intMain])
1190
+ _intCancel.click(hide,None,[_edit_name,_intAction,_intCancel,_intMain])
1191
+ inputs = [_edit_name] + save_method_inputs if save_method_inputs is not None else _edit_name
1192
+ _intAction.click(save_method,inputs,None).then(hide,None,[_edit_name,_intAction,_intCancel,_intMain])
1193
+
1194
+ return _intMain
1195
+
1196
+
1197
+ def create_load_button(button_label, load_method, file_list_method, update_function, outputs, variant = 'secondary'):
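+ # Load helper: the main button reveals a dropdown filled by file_list_method();
+ # confirming calls load_method(selection) and then update_function to refresh outputs.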
1198
+
1199
+ def show():
1200
+ choices = file_list_method()
1201
+ return gr.Dropdown.update(choices=choices, value='None', visible = True),gr.Button.update(visible=True),gr.Button.update(visible=True),gr.Button.update(visible=False)
1202
+
1203
+ def hide():
1204
+ return gr.Dropdown.update(visible=False),gr.Button.update(visible=False),gr.Button.update(visible=False),gr.Button.update(visible=True)
1205
+
1206
+ def process(text):
1207
+ load_method(text)
1208
+ return gr.Dropdown.update(visible=False),gr.Button.update(visible=False),gr.Button.update(visible=False),gr.Button.update(visible=True)
1209
+
1210
+ _intMain = gr.Button(button_label, interactive=True,variant=variant)
1211
+ _drop = gr.Dropdown(choices=['None'], label= button_label, value='None',visible=False,interactive=True)
1212
+ with gr.Row():
1213
+ _intAction = gr.Button(value='Load',variant="primary",visible=False,interactive=True)
1214
+ _intCancel = gr.Button(value='Cancel',visible=False,interactive=True)
1215
+
1216
+
1217
+ _intMain.click(show,None,[_drop,_intAction,_intCancel,_intMain])
1218
+ _intCancel.click(hide,None,[_drop,_intAction,_intCancel,_intMain])
1219
+ _intAction.click(process,_drop,[_drop,_intAction,_intCancel,_intMain]).then(update_function,None,outputs)
1220
+
1221
+ return _intMain
1222
+
1223
+ #font-family: monospace
1224
+ def get_available_projects():
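+ # Return the saved project names (stems of .json files in save_proj_path),
+ # naturally sorted, with 'None' prepended.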
1225
+ templpath = save_proj_path
1226
+ paths = (x for x in Path(templpath).iterdir() if x.suffix == '.json')
1227
+ sortedlist = sorted(set((k.stem for k in paths)), key=natural_keys)
1228
+ sortedlist.insert(0, "None")
1229
+ return sortedlist
1230
+
1231
+ # Example usage:
1232
+ def lorebook_save_action(name, text):
1233
+ # Replace this with your save logic
1234
+ print(f"Saving file: {name}")
1235
+ print(f"Saving file: {text}")
1236
+
1237
+ def project_save(projname):
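+ # Save the project under the given name (<name>.json plus <name>.jsonw for the
+ # project state) and refresh the autosave (last_save) and extension state.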
1238
+ global params
1239
+ global last_save
1240
+ params['projectname'] = projname
1241
+ projpath = save_proj_path +"/"+ projname+".json"
1242
+ projpath2 = save_proj_path +"/"+ projname+".jsonw"
1243
+
1244
+ save_to_json(projpath)
1245
+ save_to_json(last_save)
1246
+ save_proj_state(projpath2)
1247
+ save_state()
1248
+ print(f"Project saved to: {projpath}")
1249
+ return projname
1250
+
1251
+ def quick_project_save():
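+ # Same as project_save, but reuses the current params['projectname'] instead of
+ # asking for a new name.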
1252
+ global params
1253
+ global last_save
1254
+ projname = params['projectname']
1255
+ projpath = save_proj_path +"/"+ projname+".json"
1256
+ projpath2 = save_proj_path +"/"+ projname+".jsonw"
1257
+
1258
+ save_to_json(projpath)
1259
+ save_proj_state(projpath2)
1260
+
1261
+ save_to_json(last_save)
1262
+ save_state()
1263
+ print(f"Project saved to: {projpath}")
1264
+
1265
+
1266
+ def load_project(projname):
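+ # Load <name>.json and <name>.jsonw from save_proj_path and make it the current project.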
1267
+ global params
1268
+ params['projectname'] = projname
1269
+ projpath = save_proj_path +"/"+ projname+".json"
1270
+ projpath2 = save_proj_path +"/"+ projname+".jsonw"
1271
+ load_from_json(projpath)
1272
+ load_proj_state(projpath2)
1273
+ print(f"Project loaded: {projpath}")
1274
+
1275
+ def rename_scene(scene_name):
1276
+ global params
1277
+ print(f"Saving file: {scene_name}")
1278
+
1279
+ def full_update_ui():
1280
+ global selected_item
1281
+ global selected_item_prompt
1282
+ global selected_item_scenetext
1283
+ global selected_scene_version
1284
+ global full_text_until
1285
+ global full_text
1286
+ global selected_item_notes
1287
+ global params
1288
+
1289
+ return gr.Radio.update(choices=get_scene_list(), value=selected_item), selected_item, selected_item_prompt, selected_item_scenetext, selected_scene_version, full_text_until, full_text, selected_item_notes, params['projectname'], params['projectname'], params['system'],params['world'],params['lorebook']
1290
+
1291
+ def create_new_project():
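+ # Reset everything to a single empty 'Beat 1', clear the world and lorebook
+ # fields, and write a fresh autosave.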
1292
+ global selected_item
1293
+ global selected_item_title
1294
+ global selected_item_prompt
1295
+ global selected_item_scenetext
1296
+ global selected_scene_version
1297
+ global full_text_until
1298
+ global full_text
1299
+ global data_structure
1300
+ global last_save
1301
+ global params
1302
+
1303
+ selected_item = "Beat 1"
1304
+ selected_item_title = "Beat 1"
1305
+
1306
+ selected_item_prompt = "Write a paragraph where ..."
1307
+ selected_item_scenetext = ""
1308
+ full_text_until = ""
1309
+ full_text = ""
1310
+ params['projectname'] = 'new_project'
1311
+ data_structure = [{"outline": selected_item,"outline_title": selected_item_title, "prompt": selected_item_prompt, "scenetext_v1": selected_item_scenetext, "scenetext_v2": "", "scenetext_v3": "","version":"v1","notes":"","is_summary": False}]
1312
+
1313
+ params['world']=''
1314
+ params['lorebook']=''
1315
+ save_to_json(last_save)
1316
+ save_state()
1317
+
1318
+ def delete_beat_funct():
1319
+ global selected_item
1320
+ global selected_item_title
1321
+ global selected_item_prompt
1322
+ global selected_item_scenetext
1323
+ global selected_scene_version
1324
+ global full_text_until
1325
+ global full_text
1326
+ global data_structure
1327
+ global last_save
1328
+
1329
+ delete_item_by_outline(selected_item)
1330
+
1331
+
1332
+ def ui():
1333
+ global params
1334
+ global selected_item
1335
+ global selected_item_title
1336
+ global selected_item_prompt
1337
+ global selected_item_scenetext
1338
+ global full_text
1339
+ global full_text_until
1340
+
1341
+
1342
+ params['selectA'] = [0,0]
1343
+
1344
+ load_state()
1345
+ load_from_json(last_save)
1346
+
1347
+ with gr.Row():
1348
+ with gr.Column():
1349
+
1350
+ with gr.Tab('Scenes'):
1351
+ with gr.Row():
1352
+ with gr.Column(scale = 1):
1353
+ with gr.Row():
1354
+ gr_btn_addnew_scene = gr.Button(value='+ New Beat',visible=True,variant="primary")
1355
+ with gr.Row():
1356
+ gr_scenes_radio = gr.Radio(choices=get_scene_list(), value=selected_item, label='Beats', interactive=True, elem_classes='checkboxgroup-table')
1357
+ with gr.Column(scale = 3):
1358
+ with gr.Row():
1359
+ gr_itemname = gr.Textbox(value=selected_item_title, lines = 1, visible = True, label = 'Beat Title', interactive=True, elem_classes=['scene-text'])
1360
+ with gr.Row():
1361
+ gr_prompt = gr.Textbox(value=selected_item_prompt ,lines=4,visible=True, label='Prompt')
1362
+ with gr.Row():
1363
+ with gr.Tab('Instruct Mode'):
1364
+ with gr.Row():
1365
+ gr_btn_generate = gr.Button(value='Generate',visible=True,variant="primary")
1366
+ gr_btn_generate_continue = gr.Button(value='Continue',visible=True)
1367
+ gr_btn_stop = gr.Button(value='Stop',visible=True) #elem_classes="small-button")
1368
+ with gr.Row():
1369
+ gr.Markdown("The text will be generated from the prompt using the model's instruction template.")
1370
+ with gr.Tab('Narrative Mode'):
1371
+ with gr.Row():
1372
+ gr_btn_generate_np = gr.Button(value='Generate (Narrative)',variant="primary", visible=True)
1373
+ gr_btn_generate_continue_np = gr.Button(value='Continue (Narrative)',visible=True)
1374
+ gr_btn_stop_np = gr.Button(value='Stop',visible=True)
1375
+ with gr.Row():
1376
+ gr.Markdown('The text will be generated as a narrative completion of the preceding scenes. The prompt can still steer the generation, but it is added without the instruction template.')
1377
+ with gr.Tab('Future Cues'):
1378
+ with gr.Row():
1379
+ gr_notes = gr.Textbox(value=selected_item_notes ,lines=4,visible=True,interactive=True, label='Future Cues will be visible to the text model and will shape further text generation', elem_classes=['scene-text2'])
1380
+ with gr.Tab('Settings'):
1381
+ with gr.Row():
1382
+ #gr_auto_clear = gr.Checkbox(label = "Auto Clear Prompt", value = params['auto_clear'])
1383
+ gr_include_history = gr.Checkbox(label = "Include Previous Scenes and Notes in the prompt", value = params['include_history'])
1384
+ include_last_history = gr.Slider(value = params['include_history_nr'],step = 1, minimum=0, maximum=50, label='Max Number of Newest Scenes to Include')
1385
+ with gr.Tab('Tools'):
1386
+ with gr.Row():
1387
+ gr_tools_swap = gr.Button(value='<> Swap',visible=True, elem_classes="small-button")
1388
+
1389
+ with gr.Row():
1390
+ gr_generated_text_version = gr.Radio(choices = ['v1','v2','v3'], value= selected_scene_version , visible=True, label='Version')
1391
+ with gr.Row():
1392
+ gr_generated_text = gr.Textbox(value=selected_item_scenetext ,lines=10,visible=True, label='Text',elem_classes=['textbox', 'add_scrollbar'],elem_id='textbox-blockwriter')
1393
+
1394
+ with gr.Row():
1395
+ gr_btn_save_Quick = gr.Button(value='Quick Save',visible=True,variant="primary")
1396
+ gr_itemUp = gr.Button("Move Up")
1397
+ gr_itemDown = gr.Button("Move Down")
1398
+ delete_beat = gr.Button('Delete Current Beat', interactive=True)
1399
+ delete_confirm = gr.Button('Are you Sure?', variant='stop', visible=False) #,elem_classes=['refresh-button']
1400
+ delete_cancel = gr.Button('Cancel', visible=False)
1401
+ with gr.Column(scale = 3):
1402
+ with gr.Row():
1403
+ gr_prevtext = gr.Textbox(value=full_text_until, lines = 35, visible = True, label = 'Story to this point', interactive=False,elem_classes=['preview-text', 'add_scrollbar'])
1404
+
1405
+ with gr.Tab('Full Text'):
1406
+ with gr.Row():
1407
+ with gr.Column(scale = 1):
1408
+ gr_project_name_txt = gr.Textbox(value = params['projectname'], lines=1, label='Text Name')
1409
+ gr_btn_save_Text = gr.Button(value='Save Text',visible=True,variant="primary")
1410
+ with gr.Column(scale = 4):
1411
+ gr_fulltext = gr.Textbox(value=full_text,lines=25,visible=True, label='Full Text', elem_classes=['preview-text', 'add_scrollbar'])
1412
+ with gr.Column(scale = 1):
1413
+ gr.Markdown('')
1414
+ with gr.Tab('Lore book'):
1415
+ with gr.Row():
1416
+ with gr.Column(scale=4):
1417
+ gr_text_SYSTEM = gr.Textbox(value = params['system'], lines=2, label='System Prompt')
1418
+ with gr.Column(scale=1):
1419
+ gr.Markdown('Set the system message. It is always sent to the text model first.')
1420
+ with gr.Row():
1421
+ with gr.Column(scale=4):
1422
+ gr_text_WOORLD = gr.Textbox(value = params['world'], lines=10, label='Story Description and World (always present in prompt)')
1423
+ with gr.Column(scale=1):
1424
+ gr.Markdown('Description of the story, world, and characters. It is always present at the top of the prompt, below the system prompt.')
1425
+ with gr.Row():
1426
+ with gr.Column(scale=4):
1427
+ gr_text_DYNAMEMORY = gr.Textbox(value = params['lorebook'], lines=10, label='Dynamic Lore')
1428
+ with gr.Column(scale=1):
1429
+ gr.Markdown('Lore triggered by keywords in the prompt. An entry is only included when one of its keywords appears in the prompt.')
1430
+ gr_lore_example = gr.Button(value='Load Example', visible=True)
1431
+
1432
+ with gr.Tab('Project'):
1433
+ with gr.Row():
1434
+ with gr.Column(scale=1):
1435
+
1436
+ gr_project_name = gr.Textbox(value = params['projectname'], lines=1, label='Current Project')
1437
+ gr_project_save = gr.Button('Save Project', interactive=True)
1438
+ gr_project_saveA = gr.Button('Save?', visible=False)
1439
+ gr_project_saveC = gr.Button('Cancel', variant='stop', visible=False)
1440
+ #create_save_button( 'Save Project', project_save,params, params['projectname'])
1441
+ create_load_button( 'Load project', load_project, get_available_projects, full_update_ui, [gr_scenes_radio,gr_itemname,gr_prompt,gr_generated_text,gr_generated_text_version, gr_prevtext,gr_fulltext,gr_notes, gr_project_name,gr_project_name_txt,gr_text_SYSTEM,gr_text_WOORLD,gr_text_DYNAMEMORY] )
1442
+ gr.Markdown('---')
1443
+ create_action_button('New Project',create_new_project,full_update_ui,[gr_scenes_radio,gr_itemname,gr_prompt,gr_generated_text,gr_generated_text_version,gr_prevtext,gr_fulltext,gr_notes,gr_project_name,gr_project_name_txt,gr_text_SYSTEM,gr_text_WOORLD,gr_text_DYNAMEMORY])
1444
+ with gr.Column(scale=4):
1445
+
1446
+ gr.Markdown(help_str)
1447
+
1448
+ def update_state_param(sysmsg, world, lore):
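+ # Mirror the System / World / Dynamic Lore textboxes into params, and flag
+ # dynamic_lore_changed when the lorebook text actually changed.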
1449
+ global params
1450
+ global dynamic_lore_changed
1451
+ params['system'] = sysmsg
1452
+ params['world'] = world
1453
+ lore_before = params['lorebook']
1454
+ params['lorebook'] = lore
1455
+
1456
+ if lore_before!=lore:
1457
+ dynamic_lore_changed = True
1458
+
1459
+
1460
+ gr_text_SYSTEM.input(update_state_param,[gr_text_SYSTEM,gr_text_WOORLD,gr_text_DYNAMEMORY],None)
1461
+ gr_text_WOORLD.input(update_state_param,[gr_text_SYSTEM,gr_text_WOORLD,gr_text_DYNAMEMORY],None)
1462
+ gr_text_DYNAMEMORY.input(update_state_param,[gr_text_SYSTEM,gr_text_WOORLD,gr_text_DYNAMEMORY],None)
1463
+
1464
+ def write_lore():
1465
+ global params
1466
+ global dynamic_lore_changed
1467
+
1468
+ lore = """rimmer,arnold:
1469
+ Arnold Judas Rimmer - A hologram of a deceased crew member, painfully neurotic, insufferably pompous, and obsessed with climbing the ranks of the Space Corps despite being utterly incompetent. Known for his pedantic obsession with Space Corps directives and his strained relationship with Lister.
1470
+
1471
+ cat:
1472
+ The Cat - A highly evolved humanoid descendant of the ship's original pet cat. Vain, flamboyant, and obsessed with fashion, he moves with feline grace but is utterly self-centered. Lives for his looks and has a hilariously tenuous grasp of the crew's perilous reality.
1473
+
1474
+ lister,dave:
1475
+ Dave Lister - The last human alive, a slobby, curry-loving everyman with a big heart and a dream of returning to Earth. Despite his laziness and crude manners, he's the emotional core of the crew, often finding himself at odds with Rimmer's uptight personality but deeply loyal to his companions."""
1476
+
1477
+
1478
+ params['lorebook'] = lore
1479
+ dynamic_lore_changed = True
1480
+
1481
+ return lore
1482
+
1483
+ gr_lore_example.click(write_lore,None,gr_text_DYNAMEMORY)
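+ # Illustrative sketch (not part of the original script): one way the Dynamic Lore
+ # format loaded above could be matched against a prompt. Each entry starts with a
+ # "keyword1,keyword2:" header and fires when any of its keywords appears in the
+ # prompt; the extension's actual trigger logic is defined earlier in script.py.
+ #
+ # def example_lore_lookup(lorebook_text, prompt):
+ #     hits = []
+ #     for block in lorebook_text.split('\n\n'):
+ #         header, sep, body = block.partition(':')
+ #         if not sep or '\n' in header:
+ #             continue  # not a "keywords:" header line
+ #         keywords = [k.strip().lower() for k in header.split(',')]
+ #         if any(k and k in prompt.lower() for k in keywords):
+ #             hits.append(body.strip())
+ #     return '\n\n'.join(hits)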
1484
+
1485
+ def update_item_ui():
1486
+ global selected_item_title
1487
+ global selected_item_prompt
1488
+ global selected_item_scenetext
1489
+ global selected_scene_version
1490
+ global selected_item_notes
1491
+ global full_text_until
1492
+ return selected_item_title, selected_item_prompt, selected_item_scenetext, selected_scene_version, full_text_until, selected_item_notes
1493
+
1494
+
1495
+ def update_scenes_ui():
1496
+ global selected_item
1497
+ return gr.Radio.update(choices=get_scene_list(), value=selected_item)
1498
+
1499
+ def select_scene(scene_name):
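+ # Switch the selected beat and reload its prompt, text, version and notes,
+ # plus the 'story to this point' preview.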
1500
+ global selected_item
1501
+ global selected_item_title
1502
+ global selected_item_prompt
1503
+ global selected_item_scenetext
1504
+ global selected_scene_version
1505
+ global selected_item_notes
1506
+
1507
+ if does_outline_exist(scene_name):
1508
+ selected_item = scene_name
1509
+ selected_item_prompt, selected_item_scenetext, selected_scene_version, selected_item_notes = get_data_by_outline(scene_name)
1510
+ selected_item_title = get_title_by_outline(scene_name)
1511
+ generate_combined_text_until_current()
1512
+
1513
+
1514
+ gr_scenes_radio.change(select_scene,gr_scenes_radio,None).then(update_item_ui,None,[gr_itemname,gr_prompt,gr_generated_text,gr_generated_text_version, gr_prevtext,gr_notes],show_progress=False)
1515
+
1516
+ def change_version(version):
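+ # Store the chosen version (v1/v2/v3) for the current beat and rebuild both
+ # the story-so-far preview and the full text.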
1517
+ global selected_item
1518
+ global selected_item_title
1519
+ global selected_item_prompt
1520
+ global selected_item_scenetext
1521
+ global selected_scene_version
1522
+ global selected_item_notes
1523
+
1524
+ selected_scene_version = version
1525
+
1526
+ set_version_by_outline(selected_item,version)
1527
+ selected_item_prompt, selected_item_scenetext, selected_scene_version, selected_item_notes = get_data_by_outline(selected_item)
1528
+ generate_combined_text_until_current()
1529
+ generate_combined_text()
1530
+
1531
+
1532
+
1533
+
1534
+ def update_text_version_change():
1535
+ global selected_item_scenetext
1536
+ global full_text_until
1537
+ global full_text
1538
+ return selected_item_scenetext,full_text_until,full_text
1539
+
1540
+
1541
+ gr_generated_text_version.change(change_version,gr_generated_text_version,None).then(update_text_version_change, None, [gr_generated_text,gr_prevtext,gr_fulltext],show_progress=False)
1542
+
1543
+
1544
+ clear_arr = [delete_confirm, delete_beat, delete_cancel]
1545
+ delete_beat.click(lambda: [gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, clear_arr)
1546
+ delete_cancel.click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, clear_arr)
1547
+ delete_confirm.click(delete_beat_funct,None,None).then(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, clear_arr).then(
1548
+ full_update_ui, None,[gr_scenes_radio,gr_itemname,gr_prompt,gr_generated_text,gr_generated_text_version, gr_prevtext,gr_fulltext,gr_notes,gr_project_name,gr_project_name_txt,gr_text_SYSTEM,gr_text_WOORLD,gr_text_DYNAMEMORY])
1549
+
1550
+
1551
+ save_arr = [gr_project_saveA, gr_project_save, gr_project_saveC]
1552
+ gr_project_save.click(lambda: [gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, save_arr)
1553
+ gr_project_saveC.click(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, save_arr)
1554
+ gr_project_saveA.click(project_save,gr_project_name,gr_project_name_txt).then(lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, save_arr)
1555
+
1556
+ gr_btn_save_Quick.click(quick_project_save,None,None)
1557
+
1558
+ def full_text_save(savename):
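+ # Write the combined story text to save_proj_path_txt/<name>.txt.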
1559
+ text = generate_combined_text()
1560
+ projpath = save_proj_path_txt +"/"+ savename+".txt"
1561
+ try:
1562
+
1563
+ # Save the text to the file
1564
+ with open(projpath, 'w', encoding='utf-8') as file:
1565
+ file.write(text)
1566
+
1567
+ print(f"Text successfully saved to: {projpath}")
1568
+ except Exception as e:
1569
+ print(f"Failed to save text to file. Error: {e}")
1570
+
1571
+
1572
+ gr_btn_save_Text.click(full_text_save,gr_project_name_txt,None)
1573
+
1574
+ def add_new_item():
1575
+ add_item_auto("Beat","","")
1576
+ generate_combined_text_until_current()
1577
+
1578
+ gr_btn_addnew_scene.click(add_new_item,None,None).then(update_scenes_ui, None, gr_scenes_radio,show_progress=False).then(update_item_ui, None,[gr_itemname,gr_prompt,gr_generated_text,gr_generated_text_version, gr_prevtext, gr_notes],show_progress=False)
1579
+
1580
+ def change_prompt(text):
1581
+ global selected_item
1582
+ global selected_item_prompt
1583
+ selected_item_prompt = text
1584
+ update_prompt_by_outline(selected_item,selected_item_prompt)
1585
+
1586
+
1587
+ gr_prompt.input(change_prompt,gr_prompt,None)
1588
+
1589
+ def change_scenetext(text):
1590
+ global selected_item
1591
+ global selected_item_scenetext
1592
+ selected_item_scenetext = text
1593
+ update_scenetext_by_outline(selected_item,selected_item_scenetext)
1594
+ return generate_combined_text()
1595
+
1596
+ gr_generated_text.input(change_scenetext,gr_generated_text,gr_fulltext,show_progress=False)
1597
+
1598
+ def change_notes(text):
1599
+ global selected_item_notes
1600
+ global selected_item
1601
+ selected_item_notes = text
1602
+ update_notes_by_outline(selected_item,selected_item_notes)
1603
+
1604
+ gr_notes.input(change_notes,gr_notes,None,show_progress=False)
1605
+
1606
+ def change_title(text):
1607
+ global selected_item
1608
+ global selected_item_title
1609
+ update_item_title_by_outline(selected_item,text)
1610
+ selected_item_title = text
1611
+
1612
+
1613
+ gr_itemname.input(change_title,gr_itemname,None)
1614
+
1615
+ def moveitemup():
1616
+ global selected_item
1617
+ move_item_up(selected_item)
1618
+
1619
+ return gr.Radio.update(choices=get_scene_list(), value=selected_item), generate_combined_text(), generate_combined_text_until_current()
1620
+
1621
+ gr_itemUp.click(moveitemup,None,[gr_scenes_radio,gr_fulltext,gr_prevtext])
1622
+
1623
+
1624
+ def moveitemdown():
1625
+ global selected_item
1626
+ move_item_down(selected_item)
1627
+
1628
+ return gr.Radio.update(choices=get_scene_list(), value=selected_item), generate_combined_text(), generate_combined_text_until_current()
1629
+
1630
+ gr_itemDown.click(moveitemdown,None,[gr_scenes_radio,gr_fulltext,gr_prevtext])
1631
+
1632
+
1633
+ input_paramsA = [gr_prompt, gr_generated_text, shared.gradio['interface_state']]
1634
+ output_paramsA =[gr_generated_text,gr_fulltext]
1635
+
1636
+
1637
+ disable_struct = [gr_scenes_radio,gr_btn_addnew_scene,gr_itemUp,gr_itemDown,gr_btn_generate,gr_btn_generate_continue,gr_btn_generate_np,gr_btn_generate_continue_np]
1638
+
1639
+ def update_full_text_ui():
1640
+ global full_text_until
1641
+ return full_text_until
1642
+
1643
+ def disable_radio():
1644
+ return gr.Radio.update(interactive=False), gr.Button.update(interactive=False), gr.Button.update(interactive=False), gr.Button.update(interactive=False), gr.Button.update(interactive=False), gr.Button.update(interactive=False), gr.Button.update(interactive=False), gr.Button.update(interactive=False)
1645
+
1646
+ def enable_radio():
1647
+ return gr.Radio.update(interactive=True), gr.Button.update(interactive=True), gr.Button.update(interactive=True), gr.Button.update(interactive=True), gr.Button.update(interactive=True), gr.Button.update(interactive=True), gr.Button.update(interactive=True), gr.Button.update(interactive=True)
1648
+
1649
+ gr_btn_generate.click(main_ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then(disable_radio,None,disable_struct).then(
1650
+ generate_reply_wrapperMY, inputs=input_paramsA, outputs= output_paramsA, show_progress=False).then(enable_radio,None,disable_struct)
1651
+
1652
+ gr_btn_generate_np.click(main_ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then(disable_radio,None,disable_struct).then(
1653
+ generate_reply_wrapperMY_NP, inputs=input_paramsA, outputs= output_paramsA, show_progress=False).then(enable_radio,None,disable_struct)
1654
+
1655
+ gr_btn_generate_continue_np.click(main_ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then(disable_radio,None,disable_struct).then(
1656
+ partial(generate_reply_wrapperMY_NP, _continue=True), inputs=input_paramsA, outputs= output_paramsA, show_progress=False).then(enable_radio,None,disable_struct)
1657
+
1658
+ gr_btn_generate_continue.click(main_ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then(disable_radio,None,disable_struct).then(
1659
+ partial(generate_reply_wrapperMY, _continue=True), inputs=input_paramsA, outputs= output_paramsA, show_progress=False).then(enable_radio,None,disable_struct)
1660
+
1661
+ def stop_everything_eventMy():
1662
+ shared.stop_everything = True
1663
+
1664
+ gr_btn_stop.click(stop_everything_eventMy, None, None, queue=False)
1665
+ gr_btn_stop_np.click(stop_everything_eventMy, None, None, queue=False)
1666
+
1667
+ include_last_history.change(lambda x: params.update({"include_history_nr": x}), include_last_history,None)
1668
+ #gr_auto_clear.change(lambda x: params.update({"auto_clear": x}), gr_auto_clear, None)
1669
+ gr_include_history.change(lambda x: params.update({"include_history": x}), gr_include_history, None)
1670
+
1671
+ def swap_current():
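+ # Swap the current beat's prompt with the text of its selected version.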
1672
+ global selected_item
1673
+ global data_structure
1674
+ global selected_item_prompt
1675
+ global selected_item_scenetext
1676
+ global selected_scene_version
1677
+ global selected_item_notes
1678
+ global full_text
1679
+ for item in data_structure:
1680
+ if item["outline"] == selected_item:
1681
+ key = "scenetext_"+item["version"]
1682
+ prompt = item["prompt"]
1683
+ item["prompt"] = item[key]
1684
+ item[key] = prompt
1685
+ break
1686
+
1687
+ generate_combined_text()
1688
+ selected_item_prompt, selected_item_scenetext, selected_scene_version, selected_item_notes = get_data_by_outline(selected_item)
1689
+ return selected_item_prompt, selected_item_scenetext, full_text
1690
+
1691
+
1692
+ gr_tools_swap.click(swap_current,None,[gr_prompt,gr_generated_text,gr_fulltext])
state.json ADDED
@@ -0,0 +1,15 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "display_name": "StoryCrafter",
3
+ "is_tab": true,
4
+ "selectA": [
5
+ 0,
6
+ 0
7
+ ],
8
+ "projectname": "manual",
9
+ "auto_clear": true,
10
+ "include_history": true,
11
+ "include_history_nr": 5,
12
+ "lorebook": "Future Cues:\nFuture Cues are directives that apply to the beats that follow the current one, ensuring consistency and coherence as the story unfolds. They can be used to introduce significant changes, reminders, or any other detail that will be relevant to the story moving forward.",
13
+ "system": "You are experienced fiction writer. Develop the plot slowly. Describe all actions in full, elaborate and vivid detail.",
14
+ "world": "StoryCrafter is for writing and generating stories beat by beat (short passages of scenes, paragraphs). At each generation all the previously \n\nVersions\nEach beat can also have multiple versions and you can then choose which version to include in the final text.\n\nFuture Cues\nEach beat can also have Future Cues - they are for the text that will be generated after the current one, down the page. "
15
+ }