content (stringlengths 86–88.9k) | title (stringlengths 0–150) | question (stringlengths 1–35.8k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (stringlengths 30–130)
---|---|---|---|---|---|---|---|---|
Q:
how to use GitHub environment variable in Actions?
I added environment variables (NODE_ENV) in my 'dev' GitHub Environment.
How can I use it in my Action for my self-hosted runner on AWS?
For now, I tried it this way:
- name: start pm2 service
  env:
    NODE_ENV: ${{ secrets.NODE_ENV }}
  run: NODE_ENV=$NODE_ENV pm2 start ./bin/www --name 'backend'
But I can't get the env var on AWS; my app shows nothing.
A:
Try and use the official example:
steps:
  - shell: bash
    env:
      NODE_ENV: ${{ secrets.NODE_ENV }}
    run: |
      echo "NODE_ENV='$NODE_ENV'"
      pm2 start ./bin/www --name 'backend'
Since NODE_ENV is exported, you might not need to prefix pm2 with NODE_ENV=$NODE_ENV.
However, if the value is still empty, that would suggest that "External Configuration/Secret Sources" are not fully supported yet for an AWS App Runner, assuming one is used for the GitHub self-hosted runner execution.
That is despite the fact that App Runner has supported GitHub Actions since Nov. 2021.
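One more thing worth checking (an editor's addition, not part of the original answer): secrets and variables scoped to a GitHub Environment are only injected when the job itself declares that environment. A minimal sketch, assuming the environment is named 'dev' as in the question and a self-hosted runner:
jobs:
  deploy:
    runs-on: self-hosted
    environment: dev          # without this, 'dev'-scoped secrets resolve to empty
    steps:
      - shell: bash
        env:
          NODE_ENV: ${{ secrets.NODE_ENV }}
        run: |
          echo "NODE_ENV='$NODE_ENV'"
          pm2 start ./bin/www --name 'backend'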
| how to use GitHub environment variable in Actions? | I added environment variables (NODE_ENV) in my 'dev' GitHub Environment.
How can I use it in my Action for my self-hosted runner on AWS?
Now, I tried in this way:
- name: start pm2 service
env:
NODE_ENV: ${{ secrets.NODE_ENV }}
run: NODE_ENV=$NODE_ENV pm2 start ./bin/www --name 'backend'
But I can't get the env on the AWS, my app shows nothing.
| [
"Try and and use the official example\nsteps:\n - shell: bash\n env:\n NODE_ENV: ${{ secrets.NODE_ENV }}\n run: |\n echo \"NODE=ENV='$NODE_ENV'\"\n pm2 start ./bin/www --name 'backend'\n\nSince NODE_ENV is exported, you might not need to prefix pm2 with NODE_ENV=$NODE_ENV.\nHowever, since the value is still empty, that would confirm the \"External Configuration/Secret Sources\" is not fully supported yet for an AWS App Runner, assuming it is used for a Github self-hosted runner execution.\nThat differs from the fact App Runners support GitHub Actions since Nov. 2021.\n"
] | [
0
] | [] | [] | [
"amazon_ec2",
"github",
"github_actions",
"node.js"
] | stackoverflow_0074670548_amazon_ec2_github_github_actions_node.js.txt |
Q:
Reversed double linked list by python
Why can't I print this doubly linked list reversed in Python?
It always prints 6 or None.
Please, can anyone help me quickly so I can pass this task?
///////////////////////////////////////////////////////////////////////////
class Node:
    def __init__(self, data=None, next=None, prev=None):
        self.data = data
        self.next = next
        self.previous = prev
sample methods==>
    def set_data(self, newData):
        self.data = newData
    def get_data(self):
        return self.data
    def set_next(self, newNext):
        self.next = newNext
    def get_next(self):
        return self.next
    def hasNext(self):
        return self.next is not None
    def set_previous(self, newprev):
        self.previous = newprev
    def get_previous(self):
        return self.previous
    def hasPrevious(self):
        return self.previous is not None
class double===>
class DoubleLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None
    def addAtStart(self, item):
        newNode = Node(item)
        if self.head is None:
            self.head = self.tail = newNode
        else:
            newNode.set_next(self.head)
            newNode.set_previous(None)
            self.head.set_previous(newNode)
            self.head = newNode
    def size(self):
        current = self.head
        count = 0
        while current is not None:
            count += 1
            current = current.get_next()
        return count
here is the wrong method ==>
try to fix it without more changes
    def printReverse(self):
        current = self.head
        while current:
            temp = current.next
            current.next = current.previous
            current.previous = temp
            current = current.previous
        temp = self.head
        self.head = self.tail
        self.tail = temp
        print("Nodes of doubly linked list reversed: ")
        while current is not None:
            print(current.data),
            current = current.get_next()
call methods==>
new = DoubleLinkedList()
new.addAtStart(1)
new.addAtStart(2)
new.addAtStart(3)
new.printReverse()
A:
Your printReverse seems to do something other than what its name suggests. I would think that this function would just iterate over the list nodes in reversed order and print the values, but it actually reverses the list, and doesn't print the result because of a bug.
The error in your code is that the final loop has a condition that is guaranteed to be false. current is always None when it reaches that loop, so nothing gets printed there. This is easily fixed by initialising current just before the loop with:
current = self.head
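In context, the end of the method would then look like this (a sketch; everything else stays exactly as in the question):
        temp = self.head
        self.head = self.tail
        self.tail = temp
        print("Nodes of doubly linked list reversed: ")
        current = self.head  # the missing initialisation: restart from the (new) head
        while current is not None:
            print(current.data)
            current = current.get_next()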
That fixes your issue, but it is not nice to have a function that both reverses the list, and prints it. It is better practice to separate these two tasks. The method that reverses the list could be named reverse. Then add another method that allows iteration of the values in the list. This is done by defining __iter__. The caller can then easily print the list with that iterator.
Here is how that looks:
    def reverse(self):
        current = self.head
        while current:
            current.previous, current.next = current.next, current.previous
            current = current.previous
        self.head, self.tail = self.tail, self.head

    def __iter__(self):
        node = self.head
        while node:
            yield node.data
            node = node.next

    def __repr__(self):
        return "->".join(map(repr, self))
The main program can then be:
lst = DoubleLinkedList()
lst.addAtStart(1)
lst.addAtStart(2)
lst.addAtStart(3)
print(lst)
lst.reverse()
print(lst)
| Reversed double linked list by python | why can't print reversed this double linked list by python?
always print 6 or None
please can anyone help me fast to pass this task
///////////////////////////////////////////////////////////////////////////
class Node:
def __init__(self, data=None, next=None, prev=None):
self.data = data
self.next = next
self.previous = prev
sample methods==>
def set_data(self, newData):
self.data = newData
def get_data(self):
return self.data
def set_next(self, newNext):
self.next = newNext
def get_next(self):
return self.next
def hasNext(self):
return self.next is not None
def set_previous(self, newprev):
self.previous = newprev
def get_previous(self):
return self.previous
def hasPrevious(self):
return self.previous is not None
class double===>
class DoubleLinkedList:
def __init__(self):
self.head = None
self.tail = None
def addAtStart(self, item):
newNode = Node(item)
if self.head is None:
self.head = self.tail = newNode
else:
newNode.set_next(self.head)
newNode.set_previous(None)
self.head.set_previous(newNode)
self.head = newNode
def size(self):
current = self.head
count = 0
while current is not None:
count += 1
current = current.get_next()
return count
here is the wrong method ==>
try to fix it without more changes
def printReverse(self):
current = self.head
while current:
temp = current.next
current.next = current.previous
current.previous = temp
current = current.previous
temp = self.head
self.head = self.tail
self.tail = temp
print("Nodes of doubly linked list reversed: ")
while current is not None:
print(current.data),
current = current.get_next()
call methods==>
new = DoubleLinkedList()
new.addAtStart(1)
new.addAtStart(2)
new.addAtStart(3)
new.printReverse()
| [
"Your printReverse seems to do something else than what its name suggests. I would think that this function would just iterate the list nodes in reversed order and print the values, but it actually reverses the list, and doesn't print the result because of a bug.\nThe error in your code is that the final loop has a condition that is guaranteed to be false. current is always None when it reaches that loop, so nothing gets printed there. This is easily fixed by initialising current just before the loop with:\n current = self.head\n\nThat fixes your issue, but it is not nice to have a function that both reverses the list, and prints it. It is better practice to separate these two tasks. The method that reverses the list could be named reverse. Then add another method that allows iteration of the values in the list. This is done by defining __iter__. The caller can then easily print the list with that iterator.\nHere is how that looks:\n def reverse(self):\n current = self.head\n while current:\n current.previous, current.next = current.next, current.previous\n current = current.previous\n self.head, self.tail = self.tail, self.head\n\n def __iter__(self):\n node = self.head\n while node:\n yield node.data\n node = node.next\n\n def __repr__(self):\n return \"->\".join(map(repr, self))\n\nThe main program can then be:\nlst = DoubleLinkedList()\nlst.addAtStart(1)\nlst.addAtStart(2)\nlst.addAtStart(3)\nprint(lst)\nlst.reverse()\nprint(lst)\n\n"
] | [
1
] | [] | [] | [
"linked_list",
"python"
] | stackoverflow_0074670265_linked_list_python.txt |
Q:
How to change from user input of adjacency matrix into hard coding it into the program?
try
{
System.out.println("Enter the number of vertices");
number_of_vertices = scan.nextInt();
adjacency_matrix = new int[number_of_vertices + 1][number_of_vertices + 1];
System.out.println("Enter the Weighted Matrix for the graph");
for (int i = 1; i <= number_of_vertices; i++)
{
for (int j = 1; j <= number_of_vertices; j++)
{
adjacency_matrix[i][j] = scan.nextInt();
if (i == j)
{
adjacency_matrix[i][j] = 0;
continue;
}
if (adjacency_matrix[i][j] == 0)
{
adjacency_matrix[i][j] = Integer.MAX_VALUE;
}
}
}
This is what the user input looks like:
Enter the number of vertices
5
Enter the Weighted Matrix for the graph
0 9 6 5 3
0 0 0 0 0
0 2 0 4 0
0 0 0 0 0
0 0 0 0 0
How would I be able to hard code the matrix into the program rather than ask the user to enter the matrix?
A:
Assuming (and I suppose the main reason for the downvotes is that this wasn't shown):
[static] Scanner scan = new Scanner(System.in);
You just need to pass a different constructor argument to new Scanner(...).
E.g.:
Scanner scan = new Scanner(someSource);
Refs (Java 20): https://download.java.net/java/early_access/loom/docs/api/java.base/java/util/Scanner.html
Where someSource can be one of (JDK 19):
java.lang.String (simply the complete input as one string)
java.lang.Readable (since 1.5)
java.io.InputStream
java.io.File
java.nio.file.Path
java.nio.channels.ReadableByteChannel
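Applied to the question, a minimal sketch (my example, using the sample input from the question): keep the reading loop exactly as posted and construct the Scanner from a String containing the same text the user would have typed:
Scanner scan = new Scanner(
        "5\n" +
        "0 9 6 5 3\n" +
        "0 0 0 0 0\n" +
        "0 2 0 4 0\n" +
        "0 0 0 0 0\n" +
        "0 0 0 0 0\n");
Alternatively, you could drop the Scanner entirely and initialise adjacency_matrix from an int[][] literal, but the String-backed Scanner keeps the rest of the posted code untouched.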
| How to change from user input of adjacency matrix into hard coding it into the program? | try
{
System.out.println("Enter the number of vertices");
number_of_vertices = scan.nextInt();
adjacency_matrix = new int[number_of_vertices + 1][number_of_vertices + 1];
System.out.println("Enter the Weighted Matrix for the graph");
for (int i = 1; i <= number_of_vertices; i++)
{
for (int j = 1; j <= number_of_vertices; j++)
{
adjacency_matrix[i][j] = scan.nextInt();
if (i == j)
{
adjacency_matrix[i][j] = 0;
continue;
}
if (adjacency_matrix[i][j] == 0)
{
adjacency_matrix[i][j] = Integer.MAX_VALUE;
}
}
}
This is what the user input looks like:
Enter the number of vertices
5
Enter the Weighted Matrix for the graph
0 9 6 5 3
0 0 0 0 0
0 2 0 4 0
0 0 0 0 0
0 0 0 0 0
How would I be able to hard code the matrix into the program rather than ask the user to enter the matrix?
| [
"Assuming (i suppose main reason for down votes: not showing that):\n[static] Scanner scan = new Scanner(System.in);\n\nYou just need to \"parametrize\" the/any \"constructor argument\" of new Scanner(...)...\nE.g, like:\nScanner scan = new Scanner(someInputStreamOrWriterorPrintStream);\n\nRefs (java 20): https://download.java.net/java/early_access/loom/docs/api/java.base/java/util/Scanner.html\nWhere someInputStreamOrWriterorPrintStream can be one of (jdk19):\n\njava.lang.String (! simply the (complete) input ...)\njava.lang.Readable (since 1.5!?, WTH!! ..this would apply to System.out)\njava.io.InputStream\njava.io.File\njava.nio.file.Path\njava.nio.channels.ReadableByteChannel#\n\n"
] | [
0
] | [] | [] | [
"adjacency_matrix",
"dijkstra",
"java",
"java.util.scanner"
] | stackoverflow_0074670961_adjacency_matrix_dijkstra_java_java.util.scanner.txt |
Q:
Stuck on Python "KeyError: " in BFS code of a water jug scenario
Intended function of the code: take a user input for the volume of 3 jars (1-9) and output the volumes with one of the jars containing the target amount. Jars can be emptied, filled, or poured from one jar to another until one is empty or full.
With the code I have, I'm stuck on a KeyError exception.
The target amount is 4 for this case.
Code:
`
class Graph:
    class GraphNode:
        def __init__(self, jar1 = 0, jar2 = 0, jar3 = 0, color = "white", pi = None):
            self.jar1 = jar1
            self.jar2 = jar2
            self.jar3 = jar3
            self.color = color
            self.pi = pi
        def __repr__(self):
            return str(self)
    def __init__(self, jl1 = 0, jl2 = 0, jl3 = 0, target = 0):
        self.jl1 = jl1
        self.jl2 = jl2
        self.jl3 = jl3
        self.target = target
        self.V = {}
        for x in range(jl1 + 1):
            for y in range(jl2 + 1):
                for z in range(jl3 + 1):
                    node = Graph.GraphNode(x, y, z, "white", None)
                    self.V[node] = None
    def isFound(self, a: GraphNode) -> bool:
        if self.target in [a.jar1, a.jar2, a.jar3]:
            return True
        return False
        pass
    def isAdjacent(self, a: GraphNode, b: GraphNode) -> bool:
        if self.V[a]==b:
            return True
        return False
        pass
    def BFS(self) -> [] :
        start = Graph.GraphNode(0, 0, 0, "white")
        queue=[]
        queue.append(start)
        while len(queue)>0:
            u=queue.pop(0)
            for v in self.V:
                if self.isAdjacent(u,v):
                    if v.color =="white":
                        v.color == "gray"
                        v.pi=u
                        if self.isFound(v):
                            output=[]
                            while v.pi is not None:
                                output.insert(0,v)
                                v=v.pi
                            return output
                        else:
                            queue.append(v)
            u.color="black"
        return []
#######################################################
j1 = input("Size of first jar: ")
j2 = input("Size of second jar: ")
j3 = input("Size of third jar: ")
t = input("Size of target: ")
jar1 = int(j1)
jar2 = int(j2)
jar3 = int(j3)
target = int(t)
graph1 = Graph(jar1, jar2, jar3, target)
output = graph1.BFS()
print(output)
`
**Error: **
line 37, in isAdjacent
if self.V[a]==b:
KeyError: <exception str() failed>
A:
Strange but when I first ran this in the IPython interpreter I got a different exception:
... :35, in Graph.isAdjacent(self, a, b)
34 def isAdjacent(self, a: GraphNode, b: GraphNode) -> bool:
---> 35 if self.V[a]==b:
36 return True
37 return False
<class 'str'>: (<class 'RecursionError'>, RecursionError('maximum recursion depth exceeded while getting the str of an object'))
When I run it as a script or in the normal interpreter I do get the same one you had:
... line 35, in isAdjacent
if self.V[a]==b:
KeyError: <exception str() failed>
I'm not sure what this means so I ran the debugger and got this:
File "/Users/.../stackoverflow/bfs1.py", line 1, in <module>
class Graph:
File "/Users/.../stackoverflow/bfs1.py", line 47, in BFS
if self.isAdjacent(u,v):
File "/Users/.../stackoverflow/bfs1.py", line 35, in isAdjacent
if self.V[a]==b:
KeyError: <unprintable KeyError object>
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /Users/.../stackoverflow/bfs1.py(35)isAdjacent()
-> if self.V[a]==b:
(Pdb) type(a)
<class '__main__.Graph.GraphNode'>
(Pdb) str(a)
*** RecursionError: maximum recursion depth exceeded while calling a Python object
So it does seem like a maximum recursion error. (The error message you originally got is not very helpful). But the words <unprintable KeyError object> are a clue. It looks like it was not able to display the KeyError exception...
The culprit is this line in your class definition:
def __repr__(self):
return str(self)
What were you trying to do here?
The __repr__ function is called when the class is asked to produce a string representation of itself. But yours calls the string function on the instance of the class so it will call itself! So I think you actually generated a second exception while the debugger was trying to display the first!!!.
I replaced these lines with
def __repr__(self):
return f"GraphNode({self.jar1}, {self.jar2}, {self.jar3}, {self.color}, {self.pi})"
and I don't get the exception now:
Size of first jar: 1
Size of second jar: 3
Size of third jar: 6
Size of target: 4
Traceback (most recent call last):
File "/Users/.../stackoverflow/bfs1.py", line 77, in <module>
output = graph1.BFS()
File "/Users/.../stackoverflow/bfs1.py", line 45, in BFS
if self.isAdjacent(u,v):
File "/Users/.../stackoverflow/bfs1.py", line 33, in isAdjacent
if self.V[a]==b:
KeyError: GraphNode(0, 0, 0, white, None)
This exception is easier to interpret. Now it's over to you to figure out why this GraphNode was not found in the keys of self.V!
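A hint for that last step (editor's note, not part of the original answer): GraphNode does not define __eq__ or __hash__, so dict keys are matched by object identity, and the fresh GraphNode(0, 0, 0, "white") built in BFS can never be found among the key objects created in __init__. A minimal sketch of the two methods GraphNode would need for value-based lookup:
    def __eq__(self, other):
        return (self.jar1, self.jar2, self.jar3) == (other.jar1, other.jar2, other.jar3)

    def __hash__(self):
        return hash((self.jar1, self.jar2, self.jar3))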
| Stuck on Python "KeyError: " in BFS code of a water jug scenario | Intended Function of code: Takes a user input for the volume of 3 jars(1-9) and output the volumes with one of the jars containing the target length. jars can be Emptied/Filled a jar, or poured from one jar to another until one is empty or full.
With the code I have, i'm stuck on a key exception error .
Target length is 4 for this case
Code:
`
class Graph:
class GraphNode:
def __init__(self, jar1 = 0, jar2 = 0, jar3 = 0, color = "white", pi = None):
self.jar1 = jar1
self.jar2 = jar2
self.jar3 = jar3
self.color = color
self.pi = pi
def __repr__(self):
return str(self)
def __init__(self, jl1 = 0, jl2 = 0, jl3 = 0, target = 0):
self.jl1 = jl1
self.jl2 = jl2
self.jl3 = jl3
self.target = target
self.V = {}
for x in range(jl1 + 1):
for y in range(jl2 + 1):
for z in range(jl3 + 1):
node = Graph.GraphNode(x, y, z, "white", None)
self.V[node] = None
def isFound(self, a: GraphNode) -> bool:
if self.target in [a.jar1, a.jar2, a.jar3]:
return True
return False
pass
def isAdjacent(self, a: GraphNode, b: GraphNode) -> bool:
if self.V[a]==b:
return True
return False
pass
def BFS(self) -> [] :
start = Graph.GraphNode(0, 0, 0, "white")
queue=[]
queue.append(start)
while len(queue)>0:
u=queue.pop(0)
for v in self.V:
if self.isAdjacent(u,v):
if v.color =="white":
v.color == "gray"
v.pi=u
if self.isFound(v):
output=[]
while v.pi is not None:
output.insert(0,v)
v=v.pi
return output
else:
queue.append(v)
u.color="black"
return []
#######################################################
j1 = input("Size of first jar: ")
j2 = input("Size of second jar: ")
j3 = input("Size of third jar: ")
t = input("Size of target: ")
jar1 = int(j1)
jar2 = int(j2)
jar3 = int(j3)
target = int(t)
graph1 = Graph(jar1, jar2, jar3, target)
output = graph1.BFS()
print(output)
`
**Error: **
line 37, in isAdjacent
if self.V[a]==b:
KeyError: <exception str() failed>
| [
"Strange but when I first ran this in the IPython interpreter I got a different exception:\n... :35, in Graph.isAdjacent(self, a, b)\n 34 def isAdjacent(self, a: GraphNode, b: GraphNode) -> bool:\n---> 35 if self.V[a]==b:\n 36 return True\n 37 return False\n\n<class 'str'>: (<class 'RecursionError'>, RecursionError('maximum recursion depth exceeded while getting the str of an object'))\n\nWhen I run it as a script or in the normal interpreter I do get the same one you had:\n... line 35, in isAdjacent\n if self.V[a]==b:\nKeyError: <exception str() failed>\n\nI'm not sure what this means so I ran the debugger and got this:\n File \"/Users/.../stackoverflow/bfs1.py\", line 1, in <module>\n class Graph:\n File \"/Users/.../stackoverflow/bfs1.py\", line 47, in BFS\n if self.isAdjacent(u,v):\n File \"/Users/.../stackoverflow/bfs1.py\", line 35, in isAdjacent\n if self.V[a]==b:\nKeyError: <unprintable KeyError object>\nUncaught exception. Entering post mortem debugging\nRunning 'cont' or 'step' will restart the program\n> /Users/.../stackoverflow/bfs1.py(35)isAdjacent()\n-> if self.V[a]==b:\n(Pdb) type(a)\n<class '__main__.Graph.GraphNode'>\n(Pdb) str(a)\n*** RecursionError: maximum recursion depth exceeded while calling a Python object\n\nSo it does seem like a maximum recursion error. (The error message you originally got is not very helpful). But the words <unprintable KeyError object> are a clue. It looks like it was not able to display the KeyError exception...\nThe culprit is this line in your class definition:\n def __repr__(self):\n return str(self)\n\nWhat were you trying to do here?\nThe __repr__ function is called when the class is asked to produce a string representation of itself. But yours calls the string function on the instance of the class so it will call itself! So I think you actually generated a second exception while the debugger was trying to display the first!!!.\nI replaced these lines with\n def __repr__(self):\n return f\"GraphNode({self.jar1}, {self.jar2}, {self.jar3}, {self.color}, {self.pi})\"\n\nand I don't get the exception now:\nSize of first jar: 1\nSize of second jar: 3\nSize of third jar: 6\nSize of target: 4\nTraceback (most recent call last):\n File \"/Users/.../stackoverflow/bfs1.py\", line 77, in <module>\n output = graph1.BFS()\n File \"/Users/.../stackoverflow/bfs1.py\", line 45, in BFS\n if self.isAdjacent(u,v):\n File \"/Users/.../stackoverflow/bfs1.py\", line 33, in isAdjacent\n if self.V[a]==b:\nKeyError: GraphNode(0, 0, 0, white, None)\n\nThis exception is easier to interpret. Now it's over to you to figure out why this GraphNode was not found in the keys of self.V!\n"
] | [
1
] | [] | [] | [
"breadth_first_search",
"graph_traversal",
"python"
] | stackoverflow_0074664111_breadth_first_search_graph_traversal_python.txt |
Q:
Java and the Trello API?
How can I create a module to generate a timetable in Java using the Trello API?
I'm a student, and I have this project as homework.
I've researched and can't find any tutorials on this subject.
I expect to get some steps to follow or some resources.
A:
To create a module to generate a timetable in Java using the Trello API, you will need to do the following:
Make sure you have a Java development environment set up on your computer. You will need a Java Development Kit (JDK) and a text editor or an Integrated Development Environment (IDE) to write and run your Java code.
Install the Trello API Java client library. The Trello API provides a Java client library that makes it easy to integrate with the Trello API from a Java application. You can find the library and installation instructions on the Trello API documentation website.
Authenticate with the Trello API. Before you can access the Trello API, you need to authenticate your Java application with the API. You will need to create a Trello API key and token, and use them to authenticate your application. You can find instructions on how to do this in the Trello API documentation.
Write the Java code to generate a timetable. Once you have set up your development environment and authenticated with the Trello API, you can start writing the code to generate a timetable. The Trello API provides a number of different methods for accessing and manipulating data in Trello, such as creating and updating cards, boards, and lists. You will need to use these methods in your code to generate the timetable.
Test and debug your code. Once you have written your code, you will need to test it to make sure it works correctly. You can use the Trello API documentation and the Java client library documentation to help you understand how the API works and how to use it in your code. If you encounter any errors or problems, you can use a debugging tool or a Java debugger to help you find and fix the issues.
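To make step 4 a bit more concrete, here is a minimal sketch (editor's addition; it uses Trello's standard REST route for listing your boards, and the key/token values are placeholders you create in Trello) that calls the REST API directly with java.net.http instead of a client library:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TrelloBoards {
    public static void main(String[] args) throws Exception {
        String key = "YOUR_API_KEY";      // placeholder
        String token = "YOUR_API_TOKEN";  // placeholder
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.trello.com/1/members/me/boards?key=" + key + "&token=" + token))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON array of your boards
    }
}
From the returned JSON you can pick a board and then create the lists and cards that make up your timetable.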
| Java and the Trello API? | How can I create a module to generate a timetable in Java using the Trello API?
I'm a student, and I have this project as homework.
I've researched and can't find any tutorials on this subject.
I expect getting some steps to follow or some resources.
| [
"To create a module to generate a timetable in Java using the Trello API, you will need to do the following:\n\nMake sure you have a Java development environment set up on your computer. You will need a Java Development Kit (JDK) and a text editor or an Integrated Development Environment (IDE) to write and run your Java code.\n\nInstall the Trello API Java client library. The Trello API provides a Java client library that makes it easy to integrate with the Trello API from a Java application. You can find the library and installation instructions on the Trello API documentation website.\n\nAuthenticate with the Trello API. Before you can access the Trello API, you need to authenticate your Java application with the API. You will need to create a Trello API key and token, and use them to authenticate your application. You can find instructions on how to do this in the Trello API documentation.\n\nWrite the Java code to generate a timetable. Once you have set up your development environment and authenticated with the Trello API, you can start writing the code to generate a timetable. The Trello API provides a number of different methods for accessing and manipulating data in Trello, such as creating and updating cards, boards, and lists. You will need to use these methods in your code to generate the timetable.\n\nTest and debug your code. Once you have written your code, you will need to test it to make sure it works correctly. You can use the Trello API documentation and the Java client library documentation to help you understand how the API works and how to use it in your code. If you encounter any errors or problems, you can use a debugging tool or a Java debugger to help you find and fix the issues.\n\n\n"
] | [
0
] | [] | [] | [
"api",
"java",
"trello"
] | stackoverflow_0074670349_api_java_trello.txt |
Q:
RabbitMq consume async
I need an async consumer method to consume messages from RabbitMQ. My problem is that the RabbitMQ client for .NET relies on an event handler.
I tried to implement a blocking system with a Semaphore, which works at low volume. When the volume increases, some messages are lost.
Here is my implementation:
private long _lock;
private string _message;
private object _tag;
private readonly SemaphoreSlim _signal;
public void Configure()
{
Interlocked.Exchange(ref _lock, 0);
var consumer = new EventingBasicConsumer(_channel);
consumer.Received += (sender, ea) =>
{
_message = Encoding.UTF8.GetString(ea.Body.ToArray());
_tag = ea.DeliveryTag;
Interlocked.Exchange(ref _lock, 1);
_signal.Release();
};
_channel.BasicConsume(queue: _rabbitConfig.Queue, autoAck: true, consumer: consumer);
}
public async Task<string> Consume(CancellationToken cancellationToken)
{
while (0 == Interlocked.Read(ref _lock))
{
await _signal.WaitAsync();
}
Interlocked.Exchange(ref _lock, 0);
return _message;
}
I also tried using BufferBlock but some messages are still lost.
Is there any other way to implement this while keeping my Consume() method?
A:
The issue with your code is that you are using a shared variable (_message) to store the received message, and you are not protecting this variable with a lock or other synchronization mechanism. This means that if multiple threads are calling the Consume() method at the same time, they could potentially access and modify the _message variable concurrently, leading to data races and possible data loss.
To fix this issue, you can use a lock statement to protect the shared variable and ensure that only one thread can access it at a time. This will prevent multiple threads from accessing and modifying the variable concurrently, and will ensure that the messages are consumed correctly.
Here is an example of how you could modify your code to use a lock statement to protect the shared variable:
private long _lock;
private string _message;
private object _tag;
private readonly SemaphoreSlim _signal;
private readonly object _syncRoot = new object();
public void Configure()
{
Interlocked.Exchange(ref _lock, 0);
var consumer = new EventingBasicConsumer(_channel);
consumer.Received += (sender, ea) =>
{
lock (_syncRoot)
{
_message = Encoding.UTF8.GetString(ea.Body.ToArray());
_tag = ea.DeliveryTag;
}
Interlocked.Exchange(ref _lock, 1);
_signal.Release();
};
_channel.BasicConsume(queue: _rabbitConfig.Queue, autoAck: true, consumer: consumer);
}
A:
You're trying to create an asynchronous consumer for RabbitMQ using the .NET RabbitMQ client, but are running into issues with lost messages when the volume of messages increases. There are a few different ways you could approach this problem, but one option might be to use the BasicGet method of the RabbitMQ client instead of the BasicConsume method.
The BasicConsume method allows you to register an event handler for incoming messages, but it doesn't provide any built-in mechanism for ensuring that all messages are processed. In contrast, the BasicGet method allows you to retrieve a single message from a queue and process it without using an event handler. This means that you can use a while loop to repeatedly call BasicGet and process the messages one at a time, using a cancellation token to stop the loop when necessary.
Here's an example of how you might implement this using the BasicGet method:
public async Task<string> Consume(CancellationToken cancellationToken)
{
while (!cancellationToken.IsCancellationRequested)
{
var result = _channel.BasicGet(_rabbitConfig.Queue, autoAck: true);
if (result == null)
{
// No message was available in the queue, so we can wait a bit before checking again.
await Task.Delay(TimeSpan.FromMilliseconds(100));
}
else
{
var message = Encoding.UTF8.GetString(result.Body.ToArray());
return message;
}
}
return null;
}
This approach should ensure that all messages are processed and should be more efficient than using an event handler and a SemaphoreSlim. Of course, there are many other ways to implement an asynchronous consumer for RabbitMQ, so you may want to experiment with different approaches to find one that works best for your specific use case.
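A third pattern worth mentioning (editor's sketch, not from either answer): keep the event-based consumer, but push every delivery into a System.Threading.Channels.Channel so nothing can be overwritten, while Consume() stays awaitable. Field and variable names are borrowed from the question:
// using System.Threading.Channels;
private readonly Channel<string> _messages = Channel.CreateUnbounded<string>();

public void Configure()
{
    var consumer = new EventingBasicConsumer(_channel);
    consumer.Received += (sender, ea) =>
    {
        var message = Encoding.UTF8.GetString(ea.Body.ToArray());
        _messages.Writer.TryWrite(message); // each message is queued, never overwritten
    };
    _channel.BasicConsume(queue: _rabbitConfig.Queue, autoAck: true, consumer: consumer);
}

public async Task<string> Consume(CancellationToken cancellationToken)
{
    // Completes as soon as a message is available, or throws if the token is cancelled.
    return await _messages.Reader.ReadAsync(cancellationToken);
}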
| RabbitMq consume aysnc | I need to have an async consumer method to consume messages from RabbitMq. My problem is that the rabbitmq client for .net rely on an event handler.
I tried to implement a blocking system with a Semaphore, which is working with a low volume. When I'm getting more volume, some messages are lost.
Here is my implementation :
private long _lock;
private string _message;
private object _tag;
private readonly SemaphoreSlim _signal;
public void Configure()
{
Interlocked.Exchange(ref _lock, 0);
var consumer = new EventingBasicConsumer(_channel);
consumer.Received += (sender, ea) =>
{
_message = Encoding.UTF8.GetString(ea.Body.ToArray());
_tag = ea.DeliveryTag;
Interlocked.Exchange(ref _lock, 1);
_signal.Release();
};
_channel.BasicConsume(queue: _rabbitConfig.Queue, autoAck: true, consumer: consumer);
}
public async Task<string> Consume(CancellationToken cancellationToken)
{
while (0 == Interlocked.Read(ref _lock))
{
await _signal.WaitAsync();
}
Interlocked.Exchange(ref _lock, 0);
return _message;
}
I alse tried using BufferBlock but some messages are still lost.
Is there any other way to implement a system keeping my Consume() method ?
| [
"The issue with your code is that you are using a shared variable (_message) to store the received message, and you are not protecting this variable with a lock or other synchronization mechanism. This means that if multiple threads are calling the Consume() method at the same time, they could potentially access and modify the _message variable concurrently, leading to data races and possible data loss.\nTo fix this issue, you can use a lock statement to protect the shared variable and ensure that only one thread can access it at a time. This will prevent multiple threads from accessing and modifying the variable concurrently, and will ensure that the messages are consumed correctly.\nHere is an example of how you could modify your code to use a lock statement to protect the shared variable:\nprivate long _lock;\nprivate string _message;\nprivate object _tag;\nprivate readonly SemaphoreSlim _signal;\nprivate readonly object _syncRoot = new object();\n\npublic void Configure()\n{\n Interlocked.Exchange(ref _lock, 0);\n\n var consumer = new EventingBasicConsumer(_channel);\n consumer.Received += (sender, ea) =>\n {\n lock (_syncRoot)\n {\n _message = Encoding.UTF8.GetString(ea.Body.ToArray());\n _tag = ea.DeliveryTag;\n }\n Interlocked.Exchange(ref _lock, 1);\n _signal.Release();\n };\n\n _channel.BasicConsume(\n\n",
"You're trying to create an asynchronous consumer for RabbitMQ using the .NET RabbitMQ client, but are running into issues with lost messages when the volume of messages increases. There are a few different ways you could approach this problem, but one option might be to use the BasicGet method of the RabbitMQ client instead of the BasicConsume method.\nThe BasicConsume method allows you to register an event handler for incoming messages, but it doesn't provide any built-in mechanism for ensuring that all messages are processed. In contrast, the BasicGet method allows you to retrieve a single message from a queue and process it without using an event handler. This means that you can use a while loop to repeatedly call BasicGet and process the messages one at a time, using a cancellation token to stop the loop when necessary.\nHere's an example of how you might implement this using the BasicGet method:\npublic async Task<string> Consume(CancellationToken cancellationToken)\n{\n while (!cancellationToken.IsCancellationRequested)\n {\n var result = _channel.BasicGet(_rabbitConfig.Queue, autoAck: true);\n if (result == null)\n {\n // No message was available in the queue, so we can wait a bit before checking again.\n await Task.Delay(TimeSpan.FromMilliseconds(100));\n }\n else\n {\n var message = Encoding.UTF8.GetString(result.Body.ToArray());\n return message;\n }\n }\n\n return null;\n}\n\nThis approach should ensure that all messages are processed and should be more efficient than using an event handler and a SemaphoreSlim. Of course, there are many other ways to implement an asynchronous consumer for RabbitMQ, so you may want to experiment with different approaches to find one that works best for your specific use case.\n"
] | [
0,
0
] | [] | [] | [
".net",
".net_core",
"c#",
"event_handling",
"rabbitmq"
] | stackoverflow_0074667871_.net_.net_core_c#_event_handling_rabbitmq.txt |
Q:
request.body possibly null - Trying to make my first Sveltekit api
Hi everyone!
I am trying to create my first API route with SvelteKit and I keep running into an issue on line 9 that says "request.body is possibly null", and it will not allow me to pull the id from the request.
Any help you can provide for me to understand what I am doing wrong would be greatly appreciated!
A:
Typescript's type checking means that it knows there isn't a guarantee that the body field on the request object will have a value, which is why it's showing this message. What you will need to do is to add a guard - a bit of code that checks whether request.body has a value and if not, does some kind of error handling or alternate processing.
Looking at your code, without an id you probably won't be wanting to do much so the guard can just throw an error, preferably with a useful message like "Invalid request: Need to supply 'id' field in request body".
The below snippet gives an example of a guard clause. You might need to also verify that .get("id") isn't undefined.
const data = request.body;
if (!data) {
    throw new Error("Invalid request: Need to supply 'id' field in request body");
}
const id = data.get("id");
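And, continuing the sketch, a hypothetical check on the id itself (the exact type data.get returns depends on how the body is parsed):
if (typeof id !== "string") {
    throw new Error("Invalid request: 'id' must be a string");
}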
| request.body possibly null - Trying to make my first Sveltekit api |
Hi everyone!
I am trying to create my first api route with Sveltekit and I keep running into the issue on line 9 that says "request.body is possibly null" and it will not allow me to pull the id from the request.
Any help you can provide for me to understand what I am doing wrong would be greatly appreciated!
| [
"Typescript's type checking means that it knows there isn't a guarantee that the body field on the request object will have a value, which is why it's showing this message. What you will need to do is to add a guard - a bit of code that checks whether request.body has a value and if not, does some kind of error handling or alternate processing.\nLooking at your code, without an id you probably won't be wanting to do much so the guard can just throw an error, preferably with a useful message like \"Invalid request: Need to supply 'id' field in request body\".\nThe below snippet gives an example of a guard clause. You might need to also verify that .get(\"id\") isn't undefined.\nconst data = request.body;\nif (data === undefined) {\n throw new Error(\"Invalid request: Need to supply 'id' field in request body\");\n}\nconst id = data.get(\"id\")\n\n"
] | [
2
] | [] | [] | [
"prisma",
"sveltekit",
"typescript"
] | stackoverflow_0074670992_prisma_sveltekit_typescript.txt |
Q:
Jest - How to mock aws-sdk sqs.receiveMessage methode
I am trying to mock the sqs.receiveMessage function, which is imported from aws-sdk.
Here is my code(sqsHelper.js):
const AWS = require("aws-sdk");
export default class SqsHelper {
static SqsGetMessagesTest = () => {
const sqs = new AWS.SQS({
apiVersion: serviceConfig.sqs.api_version,
region: serviceConfig.sqs.region,
});
const queueURL =
"https://sqs.us-west-2.amazonaws.com/<1234>/<4567>";
const params = {
AttributeNames: ["SentTimestamp"],
MaxNumberOfMessages: 10,
MessageAttributeNames: ["All"],
QueueUrl: queueURL,
VisibilityTimeout: 20,
WaitTimeSeconds: 20,
};
return new Promise((resolve, reject) => {
sqs.receiveMessage(params, async (recErr, recData) => {
if (recErr) {
reject(recErr);
} else if (recData.Messages) {
console.info(`Message count: ${recData.Messages.length}`);
resolve(recData.Messages);
}
});
});
};
}
And here is the test file(sqsHelper.test.js):
import SqsHelper from "../../helpers/sqsHelper.js";
import { SQS } from "aws-sdk";
const dumyData = { Messages: [{ name: "123", lastName: "456" }] };
const sqs = new SQS();
describe("Test SQS helper", () => {
test("Recieve message", async () => {
jest.spyOn(sqs, 'receiveMessage').mockReturnValue(dumyData);
// check 1
const res1 = await sqs.receiveMessage();
console.log(`res: ${JSON.stringify(res1, null, 2)}`)
expect(res1).toEqual(dumyData);
// check 2
const res2 = await SqsHelper.SqsGetMessagesTest();
console.log(`res2: ${JSON.stringify(res2, null, 2)}`);
expect(res2).toBe(dumyData);
});
});
The problem is that on the first check (where I call the function directly from the test file) I can see that receiveMessage has been mocked and the result is as expected.
But on the second check (where the function is called from the second module, "sqsHelper.js") it looks like the mock doesn't work: the original receiveMessage is called and it still asks me for credentials.
This is the error:
InvalidClientTokenId: The security token included in the request is
invalid.
What am I doing wrong?
Thanks
A:
receiveMessage should trigger the callback that is passed in with the params; it does not return a Promise. Also note that SqsGetMessagesTest constructs its own new AWS.SQS(...) instance, so jest.spyOn on a separate instance created in the test file never affects it — you need to mock the module itself.
Try something like this:
const dummyData = { Messages: [{ name: "123", lastName: "456" }] };
const mockReceiveMessage = jest.fn().mockImplementation((params, callback) => callback("", dummyData));
jest.mock("aws-sdk", () => {
const originalModule = jest.requireActual("aws-sdk");
return {
...originalModule,
SQS: function() { // needs to be function as it will be used as constructor
return {
receiveMessage: mockReceiveMessage
}
}
};
})
describe("Test SQS helper", () => {
test("Recieve message", async () => {
const res = await SqsHelper.SqsGetMessagesTest();
expect(res).toBe(dummyData.Messages);
});
test("Error response", async () => {
mockReceiveMessage.mockImplementation((params, callback) => callback("some error"));
await expect(SqsHelper.SqsGetMessagesTest()).rejects.toEqual("some error");
});
});
| Jest - How to mock aws-sdk sqs.receiveMessage methode | I try mocking sqs.receiveMessage function which imported from aws-sdk.
Here is my code(sqsHelper.js):
const AWS = require("aws-sdk");
export default class SqsHelper {
static SqsGetMessagesTest = () => {
const sqs = new AWS.SQS({
apiVersion: serviceConfig.sqs.api_version,
region: serviceConfig.sqs.region,
});
const queueURL =
"https://sqs.us-west-2.amazonaws.com/<1234>/<4567>";
const params = {
AttributeNames: ["SentTimestamp"],
MaxNumberOfMessages: 10,
MessageAttributeNames: ["All"],
QueueUrl: queueURL,
VisibilityTimeout: 20,
WaitTimeSeconds: 20,
};
return new Promise((resolve, reject) => {
sqs.receiveMessage(params, async (recErr, recData) => {
if (recErr) {
reject(recErr);
} else if (recData.Messages) {
console.info(`Message count: ${recData.Messages.length}`);
resolve(recData.Messages);
}
});
});
};
}
And here is the test file(sqsHelper.test.js):
import SqsHelper from "../../helpers/sqsHelper.js";
import { SQS } from "aws-sdk";
const dumyData = { Messages: [{ name: "123", lastName: "456" }] };
const sqs = new SQS();
describe("Test SQS helper", () => {
test("Recieve message", async () => {
jest.spyOn(sqs, 'receiveMessage').mockReturnValue(dumyData);
// check 1
const res1 = await sqs.receiveMessage();
console.log(`res: ${JSON.stringify(res1, null, 2)}`)
expect(res1).toEqual(dumyData);
// check 2
const res2 = await SqsHelper.SqsGetMessagesTest();
console.log(`res2: ${JSON.stringify(res2, null, 2)}`);
expect(res2).toBe(dumyData);
});
});
The problem is that on the first check( which i call the function directly from the test file) i can see that the receiveMessage has been mocked and the results is as expected.
But on the second check(which the function called from the second module "sqsHelper.js") looks that the mock function doe's work and the originalreceiveMessage has been called and it still ask me about credentials.
This is the error:
InvalidClientTokenId: The security token included in the request is
invalid.
what I'm doing wrong?
Thanks
| [
"The receiveMessage should trigger a callback that comes in the params. receiveMessage does not return a Promise\nTry something like this:\nconst dummyData = { Messages: [{ name: \"123\", lastName: \"456\" }] };\n\nconst mockReceiveMessage = jest.fn().mockImplementation((params, callback) => callback(\"\", dummyData));\n\njest.mock(\"aws-sdk\", () => {\n const originalModule = jest.requireActual(\"aws-sdk\");\n\n return {\n ...originalModule,\n SQS: function() { // needs to be function as it will be used as constructor\n return {\n receiveMessage: mockReceiveMessage\n }\n }\n };\n})\n\ndescribe(\"Test SQS helper\", () => {\n test(\"Recieve message\", async () => {\n const res = await SqsHelper.SqsGetMessagesTest();\n expect(res).toBe(dummyData.Messages);\n });\n\n test(\"Error response\", async () => {\n mockReceiveMessage.mockImplementation((params, callback) => callback(\"some error\"));\n await expect(SqsHelper.SqsGetMessagesTest()).rejects.toEqual(\"some error\");\n });\n});\n\n\n"
] | [
1
] | [] | [] | [
"aws_sdk",
"jestjs",
"mocking",
"node.js",
"unit_testing"
] | stackoverflow_0074644471_aws_sdk_jestjs_mocking_node.js_unit_testing.txt |
Q:
I have a redirect problem with a 403 error
I want to redirect 403 errors to a specific path. I saw we can use .htaccess, but for me it doesn't work... I use a subdomain, so maybe there is something more to do.
I wrote this in .htaccess:
RewriteEngine On
ErrorDocument 403 https://enzo.quelenis.com/errors/403
Any idea?
tk
A:
Mind the double H in your .htaaccess
ErrorDocument 403 hhttps://enzo.quelenis.com/errors/403
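One detail that may matter here (editor's note, based on Apache's documented ErrorDocument behaviour): pointing ErrorDocument at a full URL makes Apache send an external redirect, so the client receives a 302 instead of the 403. Using a local path keeps the original status code, assuming /errors/403 resolves on the same (sub)domain:
ErrorDocument 403 /errors/403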
| I have a redirect problem with a 403 error | I want to redirect errors 403 to a specific path, I saw we can use .htaccess but about me, it doesn't work... I use a subdomain so maybe there is something to do more.
I wrote this in .htaccess:
RewriteEngine On
ErrorDocument 403 https://enzo.quelenis.com/errors/403
Any idea?
tk
| [
"Mind the double H in your .htaaccess\nErrorDocument 403 hhttps://enzo.quelenis.com/errors/403\n"
] | [
0
] | [] | [] | [
".htaccess",
"http_status_code_403"
] | stackoverflow_0074671053_.htaccess_http_status_code_403.txt |
Q:
Finding all subsets of specified size
I've been scratching my head about this for two days now and I cannot come up with a solution. What I'm looking for is a function f(s, n) such that it returns a set containing all subsets of s where the length of each subset is n.
Demo:
s={a, b, c, d}
f(s, 4)
{{a, b, c, d}}
f(s, 3)
{{a, b, c}, {a, b, d}, {a, c, d}, {b, c, d}}
f(s, 2)
{{a, b}, {a, c}, {a, d}, {b, c}, {b, d}, {c, d}}
f(s, 1)
{{a}, {b}, {c}, {d}}
I have a feeling that recursion is the way to go here. I've been fiddling with something like
f(S, n):
for s in S:
t = f( S-{s}, n-1 )
...
But this does not seem to do the trick. I did notice that len(f(s,n)) seems to be the binomial coefficient bin(len(s), n). I guess this could be utilized somehow.
Can you help me please?
A:
One way to solve this is by backtracking. Here's a possible algorithm in pseudo code:
def backtrack(input_set, idx, partial_res, res, n):
    if len(partial_res) == n:
        res.append(partial_res[:])
        return
    if idx == len(input_set):
        return

    partial_res.append(input_set[idx])
    backtrack(input_set, idx+1, partial_res, res, n) # path with input_set[idx]
    partial_res.pop()
    backtrack(input_set, idx+1, partial_res, res, n) # path without input_set[idx]
Time complexity of this approach is O(2^len(input_set)) since we make 2 branches at each element of input_set, regardless of whether the path leads to a valid result or not. The space complexity is O(len(input_set) choose n) since this is the number of valid subsets you get, as you correctly pointed out in your question.
Now, there is a way to optimize the above algorithm to reduce the time complexity to O(len(input_set) choose n) by pruning the recursive tree to paths that can lead to valid results only.
If n - len(partial_res) > len(input_set) - idx, we are sure that even if we took every remaining element in input_set[idx:] we would still be at least one short of reaching n. So we can employ this as a base case and return and prune.
Also, if n - len(partial_res) == len(input_set) - idx, this means that we need each and every element in input_set[idx:] to get the required n length result. Thus, we can't skip any elements and so the second branch of our recursive call becomes redundant.
backtrack(input_set, idx+1, partial_res, res, n) # path without input_set[idx]
We can skip this branch with a conditional check.
Implementing these base cases correctly reduces the time complexity of the algorithm to O(len(input_set) choose n), which is a hard limit because that's the number of subsets there are.
A:
Let us call n the size of the array and k the number of elements to be put in a subarray.
Let us consider the first element A[0] of the array A.
If this element is put in the subset, the problem becomes a (n-1, k-1) similar problem.
If not, it becomes a (n-1, k) problem.
This can be simply implemented in a recursive function.
We just have to pay attention to deal with the extreme cases k == 0 or k > n.
During the process, we also have to keep trace of:
n: the number of remaining elements of A to consider
k: the number of elements that remain to be put in the current subset
index: the index of the next element of A to consider
The current_subset array that memorizes the elements already selected.
Here is a simple code in c++ to illustrate the algorithm
Output
For 5 elements and subsets of size 3:
3 4 5
2 4 5
2 3 5
2 3 4
1 4 5
1 3 5
1 3 4
1 2 5
1 2 4
1 2 3
#include <iostream>
#include <vector>
void print (const std::vector<std::vector<int>>& subsets) {
for (auto &v: subsets) {
for (auto &x: v) {
std::cout << x << " ";
}
std::cout << "\n";
}
}
// n: number of remaining elements of A to consider
// k: number of elements that remain to be put in the current subset
// index: index of next element of A to consider
void Get_subset_rec (std::vector<std::vector<int>>& subsets, int n, int k, int index, std::vector<int>& A, std::vector<int>& current_subset) {
if (n < k) return;
if (k == 0) {
subsets.push_back (current_subset);
return;
}
Get_subset_rec (subsets, n-1, k, index+1, A, current_subset);
current_subset.push_back(A[index]);
Get_subset_rec (subsets, n-1, k-1, index+1, A, current_subset);
current_subset.pop_back(); // remove last element
return;
}
void Get_subset (std::vector<std::vector<int>>& subsets, int subset_length, std::vector<int>& A) {
std::vector<int> current_subset;
Get_subset_rec (subsets, A.size(), subset_length, 0, A, current_subset);
}
int main () {
int subset_length = 3; // subset size
std::vector A = {1, 2, 3, 4, 5};
int size = A.size();
std::vector<std::vector<int>> subsets;
Get_subset (subsets, subset_length, A);
std::cout << subsets.size() << "\n";
print (subsets);
}
Live demo
A:
subseqs 0 _ = [[]]
subseqs k [] = []
subseqs k (x:xs) = map (x:) (subseqs (k-1) xs) ++ subseqs k xs
Live demo
The function looks for subsequences of (non-negative) length k in a given sequence. There are three cases:
If the length is 0: there is a single empty subsequence in any sequence.
Otherwise, if the sequence is empty: there are no subsequences of any (positive) length k.
Otherwise, there is a non-empty sequence that starts with x and continues with xs, and a positive length k. All our subsequences are of two kinds: those that contain x (they are subsequences of xs of length k-1, with x stuck at the front of each one), and those that do not contain x (they are just subsequences of xs of length k).
The algorithm is a more or less literal translation of these notes to Haskell. Notation cheat sheet:
[] an empty list
[w] a list with a single element w
x:xs a list with a head of x and a tail of xs
(x:) a function that sticks an x in front of any list
++ list concatenation
f a b c a function f applied to arguments a b and c
A:
Here is a non-recursive python function that takes a list superset and returns a generator that produces all subsets of size k.
def subsets_k(superset, k):
    if k > len(superset):
        return
    if k == 0:
        yield []
        return

    indices = list(range(k))
    while True:
        yield [superset[i] for i in indices]

        i = k - 1
        while indices[i] == len(superset) - k + i:
            i -= 1
            if i == -1:
                return

        indices[i] += 1
        for j in range(i + 1, k):
            indices[j] = indices[i] + j - i
Testing it:
for s in subsets_k(['a', 'b', 'c', 'd', 'e'], 3):
    print(s)
Output:
['a', 'b', 'c']
['a', 'b', 'd']
['a', 'b', 'e']
['a', 'c', 'd']
['a', 'c', 'e']
['a', 'd', 'e']
['b', 'c', 'd']
['b', 'c', 'e']
['b', 'd', 'e']
['c', 'd', 'e']
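As a side note (editor's addition): Python's standard library already provides exactly this via itertools.combinations, which yields the size-k subsets in the same lexicographic order:
from itertools import combinations

print(list(combinations(['a', 'b', 'c', 'd'], 2)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]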
| Finding all subsets of specified size | I've been scratching my head about this for two days now and I cannot come up with a solution. What I'm looking for is a function f(s, n) such that it returns a set containing all subsets of s where the length of each subset is n.
Demo:
s={a, b, c, d}
f(s, 4)
{{a, b, c, d}}
f(s, 3)
{{a, b, c}, {a, b, d}, {a, c, d}, {b, c, d}}
f(s, 2)
{{a, b}, {a, c}, {a, d}, {b, c}, {b, d}, {c, d}}
f(s, 1)
{{a}, {b}, {c}, {d}}
I have a feeling that recursion is the way to go here. I've been fiddling with something like
f(S, n):
for s in S:
t = f( S-{s}, n-1 )
...
But this does not seem to do the trick. I did notice that len(f(s,n)) seems to be the binomial coefficient bin(len(s), n). I guess this could be utilized somehow.
Can you help me please?
| [
"One way to solve this is by backtracking. Here's a possible algorithm in pseudo code:\ndef backtrack(input_set, idx, partial_res, res, n):\n if len(partial_res == n):\n res.append(partial_res[:])\n return\n \n for i in range(idx, len(input_set)):\n partial_res.append(input_set[i])\n backtrack(input_set, idx+1, partial_res, res, n) # path with input_set[i]\n partial_res.pop()\n backtrack(input_set, idx+1, partial_res, res, n) # path without input_set[i]\n\nTime complexity of this approach is O(2^len(input_set)) since we make 2 branches at each element of input_set, regardless of whether the path leads to a valid result or not. The space complexity is O(len(input_set) choose n) since this is the number of valid subsets you get, as you correctly pointed out in your question.\nNow, there is a way to optimize the above algorithm to reduce the time complexity to O(len(input_set) choose n) by pruning the recursive tree to paths that can lead to valid results only.\nIf n - len(partial_res) < len(input_set) - idx + 1, we are sure that even if we took every remaining element in input_set[idx:] we are still short at least one to reach n. So we can employ this as a base case and return and prune.\nAlso, if n - len(partial_res) == len(input_set) - idx + 1, this means that we need each and every element in input_set[idx:] to get the required n length result. Thus, we can't skip any elements and so the second branch of our recursive call becomes redundant.\nbacktrack(input_set, idx+1, partial_res, res, n) # path without input_set[i]\n\nWe can skip this branch with a conditional check.\nImplementing these base cases correctly, reduces the time complexity of the algorithm to O(len(input_set) choose k), which is a hard limit because that's the number of subsets that there are.\n",
"Let us call n the size of the array and k the number of elements to be out in a subarray.\nLet us consider the first element A[0] of the array A.\nIf this element is put in the subset, the problem becomes a (n-1, k-1) similar problem.\nIf not, it becomes a (n-1, k) problem.\nThis can be simply implemented in a recursive function.\nWe just have to pay attention to deal with the extreme cases k == 0 or k > n.\nDuring the process, we also have to keep trace of:\n\nn: the number of remaining elements of A to consider\n\nk: the number of elements that remain to be put in the current subset\n\nindex: the index of the next element of A to consider\n\nThe current_subset array that memorizes the elements already selected.\nHere is a simple code in c++ to illustrate the algorithm\n\n\nOutput\nFor 5 elements and subsets of size 3:\n3 4 5\n2 4 5\n2 3 5\n2 3 4\n1 4 5\n1 3 5\n1 3 4\n1 2 5\n1 2 4\n1 2 3\n\n#include <iostream>\n#include <vector>\n\nvoid print (const std::vector<std::vector<int>>& subsets) {\n for (auto &v: subsets) {\n for (auto &x: v) {\n std::cout << x << \" \";\n }\n std::cout << \"\\n\";\n }\n}\n// n: number of remaining elements of A to consider\n// k: number of elements that remain to be put in the current subset\n// index: index of next element of A to consider\n\nvoid Get_subset_rec (std::vector<std::vector<int>>& subsets, int n, int k, int index, std::vector<int>& A, std::vector<int>& current_subset) {\n if (n < k) return; \n if (k == 0) {\n subsets.push_back (current_subset);\n return;\n } \n Get_subset_rec (subsets, n-1, k, index+1, A, current_subset);\n current_subset.push_back(A[index]);\n Get_subset_rec (subsets, n-1, k-1, index+1, A, current_subset);\n current_subset.pop_back(); // remove last element\n return;\n}\n\nvoid Get_subset (std::vector<std::vector<int>>& subsets, int subset_length, std::vector<int>& A) {\n std::vector<int> current_subset;\n Get_subset_rec (subsets, A.size(), subset_length, 0, A, current_subset);\n}\n\nint main () {\n int subset_length = 3; // subset size\n std::vector A = {1, 2, 3, 4, 5};\n int size = A.size();\n std::vector<std::vector<int>> subsets;\n\n Get_subset (subsets, subset_length, A);\n std::cout << subsets.size() << \"\\n\";\n print (subsets);\n}\n\nLive demo\n",
"subseqs 0 _ = [[]]\nsubseqs k [] = []\nsubseqs k (x:xs) = map (x:) (subseqs (k-1) xs) ++ subseqs k xs\n\nLive demo\nThe function looks for subsequences of (non-negative) length k in a given sequence. There are three cases:\n\nIf the length is 0: there is a single empty subsequence in any sequence.\nOtherwise, if the sequence is empty: there are no subsequences of any (positive) length k.\nOtherwise, there is a non-empty sequence that starts with x and continues with xs, and a positive length k. All our subsequences are of two kinds: those that contain x (they are subsequences of xs of length k-1, with x stuck at the front of each one), and those that do not contain x (they are just subsequences of xs of length k).\n\nThe algorithm is a more or less literal translation of these notes to Haskell. Notation cheat sheet:\n\n[] an empty list\n[w] a list with a single element w\nx:xs a list with a head of x and a tail of xs\n(x:) a function that sticks an x in front of any list\n++ list concatenation\nf a b c a function f applied to arguments a b and c\n\n",
"Here is a non-recursive python function that takes a list superset and returns a generator that produces all subsets of size k.\ndef subsets_k(superset, k):\n if k > len(superset):\n return\n if k == 0:\n yield []\n return\n\n indices = list(range(k))\n while True:\n yield [superset[i] for i in indices]\n\n i = k - 1\n while indices[i] == len(superset) - k + i:\n i -= 1\n if i == -1:\n return\n \n indices[i] += 1\n for j in range(i + 1, k):\n indices[j] = indices[i] + j - i\n\nTesting it:\nfor s in subsets_k(['a', 'b', 'c', 'd', 'e'], 3):\n print(s)\n\nOutput:\n['a', 'b', 'c']\n['a', 'b', 'd']\n['a', 'b', 'e']\n['a', 'c', 'd']\n['a', 'c', 'e']\n['a', 'd', 'e']\n['b', 'c', 'd']\n['b', 'c', 'e']\n['b', 'd', 'e']\n['c', 'd', 'e']\n\n"
] | [
1,
1,
1,
0
] | [] | [] | [
"algorithm",
"set",
"subset"
] | stackoverflow_0070938729_algorithm_set_subset.txt |
Q:
Turbo stream in Rails 7 does not render the same pages for error action of create
My controller is as follows:
def create
@message = @inbox.messages.new(message_params)
respond_to do |format|
if @message.save
format.turbo_stream do
render turbo_stream: [
turbo_stream.update('new_message',
partial: 'inboxes/messages/form',
locals: { message: Message.new })
]
end
format.html { redirect_to @inbox, notice: "Message was successfully created." }
else
format.turbo_stream do
render turbo_stream: turbo_stream.update('new_message', partial: 'inboxes/messages/form', locals: { message: @message })
end
format.html { render :new, status: :unprocessable_entity }
end
end
end
The create action redirects to @inbox without issue, but when I try to render the error (the else branch) it redirects to inboxes/messages/
I also don't know why, but I get ActionController::UnknownFormat with the following code, only for the else part:
def create
@message = @inbox.messages.new(message_params)
respond_to do |format|
if @message.save
format.turbo_stream do
render turbo_stream: [
turbo_stream.update('new_message',
partial: 'inboxes/messages/form',
locals: { message: Message.new })
]
end
format.html { redirect_to @inbox, notice: 'Message was successfully created.' }
else
format.turbo_stream do
render turbo_stream: [
turbo_stream.update('new_message',
partial: 'inboxes/messages/form',
locals: { message: @message })
]
format.html { render :new, status: :unprocessable_entity }
end
end
end
end
A:
Your format.turbo_stream block in the else branch wraps the format.html call as well; format.html should sit outside of it (after the next end) so that respond_to registers both responders for the else branch. Because the html responder is never registered there, an HTML request that reaches the else branch raises ActionController::UnknownFormat.
| Turbo stream in Rails 7 does not render the same pages for error action of create | My controller is as follows:
def create
@message = @inbox.messages.new(message_params)
respond_to do |format|
if @message.save
format.turbo_stream do
render turbo_stream: [
turbo_stream.update('new_message',
partial: 'inboxes/messages/form',
locals: { message: Message.new })
]
end
format.html { redirect_to @inbox, notice: "Message was successfully created." }
else
format.turbo_stream do
render turbo_stream: turbo_stream.update('new_message', partial: 'inboxes/messages/form', locals: { message: @message })
end
format.html { render :new, status: :unprocessable_entity }
end
end
end
The create action redirects to @inbox without issue, but when I try to render the error (the else branch) it redirects to inboxes/messages/
I also don't know why, but I get ActionController::UnknownFormat with the following code, only for the else part:
def create
@message = @inbox.messages.new(message_params)
respond_to do |format|
if @message.save
format.turbo_stream do
render turbo_stream: [
turbo_stream.update('new_message',
partial: 'inboxes/messages/form',
locals: { message: Message.new })
]
end
format.html { redirect_to @inbox, notice: 'Message was successfully created.' }
else
format.turbo_stream do
render turbo_stream: [
turbo_stream.update('new_message',
partial: 'inboxes/messages/form',
locals: { message: @message })
]
format.html { render :new, status: :unprocessable_entity }
end
end
end
end
| [
"Your format.turbo_stream in the else-block encapsulates the format.html as well, it should be outside of it (so after the next end)\n"
] | [
0
] | [] | [] | [
"ruby_on_rails"
] | stackoverflow_0071538264_ruby_on_rails.txt |
Q:
Jetpack Compose Motion Layout Header usage
I found Header in https://github.com/androidx/constraintlayout/wiki/Compose-MotionLayout-JSON-Syntax but could not find its usage. Can anyone point me to a good resource or can help to understand the usage of Header in jetpack compose motion layout? Thanks in advance.
A:
It is for future meta-commands like debugging.
Currently it is used with the Link feature.
But in the future it will support things like optimization flags.
| Jetpack Compose Motion Layout Header usage | I found Header in https://github.com/androidx/constraintlayout/wiki/Compose-MotionLayout-JSON-Syntax but could not find its usage. Can anyone point me to a good resource or can help to understand the usage of Header in jetpack compose motion layout? Thanks in advance.
| [
"It is for future meta-commands like debugging.\nCurrently it is used with the Link feature.\nBut in the future it will support things like optimization flags.\n"
] | [
0
] | [] | [] | [
"android",
"android_jetpack_compose",
"android_motionlayout"
] | stackoverflow_0074664920_android_android_jetpack_compose_android_motionlayout.txt |
Q:
Pick closing value of last Thursday of month
I need to pick closing value of last Thursday of month and then apply standard deviation to it. How can I do so? If Thursday is trading holiday then it should be Wednesday but not Friday.
I saw one code here - Pine Script / Trading View - Calculating Trading Day of Month (TDOM) but I do not know how to change it to what I want.
A:
Pine script currently doesn't have a trading days calendar, so it's impossible (AFAIK) to check if it's the last trading day.
We can check whether we are in the last week of the month and, in addition, whether it is a Thursday. It won't work in all cases (for example, when there is no trading day on the Thursday of the last week), but it will work in most cases.
//@version=5
indicator("My Script", overlay = true)
f_is_leap_year() =>
if ((year % 4) != 0)
false
else if ((year % 100) != 0)
true
else if ((year % 400) == 0)
true
else
false
f_get_last_day() =>
if (month == 1) or (month == 3) or (month == 5) or (month == 7) or (month == 8) or (month == 10) or (month == 12)
31
else if (month == 4) or (month == 6) or (month == 9) or (month == 11)
30
else
f_is_leap_year() ? 29 : 28 // February
last_thursday = dayofmonth > f_get_last_day() - 7 and dayofweek == dayofweek.thursday
plotshape(last_thursday)
| Pick closing value of last Thursday of month | I need to pick closing value of last Thursday of month and then apply standard deviation to it. How can I do so? If Thursday is trading holiday then it should be Wednesday but not Friday.
I saw one code here - Pine Script / Trading View - Calculating Trading Day of Month (TDOM) but I do not know how to change it to what I want.
| [
"Pine script currently doesn't have a trading days calendar, so it's impossible (AFAIK) to check if it's the last trading day.\nWe can check if we are on the last week of the month and in addition check if it's Thursday. It won't work on all cases (for example in case where on the last week there is no trading day on a Thursday) but it will work in most cases.\n//@version=5\nindicator(\"My Script\", overlay = true)\n\nf_is_leap_year() =>\n if ((year % 4) != 0)\n false\n else if ((year % 100) != 0)\n true\n else if ((year % 400) == 0)\n true\n else\n false\n\nf_get_last_day() =>\n if (month == 1) or (month == 3) or (month == 5) or (month == 7) or (month == 8) or (month == 10) or (month == 12)\n 31\n else if (month == 4) or (month == 6) or (month == 9) or (month == 11)\n 30\n else\n f_is_leap_year() ? 29 : 28 // February\n\nlast_thursday = dayofmonth > f_get_last_day() - 7 and dayofweek == dayofweek.thursday\n\nplotshape(last_thursday)\n\n"
] | [
1
] | [] | [] | [
"pine_script",
"pine_script_v4",
"pinescript_v5"
] | stackoverflow_0074659666_pine_script_pine_script_v4_pinescript_v5.txt |
Q:
Extraction multiple data points from a long sentence/paragraph
I was looking for an approach or any useful libraries to extract multiple data points that corresponds to different years, from a single paragraph.
For eg.
The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600.
That's about a 50% increase in size.
In the above example, i need to extract,
1. sales year 2019 --> 400
2. sales year 2020 --> 600
Assumptions
You can assume the entity is already known. [sales in the above example]
Can anyone please suggest? Thanks in advance
Approach. Pre existing libraries etc.
A:
One approach you could take is to use regular expressions to search for patterns in the text that match the information you're looking for. For example, in the sentence "The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600.", you could use the following regular expression to match the sales data for each year: \d{4} is \d+. This regular expression will match any four-digit number followed by " is " and then one or more digits.
Once you have matched the relevant data points, you can use a library like Python's re module to extract the information you need. For example, in Python you could do something like this:
import re
text = "The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600."
# Use the regular expression to find all matches in the text
matches = re.findall(r"\d{4} is \d+", text)
# Loop through the matches and extract the year and sales data
for match in matches:
year, sales = match.split(" is ")
print(f"Year: {year}, Sales: {sales}")
This code would output the following:
Year: 2019, Sales: 400
Year: 2020, Sales: 600
Another option is to use a natural language processing (NLP) library like spaCy or NLTK to extract the information you need. These libraries can help you identify and extract specific entities, such as dates and numbers, from a piece of text.
For example, using spaCy you could do something like this:
import spacy
# Load the English model
nlp = spacy.load("en_core_web_sm")
# Parse the text
text = "The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600."
doc = nlp(text)
# Loop through the entities in the document
for ent in doc.ents:
# If the entity is a date and a number, print the year and the sales data
if ent.label_ == "DATE" and ent.label_ == "CARDINAL":
print(f"Year: {ent.text}, Sales: {ent.text}")
This code would output the same results as the previous example.
Overall, there are many approaches you can take to extract multiple data points from a single paragraph. The approach you choose will depend on the specific requirements of your task and the data you are working with.
| Extraction multiple data points from a long sentence/paragraph | I was looking for an approach or any useful libraries to extract multiple data points that corresponds to different years, from a single paragraph.
For eg.
The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600.
That's about a 50% increase in size.
In the above example, i need to extract,
1. sales year 2019 --> 400
2. sales year 2020 --> 600
Assumptions
You can assume the entity is already known. [sales in the above example]
Can anyone please suggest? Thanks in advance
Approach. Pre existing libraries etc.
| [
"One approach you could take is to use regular expressions to search for patterns in the text that match the information you're looking for. For example, in the sentence \"The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600.\", you could use the following regular expression to match the sales data for each year: \\d{4} is \\d+. This regular expression will match any four-digit number followed by \" is \" and then one or more digits.\nOnce you have matched the relevant data points, you can use a library like Python's re module to extract the information you need. For example, in Python you could do something like this:\nimport re\n\ntext = \"The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600.\"\n\n# Use the regular expression to find all matches in the text\nmatches = re.findall(r\"\\d{4} is \\d+\", text)\n\n# Loop through the matches and extract the year and sales data\nfor match in matches:\n year, sales = match.split(\" is \")\n print(f\"Year: {year}, Sales: {sales}\")\n\nThis code would output the following:\nYear: 2019, Sales: 400\nYear: 2020, Sales: 600\n\nAnother option is to use a natural language processing (NLP) library like spaCy or NLTK to extract the information you need. These libraries can help you identify and extract specific entities, such as dates and numbers, from a piece of text.\nFor example, using spaCy you could do something like this:\nimport spacy\n\n# Load the English model\nnlp = spacy.load(\"en_core_web_sm\")\n\n# Parse the text\ntext = \"The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600.\"\ndoc = nlp(text)\n\n# Loop through the entities in the document\nfor ent in doc.ents:\n # If the entity is a date and a number, print the year and the sales data\n if ent.label_ == \"DATE\" and ent.label_ == \"CARDINAL\":\n print(f\"Year: {ent.text}, Sales: {ent.text}\")\n\nThis code would output the same results as the previous example.\nOverall, there are many approaches you can take to extract multiple data points from a single paragraph. The approach you choose will depend on the specific requirements of your task and the data you are working with.\n"
] | [
1
] | [] | [] | [
"nlp",
"python"
] | stackoverflow_0074671055_nlp_python.txt |
Q:
Mongo Dart find documents if a property array doesn't contain an object
Here is a example where I'm creating a pipeline to a mongoDB where I'm quering for a documents where 'recieved_by' array contains a provided user.id.
`
final pipeline = AggregationPipelineBuilder()
.addStage(Match(where.within('fullDocument.received_by', user.id).map['\$query']));
`
How to do opposite thing. I need documents where 'received_by' array DOESN'T CONTAIN provided value (user.id).
Thank you
A:
To do the opposite of what you're currently doing, you can use the $nin operator instead of $in. For example:
final pipeline = AggregationPipelineBuilder()
.addStage(Match(where.notIn('fullDocument.received_by', user.id).map['\$query']));
This will match documents where the received_by array does not contain the provided user.id value.
| Mongo Dart find documents if a property array doesn't contain an object | Here is a example where I'm creating a pipeline to a mongoDB where I'm quering for a documents where 'recieved_by' array contains a provided user.id.
`
final pipeline = AggregationPipelineBuilder()
.addStage(Match(where.within('fullDocument.received_by', user.id).map['\$query']));
`
How to do opposite thing. I need documents where 'received_by' array DOESN'T CONTAIN provided value (user.id).
Thank you
| [
"o do the opposite of what you're currently doing, you can use the $nin operator instead of $in. For example:\nfinal pipeline = AggregationPipelineBuilder()\n .addStage(Match(where.notIn('fullDocument.received_by', user.id).map['\\$query']));\n\nThis will match documents where the received_by array does not contain the provided user.id value.\n"
] | [
0
] | [] | [] | [
"dart",
"flutter",
"mongo_dart"
] | stackoverflow_0074670995_dart_flutter_mongo_dart.txt |
Q:
Runtime complexity of Char.IsLetter() function
I would like to know the Big O notation of the Char.IsLetter() function.
private bool Helper(char c)
{
return char.IsLetter(c);
}
I believe that it has runtime complexity O(1). Is this correct? How can I verify this property for built-in functions?
A:
The runtime complexity of the char.IsLetter() method in C# is O(1). This means that the time it takes for the method to execute is constant and does not depend on the input size.
In the code you provided, the Helper() method simply calls the char.IsLetter() method and returns the result. Since the char.IsLetter() method has a runtime complexity of O(1), the Helper() method also has a runtime complexity of O(1).
It is worth mentioning that the runtime complexity of a method is not always a good indication of its performance. For example, a method with a runtime complexity of O(1) might still be slower than a method with a higher runtime complexity if the constant factor in the O(1) complexity is very large. In general, it is best to measure the performance of a method using actual experiments rather than relying solely on theoretical estimates.
| Runtime complexity of Char.IsLetter() function | I would like to know the Big O notation of the Char.IsLetter() function.
private bool Helper(char c)
{
return char.IsLetter(c);
}
I believe that it has runtime complexity O(1). Is this correct? How can I verify this property for built-in functions?
| [
"The runtime complexity of the char.IsLetter() method in C# is O(1). This means that the time it takes for the method to execute is constant and does not depend on the input size.\nIn the code you provided, the Helper() method simply calls the char.IsLetter() method and returns the result. Since the char.IsLetter() method has a runtime complexity of O(1), the Helper() method also has a runtime complexity of O(1).\nIt is worth mentioning that the runtime complexity of a method is not always a good indication of its performance. For example, a method with a runtime complexity of O(1) might still be slower than a method with a higher runtime complexity if the constant factor in the O(1) complexity is very large. In general, it is best to measure the performance of a method using actual experiments rather than relying solely on theoretical estimates.\n"
] | [
1
] | [] | [] | [
".net",
"big_o",
"c#",
"runtime",
"time_complexity"
] | stackoverflow_0074670702_.net_big_o_c#_runtime_time_complexity.txt |
Q:
Use PowerShell session variables as default values for parameters
Can values that are stored in a PowerShell session variable be used to populate a parameter's default value?
In this example, the session variables are populate the first time the script is run, but aren't used in subsequent executions:
function Get-Authetication
{
[cmdletbinding()]
param(
[parameter(Mandatory=$true)]
[string]$Server = { if ($PSCmdlet.SessionState.PSVariable.Get('Server').Value) { $PSCmdlet.SessionState.PSVariable.Get('Server').Value } },
[parameter(Mandatory=$true)]
[pscredential]$Credential = (Get-Credential)
)
# store
$PSCmdlet.SessionState.PSVariable.Set('Server',$Server)
$PSCmdlet.SessionState.PSVariable.Set('Credential',$Credential)
# return for testing
[PsCustomObject]@{
Server=$PSCmdlet.SessionState.PSVariable.Get('Server').Value;
Username=($PSCmdlet.SessionState.PSVariable.Get('Credential').Value).Username
}
}
Get-Authetication
A:
This should be done by the caller using $PSDefaultParameterValues.
Or, do it in the function instead of using a default value:
function Get-Authetication
{
[cmdletbinding()]
param(
[parameter()]
[string]$Server,
[parameter()]
[pscredential]$Credential = (Get-Credential)
)
if (!$PSCmdlet.SessionState.PSVariable.Get('Server').Value -and !$Server) {
$Server = Read-Host 'Enter server'
# Alternatively
# throw [System.ArgumentException]'You must supply a Server value'
}
if ($Server) {
$PSCmdlet.SessionState.PSVariable.Set('Server',$Server)
}
$myServer = $PSCmdlet.SessionState.PSVariable.Get('Server').Value
# return for testing
[PsCustomObject]@{
Server=$myServer;
}
}
(abbreviated example)
A:
This looks like the question you're asking is:
I am always going to run this script/function as the same person. But after I run it the first time, I want it to remember that one credential.
If that is in fact your question, then it's not clear that this approach won't do that. You're trying to persist a value to use later, and it happens to be a credential. There are several ways to accomplish this, and this is a great article that includes all your options:
https://purple.telstra.com.au/blog/using-saved-credentials-securely-in-powershell-scripts#:~:text=From%20that%20perspective%20your%20process%20to%20have%20a,and%20store%20that%20in%20a%20file%20More%20items
Basically, you're going to need to save an encrypted value in a file the first time, then access that file in subsequent runs.
HOWEVER, if you are intending for multiple people to be able to do this, or YOU want to run this on more than one server, you'll have many additional things to contend with, like using a network share to store the file, or keeping track of different users.
Good luck researching.
| Use PowerShell session variables as default values for parameters | Can values that are stored in a PowerShell session variable be used to populate a parameter's default value?
In this example, the session variables are populated the first time the script is run, but aren't used in subsequent executions:
function Get-Authetication
{
[cmdletbinding()]
param(
[parameter(Mandatory=$true)]
[string]$Server = { if ($PSCmdlet.SessionState.PSVariable.Get('Server').Value) { $PSCmdlet.SessionState.PSVariable.Get('Server').Value } },
[parameter(Mandatory=$true)]
[pscredential]$Credential = (Get-Credential)
)
# store
$PSCmdlet.SessionState.PSVariable.Set('Server',$Server)
$PSCmdlet.SessionState.PSVariable.Set('Credential',$Credential)
# return for testing
[PsCustomObject]@{
Server=$PSCmdlet.SessionState.PSVariable.Get('Server').Value;
Username=($PSCmdlet.SessionState.PSVariable.Get('Credential').Value).Username
}
}
Get-Authetication
| [
"This should be done by the caller using $PSDefaultParameterValues.\nOr, do it in the function instead of using a default value:\nfunction Get-Authetication\n{\n\n [cmdletbinding()]\n param(\n [parameter()]\n [string]$Server,\n\n [parameter()]\n [pscredential]$Credential = (Get-Credential)\n\n )\n if (!$PSCmdlet.SessionState.PSVariable.Get('Server').Value -and !$Server) {\n $Server = Read-Host 'Enter server'\n # Alternatively\n # throw [System.ArgumentException]'You must supply a Server value'\n }\n if ($Server) {\n $PSCmdlet.SessionState.PSVariable.Set('Server',$Server)\n }\n $myServer = $PSCmdlet.SessionState.PSVariable.Get('Server').Value\n\n # return for testing\n [PsCustomObject]@{\n Server=$myServer;\n }\n}\n\n(abbreviated example)\n",
"This looks like the question you're asking is:\nI am always going to run this script/function as the same person. But after I run it the first time, I want it to remember that one credential.\nIf that is in fact your question, then it's not clear that this approach won't do that. You're trying to persist a value to use later, and it happens to be a credential. There are several ways to accomplish this, and this is a great article that includes all your options:\nhttps://purple.telstra.com.au/blog/using-saved-credentials-securely-in-powershell-scripts#:~:text=From%20that%20perspective%20your%20process%20to%20have%20a,and%20store%20that%20in%20a%20file%20More%20items\nBasically, you're going to need to save an encrypted value in a file the first time, then access that file in subsequent runs.\nHOWEVER, if you are intending for multiple people to be able to do this, or YOU want to run this on more than one server, you'll have many additional things to contend with, like using a network share to store the file, or keeping track of different users.\nGood luck researching.\n"
] | [
2,
0
] | [] | [] | [
"powershell",
"powershell_4.0"
] | stackoverflow_0039875081_powershell_powershell_4.0.txt |
Q:
Materialize the Value for a Type that has One Inhabitant
Thanks to @MilesSabin's answer I can write a type level Fibonacci sequence:
sealed trait Digit
case object Zero extends Digit
case object One extends Digit
sealed trait Dense { type N <: Dense }
sealed trait DNil extends Dense { type N = DNil }
case object DNil extends DNil
final case class ::[+H <: Digit, +T <: Dense](digit: H, tail: T) extends Dense {
type N = digit.type :: tail.N
}
/* The `A`th Fibonacci number is `B` */
trait Fib[A <: Dense, B <: Dense]
object Fib {
implicit val f0 = new Fib[_0, _0] {}
implicit val f1 = new Fib[_1, _1] {}
implicit def f2[A <: Dense, P <: Dense, P2 <: Dense, F <: Dense, F2 <: Dense]
(implicit p: Pred.Aux[A, P],
p2: Pred.Aux[P, P2],
f: Fib[P, F],
f2: Fib[P2, F2],
sum: Sum[F, F2]): Fib[A, sum.Out] = new Fib[A, sum.Out] {}
}
implicitly[Fib[_7, _13]]
What I'd really like to be able to do is get a Witness for Dense and use it like:
def apply[Out <: Dense](n: Dense)(implicit f:Fib[n.N, Out], w:Witness.Aux[Out]): Out
= w.value
Scala tells me that it can't summon a Witness instance. I'm guessing this is because my type-level encoding of natural numbers is a linked list of bits and that's not a singleton type. I can't understand why Witness won't work since there is only a single inhabitant for a class like _7.
What I'm trying to do is materialize a value for a type that only has one possible value. That way I can get an Out directly from apply.
I think a possible solution might leverage implicit macros.
Any and all ideas are welcome. I encourage you to leave a note if the question isn't clear.
A:
The value of a type with exactly one inhabitant can be materialized with the ValueOf type class from the Scala 2.13+ standard library (scala.ValueOf); its value member returns the unique inhabitant. On earlier Scala versions, shapeless's Witness.Aux plays the same role.
Here is an example:
// Define a type with a single inhabitant
sealed trait Digit
case object Zero extends Digit

// Materialize the unique value of a singleton type such as Zero.type
def materialize[Out <: Digit](implicit v: ValueOf[Out]): Out = v.value

// The compiler derives ValueOf[Zero.type] automatically
val zero: Zero.type = materialize[Zero.type]
Note that ValueOf (and likewise Witness) is only derived for singleton types: stable object types such as Zero.type and literal types. A composite type like your _7, a chain of :: cells over digit singletons, is not itself a singleton type even though it has only one inhabitant, which is why no Witness instance can be summoned for it. To materialize such a value you would need your own type class with instances that build the value inductively from the DNil and :: cases.
| Materialize the Value for a Type that has One Inhabitant | Thanks to @MilesSabin's answer I can write a type level Fibonacci sequence:
sealed trait Digit
case object Zero extends Digit
case object One extends Digit
sealed trait Dense { type N <: Dense }
sealed trait DNil extends Dense { type N = DNil }
case object DNil extends DNil
final case class ::[+H <: Digit, +T <: Dense](digit: H, tail: T) extends Dense {
type N = digit.type :: tail.N
}
/* The `A`th Fibonacci number is `B` */
trait Fib[A <: Dense, B <: Dense]
object Fib {
implicit val f0 = new Fib[_0, _0] {}
implicit val f1 = new Fib[_1, _1] {}
implicit def f2[A <: Dense, P <: Dense, P2 <: Dense, F <: Dense, F2 <: Dense]
(implicit p: Pred.Aux[A, P],
p2: Pred.Aux[P, P2],
f: Fib[P, F],
f2: Fib[P2, F2],
sum: Sum[F, F2]): Fib[A, sum.Out] = new Fib[A, sum.Out] {}
}
implicitly[Fib[_7, _13]]
What I'd really like to be able to do is get a Witness for Dense and use it like:
def apply[Out <: Dense](n: Dense)(implicit f:Fib[n.N, Out], w:Witness.Aux[Out]): Out
= w.value
Scala tells me that it can't summon a Witness instance. I'm guessing this is because my type-level encoding of natural numbers is a linked list of bits and that's not a singleton type. I can't understand why Witness won't work since there is only a single inhabitant for a class like _7.
What I'm trying to do is materialize a value for a type that only has one possible value. That way I can get an Out directly from apply.
I think a possible solution might leverage implicit macros.
Any and all ideas are welcome. I encourage you to leave a note if the question isn't clear.
| [
"To materialize the value for a type that only has one possible value, you can use the valueOf method provided by the shapeless.ValueOf trait. This method takes a type parameter with a single inhabitant and returns the value of that inhabitant.\nHere is an example:\nimport shapeless.ValueOf\n\n// Define a type with a single inhabitant\nsealed trait Digit\ncase object Zero extends Digit\n\n// Define a function that materializes the value of a type with a single inhabitant\ndef apply[Out <: Digit](n: Digit)(implicit valueOf: ValueOf[Out]): Out = valueOf.value\n\n// Use the function to get the value of the single inhabitant of the `Zero` type\nval zero: Zero = apply(Zero)\n\nNote that the ValueOf trait requires the SingleInhabitant type class to be implemented for the type with a single inhabitant. This is provided automatically for any sealed trait with exactly one case object extending it, as in the example above. For other types, you will need to provide an implicit SingleInhabitant instance.\n"
] | [
0
] | [] | [] | [
"macros",
"scala",
"shapeless",
"type_level_computation"
] | stackoverflow_0032170982_macros_scala_shapeless_type_level_computation.txt |
Q:
How to count the occurrence of an item from a list within a dictionary (python)
Below is a dictionary named daily_purchases that maps a person's weekly (Monday to Friday) purchases and contains a list of what they bought.
{'Monday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Tuesday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Wednesday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Thursday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Friday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 
'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}}
Ultimately, I want to create a graph of each purchase for the week for each person. For example, Edith bought X (total) apples this week. This is my code so far:
def itemCounter(lst,i):
#INPUT: item list and items
#OUTPUT: count times someone bought a particular item
count = 0
for ele in lst:
if (ele ==i):
count = count + 1
return count
for day, day_values in daily_purchases.items():
for name, items in day_values.items():
for item in items:
item = itemCounter(i)
items = itemCounter(last)
I am not sure which data structure is appropriate to create the graph and why the counter isn't returning the value I want
A:
What is your graph supposed to look like?
Packing your weekly values in a dataframe would allow you to represent them in any form:
import pandas as pd
daily_purchases = {'Monday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Tuesday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Wednesday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Thursday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Friday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 
'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}}
def get_weekly_purchases(day_pur):
output_dict = {}
for _, day_val in day_pur.items():
for pers, lst in day_val.items():
output_dict[pers] = output_dict.setdefault(pers, {})
for elt in lst:
output_dict[pers][elt] = output_dict[pers].get(elt, 0) + 1
return pd.DataFrame(output_dict).fillna(0)
print(get_weekly_purchases(daily_purchases))
Output:
Edith Carol Hannah Frank Alice Ingrid Bob Gertrude Dave
Apple 10.0 10.0 10.0 5.0 0.0 0.0 0.0 5.0 10.0
Banana 5.0 15.0 5.0 0.0 0.0 10.0 5.0 10.0 0.0
Hamburger Buns 10.0 0.0 10.0 0.0 5.0 0.0 5.0 10.0 5.0
Carrot 5.0 15.0 0.0 10.0 0.0 0.0 5.0 5.0 10.0
Dragon Fruit 0.0 5.0 5.0 5.0 10.0 0.0 10.0 0.0 0.0
Eggs 0.0 5.0 5.0 0.0 15.0 0.0 5.0 15.0 0.0
Ice Pops 0.0 0.0 5.0 5.0 5.0 5.0 0.0 5.0 10.0
Edit: it's not quite clear how you want to represent items (separate bars / stacked). Here is an example for stacked bar (better readable in my opinion):
import matplotlib.pyplot as plt

df = get_weekly_purchases(daily_purchases)
df.T.plot(kind='bar', title="Purchases in a week", stacked=True)
plt.show()
It gives you a stacked bar chart of each person's weekly purchase totals.
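If you would rather avoid the pandas dependency, here is a minimal sketch of the same weekly tally using collections.Counter (it assumes the daily_purchases dictionary shown above):
from collections import Counter

weekly = {}
for day_values in daily_purchases.values():
    for name, items in day_values.items():
        # accumulate this person's purchases across all days of the week
        weekly.setdefault(name, Counter()).update(items)

print(weekly['Edith']['Apple'])   # total apples Edith bought this week -> 10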
| How to count the occurrence of an item from a list within a dictionary (python) | Below is a dictionary named daily_purchases that maps a person's weekly (Monday to Friday) purchases and contains a list of what they bought.
{'Monday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Tuesday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Wednesday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Thursday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Friday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 
'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}}
Ultimately, I want to create a graph of each purchase for the week for each person. For example, Edith bought X (total) apples this week. This is my code so far:
def itemCounter(lst,i):
#INPUT: item list and items
#OUTPUT: count times someone bought a particular item
count = 0
for ele in lst:
if (ele ==i):
count = count + 1
return count
for day, day_values in daily_purchases.items():
for name, items in day_values.items():
for item in items:
item = itemCounter(i)
items = itemCounter(last)
I am not sure which data structure is appropriate to create the graph and why the counter isn't returning the value I want
| [
"How is your graph supposed to look like?\nPacking your weekly values in a dataframe would allow you to represent in any form:\nimport pandas as pd\n\ndaily_purchases = {'Monday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Tuesday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Wednesday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Thursday': {'Edith': ['Apple', 'Banana', 'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}, 'Friday': {'Edith': ['Apple', 'Banana', 
'Hamburger Buns', 'Carrot', 'Hamburger Buns', 'Apple'], 'Carol': ['Carrot', 'Banana', 'Banana', 'Dragon Fruit', 'Apple', 'Apple', 'Carrot', 'Carrot', 'Eggs', 'Banana'], 'Hannah': ['Hamburger Buns', 'Dragon Fruit', 'Eggs', 'Apple', 'Apple', 'Hamburger Buns', 'Banana', 'Ice Pops'], 'Frank': ['Ice Pops', 'Carrot', 'Apple', 'Carrot', 'Dragon Fruit'], 'Alice': ['Eggs', 'Ice Pops', 'Eggs', 'Eggs', 'Hamburger Buns', 'Dragon Fruit', 'Dragon Fruit'], 'Ingrid': ['Banana', 'Banana', 'Ice Pops'], 'Bob': ['Eggs', 'Banana', 'Hamburger Buns', 'Dragon Fruit', 'Carrot', 'Dragon Fruit'], 'Gertrude': ['Hamburger Buns', 'Banana', 'Eggs', 'Ice Pops', 'Hamburger Buns', 'Eggs', 'Apple', 'Carrot', 'Eggs', 'Banana'], 'Dave': ['Apple', 'Ice Pops', 'Carrot', 'Carrot', 'Ice Pops', 'Hamburger Buns', 'Apple']}}\n\ndef get_weekly_purchases(day_pur):\n output_dict = {}\n for _, day_val in day_pur.items():\n for pers, lst in day_val.items():\n output_dict[pers] = output_dict.setdefault(pers, {})\n for elt in lst:\n output_dict[pers][elt] = output_dict[pers].get(elt, 0) + 1\n\n return pd.DataFrame(output_dict).fillna(0)\n\nprint(get_weekly_purchases(daily_purchases))\n\nOutput:\n Edith Carol Hannah Frank Alice Ingrid Bob Gertrude Dave\nApple 10.0 10.0 10.0 5.0 0.0 0.0 0.0 5.0 10.0\nBanana 5.0 15.0 5.0 0.0 0.0 10.0 5.0 10.0 0.0\nHamburger Buns 10.0 0.0 10.0 0.0 5.0 0.0 5.0 10.0 5.0\nCarrot 5.0 15.0 0.0 10.0 0.0 0.0 5.0 5.0 10.0\nDragon Fruit 0.0 5.0 5.0 5.0 10.0 0.0 10.0 0.0 0.0\nEggs 0.0 5.0 5.0 0.0 15.0 0.0 5.0 15.0 0.0\nIce Pops 0.0 0.0 5.0 5.0 5.0 5.0 0.0 5.0 10.0\n\nEdit: it's not quite clear how you want to represent items (separate bars / stacked). Here is an example for stacked bar (better readable in my opinion):\ndf = get_weekly_purchases(daily_purchases)\ndf.T.plot(kind='bar', title=\"Purchases in a week\", stacked=True)\nplt.show()\n\nIt gives you:\n\n"
] | [
1
] | [] | [] | [
"dictionary",
"function",
"list"
] | stackoverflow_0074670551_dictionary_function_list.txt |
Q:
What is the difference between __str__ and __repr__?
What is the difference between __str__ and __repr__ in Python?
A:
Alex summarized well but, surprisingly, was too succinct.
First, let me reiterate the main points in Alex’s post:
The default implementation is useless (it’s hard to think of one which wouldn’t be, but yeah)
__repr__ goal is to be unambiguous
__str__ goal is to be readable
Container’s __str__ uses contained objects’ __repr__
Default implementation is useless
This is mostly a surprise because Python’s defaults tend to be fairly useful. However, in this case, having a default for __repr__ which would act like:
return "%s(%r)" % (self.__class__, self.__dict__)
would have been too dangerous (for example, too easy to get into infinite recursion if objects reference each other). So Python cops out. Note that there is one default which is true: if __repr__ is defined, and __str__ is not, the object will behave as though __str__=__repr__.
This means, in simple terms: almost every object you implement should have a functional __repr__ that’s usable for understanding the object. Implementing __str__ is optional: do that if you need a “pretty print” functionality (for example, used by a report generator).
The goal of __repr__ is to be unambiguous
Let me come right out and say it — I do not believe in debuggers. I don’t really know how to use any debugger, and have never used one seriously. Furthermore, I believe that the big fault in debuggers is their basic nature — most failures I debug happened a long long time ago, in a galaxy far far away. This means that I do believe, with religious fervor, in logging. Logging is the lifeblood of any decent fire-and-forget server system. Python makes it easy to log: with maybe some project specific wrappers, all you need is a
log(INFO, "I am in the weird function and a is", a, "and b is", b, "but I got a null C — using default", default_c)
But you have to do the last step — make sure every object you implement has a useful repr, so code like that can just work. This is why the “eval” thing comes up: if you have enough information so eval(repr(c))==c, that means you know everything there is to know about c. If that’s easy enough, at least in a fuzzy way, do it. If not, make sure you have enough information about c anyway. I usually use an eval-like format: "MyClass(this=%r,that=%r)" % (self.this,self.that). It does not mean that you can actually construct MyClass, or that those are the right constructor arguments — but it is a useful form to express “this is everything you need to know about this instance”.
Note: I used %r above, not %s. You always want to use repr() [or %r formatting character, equivalently] inside __repr__ implementation, or you’re defeating the goal of repr. You want to be able to differentiate MyClass(3) and MyClass("3").
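As a minimal sketch of that point (MyClass and its attributes are just the illustrative names from the paragraph above):
class MyClass:
    def __init__(self, this, that):
        self.this = this
        self.that = that

    def __repr__(self):
        # %r applies repr() to each attribute, so strings keep their quotes
        return "MyClass(this=%r, that=%r)" % (self.this, self.that)

print(repr(MyClass(3, "3")))   # MyClass(this=3, that='3')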
The goal of __str__ is to be readable
Specifically, it is not intended to be unambiguous — notice that str(3)==str("3"). Likewise, if you implement an IP abstraction, having the str of it look like 192.168.1.1 is just fine. When implementing a date/time abstraction, the str can be "2010/4/12 15:35:22", etc. The goal is to represent it in a way that a user, not a programmer, would want to read it. Chop off useless digits, pretend to be some other class — as long is it supports readability, it is an improvement.
Container’s __str__ uses contained objects’ __repr__
This seems surprising, doesn’t it? It is a little, but how readable would it be if it used their __str__?
[moshe is, 3, hello
world, this is a list, oh I don't know, containing just 4 elements]
Not very. Specifically, the strings in a container would find it way too easy to disturb its string representation. In the face of ambiguity, remember, Python resists the temptation to guess. If you want the above behavior when you’re printing a list, just
print("[" + ", ".join(l) + "]")
(you can probably also figure out what to do about dictionaries).
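For instance, a hand-rolled sketch in the same spirit for a dict (the values are arbitrary):
d = {"name": "moshe", "answer": 42}
print("{" + ", ".join("%s: %s" % (k, v) for k, v in d.items()) + "}")
# prints: {name: moshe, answer: 42}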
Summary
Implement __repr__ for any class you implement. This should be second nature. Implement __str__ if you think it would be useful to have a string version which errs on the side of readability.
A:
My rule of thumb: __repr__ is for developers, __str__ is for customers.
A:
Unless you specifically act to ensure otherwise, most classes don't have helpful results for either:
>>> class Sic(object): pass
...
>>> print(str(Sic()))
<__main__.Sic object at 0x8b7d0>
>>> print(repr(Sic()))
<__main__.Sic object at 0x8b7d0>
>>>
As you see -- no difference, and no info beyond the class and object's id. If you only override one of the two...:
>>> class Sic(object):
... def __repr__(self): return 'foo'
...
>>> print(str(Sic()))
foo
>>> print(repr(Sic()))
foo
>>> class Sic(object):
... def __str__(self): return 'foo'
...
>>> print(str(Sic()))
foo
>>> print(repr(Sic()))
<__main__.Sic object at 0x2617f0>
>>>
as you see, if you override __repr__, that's ALSO used for __str__, but not vice versa.
Other crucial tidbits to know: __str__ on a built-in container uses the __repr__, NOT the __str__, for the items it contains. And, despite the words on the subject found in typical docs, hardly anybody bothers making the __repr__ of objects be a string that eval may use to build an equal object (it's just too hard, AND not knowing how the relevant module was actually imported makes it actually flat out impossible).
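A quick illustration of that container behavior, continuing the Sic example above:
>>> class Sic(object):
...     def __repr__(self): return 'repr-Sic'
...     def __str__(self): return 'str-Sic'
...
>>> print(Sic())
str-Sic
>>> print([Sic()])
[repr-Sic]
>>>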
So, my advice: focus on making __str__ reasonably human-readable, and __repr__ as unambiguous as you possibly can, even if that interferes with the fuzzy unattainable goal of making __repr__'s returned value acceptable as input to eval()!
A:
__repr__: a representation of the Python object from which eval can usually reconstruct the object
__str__: whatever you consider the object to be in text form
e.g.
>>> s="""w'o"w"""
>>> repr(s)
'\'w\\\'o"w\''
>>> str(s)
'w\'o"w'
>>> eval(str(s))==s
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1
w'o"w
^
SyntaxError: EOL while scanning single-quoted string
>>> eval(repr(s))==s
True
A:
In short, the goal of __repr__ is to be unambiguous and __str__ is to be
readable.
Here is a good example:
>>> import datetime
>>> today = datetime.datetime.now()
>>> str(today)
'2012-03-14 09:21:58.130922'
>>> repr(today)
'datetime.datetime(2012, 3, 14, 9, 21, 58, 130922)'
Read this documentation for repr:
repr(object)
Return a string containing a printable representation of an object. This is the same value yielded by conversions (reverse
quotes). It is sometimes useful to be able to access this operation as
an ordinary function. For many types, this function makes an attempt
to return a string that would yield an object with the same value when
passed to eval(), otherwise the representation is a string enclosed in
angle brackets that contains the name of the type of the object
together with additional information often including the name and
address of the object. A class can control what this function returns
for its instances by defining a __repr__() method.
Here is the documentation for str:
str(object='')
Return a string containing a nicely printable
representation of an object. For strings, this returns the string
itself. The difference with repr(object) is that str(object) does not
always attempt to return a string that is acceptable to eval(); its
goal is to return a printable string. If no argument is given, returns
the empty string, ''.
A:
What is the difference between __str__ and __repr__ in Python?
__str__ (read as "dunder (double-underscore) string") and __repr__ (read as "dunder-repper" (for "representation")) are both special methods that return strings based on the state of the object.
__repr__ provides backup behavior if __str__ is missing.
So one should first write a __repr__ that allows you to reinstantiate an equivalent object from the string it returns e.g. using eval or by typing it in character-for-character in a Python shell.
At any time later, one can write a __str__ for a user-readable string representation of the instance, when one believes it to be necessary.
__str__
If you print an object, or pass it to format, str.format, or str, then if a __str__ method is defined, that method will be called, otherwise, __repr__ will be used.
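For illustration, a small sketch (class and values are invented) of which hook each of those calls dispatches to:
class Point:
    def __repr__(self):
        return "Point(x=1, y=2)"
    def __str__(self):
        return "(1, 2)"

p = Point()
print(p)                 # (1, 2)           print uses __str__
print(str(p))            # (1, 2)           str uses __str__
print(format(p))         # (1, 2)           format falls back to __str__
print("{}".format(p))    # (1, 2)           str.format uses __str__
print("{!r}".format(p))  # Point(x=1, y=2)  !r forces __repr__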
__repr__
The __repr__ method is called by the builtin function repr and is what is echoed on your python shell when it evaluates an expression that returns an object.
Since it provides a backup for __str__, if you can only write one, start with __repr__
Here's the builtin help on repr:
repr(...)
repr(object) -> string
Return the canonical string representation of the object.
For most object types, eval(repr(object)) == object.
That is, for most objects, if you type in what is printed by repr, you should be able to create an equivalent object. But this is not the default implementation.
Default Implementation of __repr__
The default object __repr__ is (C Python source) something like:
def __repr__(self):
return '<{0}.{1} object at {2}>'.format(
type(self).__module__, type(self).__qualname__, hex(id(self)))
That means by default you'll print the module the object is from, the class name, and the hexadecimal representation of its location in memory - for example:
<__main__.Foo object at 0x7f80665abdd0>
This information isn't very useful, but there's no way to derive how one might accurately create a canonical representation of any given instance, and it's better than nothing, at least telling us how we might uniquely identify it in memory.
How can __repr__ be useful?
Let's look at how useful it can be, using the Python shell and datetime objects. First we need to import the datetime module:
import datetime
If we call datetime.now in the shell, we'll see everything we need to recreate an equivalent datetime object. This is created by the datetime __repr__:
>>> datetime.datetime.now()
datetime.datetime(2015, 1, 24, 20, 5, 36, 491180)
If we print a datetime object, we see a nice human readable (in fact, ISO) format. This is implemented by datetime's __str__:
>>> print(datetime.datetime.now())
2015-01-24 20:05:44.977951
It is a simple matter to recreate the object we lost because we didn't assign it to a variable by copying and pasting from the __repr__ output, and then printing it, and we get it in the same human readable output as the other object:
>>> the_past = datetime.datetime(2015, 1, 24, 20, 5, 36, 491180)
>>> print(the_past)
2015-01-24 20:05:36.491180
How do I implement them?
As you're developing, you'll want to be able to reproduce objects in the same state, if possible. This, for example, is how the datetime object defines __repr__ (Python source). It is fairly complex, because of all of the attributes needed to reproduce such an object:
def __repr__(self):
"""Convert to formal string, for repr()."""
L = [self._year, self._month, self._day, # These are never zero
self._hour, self._minute, self._second, self._microsecond]
if L[-1] == 0:
del L[-1]
if L[-1] == 0:
del L[-1]
s = "%s.%s(%s)" % (self.__class__.__module__,
self.__class__.__qualname__,
", ".join(map(str, L)))
if self._tzinfo is not None:
assert s[-1:] == ")"
s = s[:-1] + ", tzinfo=%r" % self._tzinfo + ")"
if self._fold:
assert s[-1:] == ")"
s = s[:-1] + ", fold=1)"
return s
If you want your object to have a more human readable representation, you can implement __str__ next. Here's how the datetime object (Python source) implements __str__, which it easily does because it already has a function to display it in ISO format:
def __str__(self):
"Convert to string, for str()."
return self.isoformat(sep=' ')
Set __repr__ = __str__?
This is a critique of another answer here that suggests setting __repr__ = __str__.
Setting __repr__ = __str__ is silly - __repr__ is a fallback for __str__, and a __repr__, written for developers to use while debugging, should be written before you write a __str__.
You need a __str__ only when you need a textual representation of the object.
Conclusion
Define __repr__ for objects you write so you and other developers have a reproducible example when using it as you develop. Define __str__ when you need a human readable string representation of it.
A:
On page 358 of the book Python scripting for computational science by Hans Petter Langtangen, it clearly states that
The __repr__ aims at a complete string representation of the object;
The __str__ is to return a nice string for printing.
So, I prefer to understand them as
repr = reproduce
str = string (representation)
from the user's point of view
although this is a misunderstanding I made when learning python.
A small but good example is also given on the same page as follows:
Example
In [38]: str('s')
Out[38]: 's'
In [39]: repr('s')
Out[39]: "'s'"
In [40]: eval(str('s'))
Traceback (most recent call last):
File "<ipython-input-40-abd46c0c43e7>", line 1, in <module>
eval(str('s'))
File "<string>", line 1, in <module>
NameError: name 's' is not defined
In [41]: eval(repr('s'))
Out[41]: 's'
A:
Apart from all the answers given, I would like to add a few points:
1) __repr__() is invoked when you simply write object's name on interactive python console and press enter.
2) __str__() is invoked when you use object with print statement.
3) If __str__ is missing, then print and any function using str() invoke __repr__() of the object.
4) __str__() of containers, when invoked, will execute the __repr__() method of its contained elements (see the sketch after this list).
5) str() called within __str__() could potentially recurse without a base case, and error on maximum recursion depth.
6) __repr__() can call repr() which will attempt to avoid infinite recursion automatically, replacing an already represented object with ....
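A short sketch (class name invented) illustrating points 2 and 4 above:
class Word:
    def __init__(self, text):
        self.text = text
    def __str__(self):
        return self.text
    def __repr__(self):
        return f"Word({self.text!r})"

w = Word("hello")
print(w)       # hello -- print uses __str__ (point 2)
print([w, w])  # [Word('hello'), Word('hello')] -- the list's __str__ uses the elements' __repr__ (point 4)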
A:
(2020 entry)
Q: What's the difference between __str__() and __repr__()?
TL;DR: str() & __str__() return human-readable output meant for end users; repr() & __repr__() return a string that is a valid Python expression you could pass back to eval().
LONG
This question has been around a long time, and there are a variety of answers of which most are correct (not to mention from several Python community legends[!]). However when it comes down to the nitty-gritty, this question is analogous to asking the difference between the str() and repr() built-in functions. I'm going to describe the differences in my own words (which means I may be "borrowing" liberally from Core Python Programming so pls forgive me).
Both str() and repr() have the same basic job: their goal is to return a string representation of a Python object. What kind of string representation is what differentiates them.
str() & __str__() return a printable string representation of
an object... something human-readable/for human consumption
repr() & __repr__() return a string representation of an object that is a valid Python expression, an object you can pass to eval() or type into the Python shell without getting an error.
For example, let's assign a string to x and an int to y, and simply showing human-readable string versions of each:
>>> x, y = 'foo', 123
>>> str(x), str(y)
('foo', '123')
Can we take what is inside the quotes in both cases and enter them verbatim into the Python interpreter? Let's give it a try:
>>> 123
123
>>> foo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'foo' is not defined
Clearly you can for an int but not necessarily for a str. Similarly, while I can pass '123' to eval(), that doesn't work for 'foo':
>>> eval('123')
123
>>> eval('foo')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 1, in <module>
NameError: name 'foo' is not defined
So this tells you the Python shell just eval()s what you give it. Got it? Now, let's repr() both expressions and see what we get. More specifically, take its output and dump those out in the interpreter (there's a point to this which we'll address afterwards):
>>> repr(x), repr(y)
("'foo'", '123')
>>> 123
123
>>> 'foo'
'foo'
Wow, they both work? That's because 'foo', while a printable string representation of that string, is not evaluatable, but "'foo'" is. 123 is a valid Python int whether produced by str() or repr(). What happens when we call eval() with these?
>>> eval('123')
123
>>> eval("'foo'")
'foo'
It works because 123 and 'foo' are valid Python objects. Another key takeaway is that while sometimes both return the same thing (the same string representation), that's not always the case. (And yes, yes, I can go create a variable foo where the eval() works, but that's not the point.)
More factoids about both pairs
Sometimes, str() and repr() are called implicitly, meaning they're called on behalf of users: when users execute print (Py1/Py2) or call print() (Py3+), even if users don't call str() explicitly, such a call is made on their behalf before the object is displayed.
In the Python shell (interactive interpreter), if you enter a variable at the >>> prompt and press RETURN, the interpreter displays the results of repr() implicitly called on that object.
To connect str() and repr() to __str__() and __repr__(), realize that calls to the built-in functions, i.e., str(x) or repr(y), result in calling their object's corresponding special methods: x.__str__() or y.__repr__()
By implementing __str__() and __repr__() for your Python classes, you overload the built-in functions (str() and repr()), allowing instances of your classes to be passed in to str() and repr(). When such calls are made, they turn around and call the class' __str__() and __repr__() (per #3).
A:
To put it simply:
__str__ is used to show a string representation of your object that can be read easily by others.
__repr__ is used to show an unambiguous string representation of the object itself.
Let's say I want to create a Fraction class where the string representation of a fraction is '(1/2)' and the object (Fraction class) is to be represented as 'Fraction (1,2)'
So we can create a simple Fraction class:
class Fraction:
def __init__(self, num, den):
self.__num = num
self.__den = den
def __str__(self):
return '(' + str(self.__num) + '/' + str(self.__den) + ')'
def __repr__(self):
return 'Fraction (' + str(self.__num) + ',' + str(self.__den) + ')'
f = Fraction(1,2)
print('I want to represent the Fraction STRING as ' + str(f)) # (1/2)
print('I want to represent the Fraction OBJECT as ', repr(f)) # Fraction (1,2)
A:
From an (An Unofficial) Python Reference Wiki (archive copy) by effbot:
__str__ "computes the "informal" string representation of an object. This differs from __repr__ in that it does not have to be a valid Python expression: a more convenient or concise representation may be used instead."
A:
In all honesty, eval(repr(obj)) is never used. If you find yourself using it, you should stop, because eval is dangerous, and strings are a very inefficient way to serialize your objects (use pickle instead).
Therefore, I would recommend setting __repr__ = __str__. The reason is that str(list) calls repr on the elements (I consider this to be one of the biggest design flaws of Python that was not addressed by Python 3). An actual repr will probably not be very helpful as the output of print([your, objects]).
To qualify this, in my experience, the most useful use case of the repr function is to put a string inside another string (using string formatting). This way, you don't have to worry about escaping quotes or anything. But note that there is no eval happening here.
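A small hedged example of that formatting use case (the strings are made up):
user_input = 'He said "hi" and left'
# %s (i.e. str()) interpolates the bare text:
print("Unexpected value: %s" % user_input)  # Unexpected value: He said "hi" and left
# %r (i.e. repr()) embeds it as a quoted, escaped Python literal:
print("Unexpected value: %r" % user_input)  # Unexpected value: 'He said "hi" and left'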
A:
str - Creates a new string object from the given object.
repr - Returns the canonical string representation of the object.
The differences:
str():
makes object readable
generates output for end-user
repr():
needs code that reproduces object
generates output for developer
A:
From the book Fluent Python:
A basic requirement for a Python object is to provide usable
string representations of itself, one used for debugging and
logging, another for presentation to end users. That is why the
special methods __repr__ and __str__ exist in the data model.
A:
One aspect that is missing in other answers. It's true that in general the pattern is:
Goal of __str__: human-readable
Goal of __repr__: unambiguous, possibly machine-readable via eval
Unfortunately, this differentiation is flawed, because the Python REPL and also IPython use __repr__ for printing objects in a REPL console (see related questions for Python and IPython). Thus, projects which are targeted for interactive console work (e.g., Numpy or Pandas) have started to ignore above rules and provide a human-readable __repr__ implementation instead.
A:
You can get some insight from this code:
class Foo():
def __repr__(self):
return("repr")
def __str__(self):
return("str")
foo = Foo()
foo        # echoed in the interactive shell -> uses __repr__ -> 'repr'
print(foo) # print() -> uses __str__ -> 'str'
A:
__str__ can be invoked on an object by calling str(obj) and should return a human readable string.
__repr__ can be invoked on an object by calling repr(obj) and should return a string describing the internal state of the object (its fields/attributes).
This example may help:
class C1:pass
class C2:
def __str__(self):
return str(f"{self.__class__.__name__} class str ")
class C3:
def __repr__(self):
return str(f"{self.__class__.__name__} class repr")
class C4:
def __str__(self):
return str(f"{self.__class__.__name__} class str ")
def __repr__(self):
return str(f"{self.__class__.__name__} class repr")
ci1 = C1()
ci2 = C2()
ci3 = C3()
ci4 = C4()
print(ci1) #<__main__.C1 object at 0x0000024C44A80C18>
print(str(ci1)) #<__main__.C1 object at 0x0000024C44A80C18>
print(repr(ci1)) #<__main__.C1 object at 0x0000024C44A80C18>
print(ci2) #C2 class str
print(str(ci2)) #C2 class str
print(repr(ci2)) #<__main__.C2 object at 0x0000024C44AE12E8>
print(ci3) #C3 class repr
print(str(ci3)) #C3 class repr
print(repr(ci3)) #C3 class repr
print(ci4) #C4 class str
print(str(ci4)) #C4 class str
print(repr(ci4)) #C4 class repr
A:
>>> print(decimal.Decimal(23) / decimal.Decimal("1.05"))
21.90476190476190476190476190
>>> decimal.Decimal(23) / decimal.Decimal("1.05")
Decimal('21.90476190476190476190476190')
When print() is called on the result of decimal.Decimal(23) / decimal.Decimal("1.05") the raw number is printed; this output is in string form which can be achieved with __str__(). If we simply enter the expression we get a decimal.Decimal output — this output is in representational form which can be achieved with __repr__(). All Python objects have two output forms. String form is designed to be human-readable. The representational form is designed to produce output that if fed to a Python interpreter would (when possible) reproduce the represented object.
A:
Excellent answers already cover the difference between __str__ and __repr__, which for me boils down to the former being readable even by an end user, and the latter being as useful as possible to developers. Given that, I find that the default implementation of __repr__ often fails to achieve this goal because it omits information useful to developers.
For this reason, if I have a simple enough __str__, I generally just try to get the best of both worlds with something like:
def __repr__(self):
return '{0} ({1})'.format(object.__repr__(self), str(self))
A:
One important thing to keep in mind is that container's __str__ uses contained objects' __repr__.
>>> from datetime import datetime
>>> from decimal import Decimal
>>> print (Decimal('52'), datetime.now())
(Decimal('52'), datetime.datetime(2015, 11, 16, 10, 51, 26, 185000))
>>> str((Decimal('52'), datetime.now()))
"(Decimal('52'), datetime.datetime(2015, 11, 16, 10, 52, 22, 176000))"
Python favors unambiguity over readability, the __str__ call of a tuple calls the contained objects' __repr__, the "formal" representation of an object. Although the formal representation is harder to read than an informal one, it is unambiguous and more robust against bugs.
A:
In a nutshell:
class Demo:
def __repr__(self):
return 'repr'
def __str__(self):
return 'str'
demo = Demo()
print(demo) # use __str__, output 'str' to stdout
s = str(demo) # __str__ is used, return 'str'
r = repr(demo) # __repr__ is used, return 'repr'
import logging
logger = logging.getLogger(logging.INFO)
logger.info(demo) # use __str__, output 'str' to stdout
from pprint import pprint, pformat
pprint(demo) # use __repr__, output 'repr' to stdout
result = pformat(demo) # use __repr__, result is string which value is 'str'
A:
Understand __str__ and __repr__ intuitively and you can permanently tell them apart.
__str__ returns the string-disguised body of a given object, for readability by human eyes.
__repr__ returns the real flesh body of a given object (the object itself), for unambiguous identification.
See it in an example
In [30]: str(datetime.datetime.now())
Out[30]: '2017-12-07 15:41:14.002752'
Disguised in string form
As to __repr__
In [32]: datetime.datetime.now()
Out[32]: datetime.datetime(2017, 12, 7, 15, 43, 27, 297769)
Present in its real body, which allows it to be manipulated directly.
We can conveniently do arithmetic operations on __repr__ results.
In [33]: datetime.datetime.now()
Out[33]: datetime.datetime(2017, 12, 7, 15, 47, 9, 741521)
In [34]: datetime.datetime(2017, 12, 7, 15, 47, 9, 741521) - datetime.datetime(2
...: 017, 12, 7, 15, 43, 27, 297769)
Out[34]: datetime.timedelta(0, 222, 443752)
If we apply the same operation to the __str__ results
In [35]: '2017-12-07 15:43:14.002752' - '2017-12-07 15:41:14.002752'
TypeError: unsupported operand type(s) for -: 'str' and 'str'
it returns nothing but an error.
Another example.
In [36]: str('string_body')
Out[36]: 'string_body' # in string form
In [37]: repr('real_body')
Out[37]: "'real_body'" #its real body hide inside
Hope this helps you build concrete ground to explore more answers.
A:
__str__ must return a string object, whereas __repr__ should return a string that ideally reads as a valid Python expression.
If the __str__ implementation is missing, then __repr__ is used as a fallback. There is no such fallback to __str__ if __repr__ is missing (the default object.__repr__ is used instead).
If the __repr__ function already returns a suitable string representation of the object, we can skip implementing __str__.
Source: https://www.journaldev.com/22460/python-str-repr-functions
A:
__repr__ is used everywhere, except by print and str() (when a __str__ is defined!)
A:
Every object inherits __repr__ from the base class (object) from which all objects are created.
class Person:
pass
p=Person()
if you call repr(p) you will get this as default:
<__main__.Person object at 0x7fb2604f03a0>
But if you call str(p) you will get the same output. That is because when __str__ does not exist, Python calls __repr__.
Let's implement our own __str__
class Person:
def __init__(self,name,age):
self.name=name
self.age=age
def __repr__(self):
print("__repr__ called")
return f"Person(name='{self.name}',age={self.age})"
p=Person("ali",20)
print(p) and str(p) will return
__repr__ called
Person(name='ali',age=20)
let's add __str__()
class Person:
def __init__(self, name, age):
self.name = name
self.age = age
def __repr__(self):
print('__repr__ called')
return f"Person(name='{self.name}, age=self.age')"
def __str__(self):
print('__str__ called')
return self.name
p=Person("ali",20)
if we call print(p) and str(p), it will call __str__() so it will return
__str__ called
ali
repr(p) will return
__repr__ called
"Person(name='ali', age=20)"
Let's omit __repr__ and just implement __str__.
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __str__(self):
        print('__str__ called')
        return self.name
p=Person('ali',20)
print(p) will look for the __str__ and will return:
__str__ called
ali
NOTE: if we had both __repr__ and __str__ defined, f'name is {p}' would call __str__.
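As a complement to that note, a compact variant of the class above (without the print side effects, so the outputs are easier to read) showing the !s and !r conversion flags in f-strings:
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age
    def __repr__(self):
        return f"Person(name={self.name!r}, age={self.age})"
    def __str__(self):
        return self.name

p = Person('ali', 20)
print(f"plain:   {p}")    # plain:   ali                         -> __str__
print(f"as str:  {p!s}")  # as str:  ali                         -> __str__
print(f"as repr: {p!r}")  # as repr: Person(name='ali', age=20)  -> __repr__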
A:
Programmers with prior experience in languages with a toString method tend to implement __str__ and not __repr__.
If you only implement one of these special methods in Python, choose __repr__.
From Fluent Python book, by Ramalho, Luciano.
A:
Basically, __str__ or str() is used for creating output that is human-readable and meant for end users.
On the other hand, repr() or __repr__ mainly returns a canonical string representation of the object, which serves the purpose of debugging and development and helps programmers.
A:
repr() is used when we debug or log; it is meant for developers to understand the code.
On the other hand, str() is for non-developers such as QA or end users.
class Customer:
def __init__(self,name):
self.name = name
def __repr__(self):
return "Customer('{}')".format(self.name)
def __str__(self):
return f"cunstomer name is {self.name}"
cus_1 = Customer("Thusi")
print(repr(cus_1)) #print(cus_1.__repr__())
print(str(cus_1)) #print(cus_1.__str__())
A:
As far as I see it:
__str__ is used for converting an object into a string, which makes the object more human-readable (for customers like me). However, __repr__ must be used for representing the class's object as a string, which should be more unambiguous. So __repr__ is most likely used by developers for development and debugging.
| What is the difference between __str__ and __repr__? | What is the difference between __str__ and __repr__ in Python?
| [
"Alex summarized well but, surprisingly, was too succinct.\nFirst, let me reiterate the main points in Alex’s post:\n\nThe default implementation is useless (it’s hard to think of one which wouldn’t be, but yeah)\n__repr__ goal is to be unambiguous\n__str__ goal is to be readable\nContainer’s __str__ uses contained objects’ __repr__\n\nDefault implementation is useless\nThis is mostly a surprise because Python’s defaults tend to be fairly useful. However, in this case, having a default for __repr__ which would act like:\nreturn \"%s(%r)\" % (self.__class__, self.__dict__)\n\nwould have been too dangerous (for example, too easy to get into infinite recursion if objects reference each other). So Python cops out. Note that there is one default which is true: if __repr__ is defined, and __str__ is not, the object will behave as though __str__=__repr__.\nThis means, in simple terms: almost every object you implement should have a functional __repr__ that’s usable for understanding the object. Implementing __str__ is optional: do that if you need a “pretty print” functionality (for example, used by a report generator).\nThe goal of __repr__ is to be unambiguous\nLet me come right out and say it — I do not believe in debuggers. I don’t really know how to use any debugger, and have never used one seriously. Furthermore, I believe that the big fault in debuggers is their basic nature — most failures I debug happened a long long time ago, in a galaxy far far away. This means that I do believe, with religious fervor, in logging. Logging is the lifeblood of any decent fire-and-forget server system. Python makes it easy to log: with maybe some project specific wrappers, all you need is a\nlog(INFO, \"I am in the weird function and a is\", a, \"and b is\", b, \"but I got a null C — using default\", default_c)\n\nBut you have to do the last step — make sure every object you implement has a useful repr, so code like that can just work. This is why the “eval” thing comes up: if you have enough information so eval(repr(c))==c, that means you know everything there is to know about c. If that’s easy enough, at least in a fuzzy way, do it. If not, make sure you have enough information about c anyway. I usually use an eval-like format: \"MyClass(this=%r,that=%r)\" % (self.this,self.that). It does not mean that you can actually construct MyClass, or that those are the right constructor arguments — but it is a useful form to express “this is everything you need to know about this instance”.\nNote: I used %r above, not %s. You always want to use repr() [or %r formatting character, equivalently] inside __repr__ implementation, or you’re defeating the goal of repr. You want to be able to differentiate MyClass(3) and MyClass(\"3\").\nThe goal of __str__ is to be readable\nSpecifically, it is not intended to be unambiguous — notice that str(3)==str(\"3\"). Likewise, if you implement an IP abstraction, having the str of it look like 192.168.1.1 is just fine. When implementing a date/time abstraction, the str can be \"2010/4/12 15:35:22\", etc. The goal is to represent it in a way that a user, not a programmer, would want to read it. Chop off useless digits, pretend to be some other class — as long is it supports readability, it is an improvement.\nContainer’s __str__ uses contained objects’ __repr__\nThis seems surprising, doesn’t it? It is a little, but how readable would it be if it used their __str__?\n[moshe is, 3, hello\nworld, this is a list, oh I don't know, containing just 4 elements]\n\nNot very. 
Specifically, the strings in a container would find it way too easy to disturb its string representation. In the face of ambiguity, remember, Python resists the temptation to guess. If you want the above behavior when you’re printing a list, just\nprint(\"[\" + \", \".join(l) + \"]\")\n\n(you can probably also figure out what to do about dictionaries.\nSummary\nImplement __repr__ for any class you implement. This should be second nature. Implement __str__ if you think it would be useful to have a string version which errs on the side of readability.\n",
"My rule of thumb: __repr__ is for developers, __str__ is for customers.\n",
"Unless you specifically act to ensure otherwise, most classes don't have helpful results for either:\n>>> class Sic(object): pass\n... \n>>> print(str(Sic()))\n<__main__.Sic object at 0x8b7d0>\n>>> print(repr(Sic()))\n<__main__.Sic object at 0x8b7d0>\n>>> \n\nAs you see -- no difference, and no info beyond the class and object's id. If you only override one of the two...:\n>>> class Sic(object): \n... def __repr__(self): return 'foo'\n... \n>>> print(str(Sic()))\nfoo\n>>> print(repr(Sic()))\nfoo\n>>> class Sic(object):\n... def __str__(self): return 'foo'\n... \n>>> print(str(Sic()))\nfoo\n>>> print(repr(Sic()))\n<__main__.Sic object at 0x2617f0>\n>>> \n\nas you see, if you override __repr__, that's ALSO used for __str__, but not vice versa.\nOther crucial tidbits to know: __str__ on a built-on container uses the __repr__, NOT the __str__, for the items it contains. And, despite the words on the subject found in typical docs, hardly anybody bothers making the __repr__ of objects be a string that eval may use to build an equal object (it's just too hard, AND not knowing how the relevant module was actually imported makes it actually flat out impossible).\nSo, my advice: focus on making __str__ reasonably human-readable, and __repr__ as unambiguous as you possibly can, even if that interferes with the fuzzy unattainable goal of making __repr__'s returned value acceptable as input to __eval__!\n",
"__repr__: representation of python object usually eval will convert it back to that object\n__str__: is whatever you think is that object in text form\ne.g.\n>>> s=\"\"\"w'o\"w\"\"\"\n>>> repr(s)\n'\\'w\\\\\\'o\"w\\''\n>>> str(s)\n'w\\'o\"w'\n>>> eval(str(s))==s\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"<string>\", line 1\n w'o\"w\n ^\nSyntaxError: EOL while scanning single-quoted string\n>>> eval(repr(s))==s\nTrue\n\n",
"\nIn short, the goal of __repr__ is to be unambiguous and __str__ is to be\n readable.\n\nHere is a good example:\n>>> import datetime\n>>> today = datetime.datetime.now()\n>>> str(today)\n'2012-03-14 09:21:58.130922'\n>>> repr(today)\n'datetime.datetime(2012, 3, 14, 9, 21, 58, 130922)'\n\nRead this documentation for repr:\n\nrepr(object)\nReturn a string containing a printable representation of an object. This is the same value yielded by conversions (reverse\n quotes). It is sometimes useful to be able to access this operation as\n an ordinary function. For many types, this function makes an attempt\n to return a string that would yield an object with the same value when\n passed to eval(), otherwise the representation is a string enclosed in\n angle brackets that contains the name of the type of the object\n together with additional information often including the name and\n address of the object. A class can control what this function returns\n for its instances by defining a __repr__() method.\n\nHere is the documentation for str:\n\nstr(object='')\nReturn a string containing a nicely printable\n representation of an object. For strings, this returns the string\n itself. The difference with repr(object) is that str(object) does not\n always attempt to return a string that is acceptable to eval(); its\n goal is to return a printable string. If no argument is given, returns\n the empty string, ''.\n\n",
"\nWhat is the difference between __str__ and __repr__ in Python?\n\n__str__ (read as \"dunder (double-underscore) string\") and __repr__ (read as \"dunder-repper\" (for \"representation\")) are both special methods that return strings based on the state of the object.\n__repr__ provides backup behavior if __str__ is missing.\nSo one should first write a __repr__ that allows you to reinstantiate an equivalent object from the string it returns e.g. using eval or by typing it in character-for-character in a Python shell.\nAt any time later, one can write a __str__ for a user-readable string representation of the instance, when one believes it to be necessary.\n__str__\nIf you print an object, or pass it to format, str.format, or str, then if a __str__ method is defined, that method will be called, otherwise, __repr__ will be used.\n__repr__\nThe __repr__ method is called by the builtin function repr and is what is echoed on your python shell when it evaluates an expression that returns an object.\nSince it provides a backup for __str__, if you can only write one, start with __repr__\nHere's the builtin help on repr:\nrepr(...)\n repr(object) -> string\n \n Return the canonical string representation of the object.\n For most object types, eval(repr(object)) == object.\n\nThat is, for most objects, if you type in what is printed by repr, you should be able to create an equivalent object. But this is not the default implementation.\nDefault Implementation of __repr__\nThe default object __repr__ is (C Python source) something like:\ndef __repr__(self):\n return '<{0}.{1} object at {2}>'.format(\n type(self).__module__, type(self).__qualname__, hex(id(self)))\n\nThat means by default you'll print the module the object is from, the class name, and the hexadecimal representation of its location in memory - for example:\n<__main__.Foo object at 0x7f80665abdd0>\n\nThis information isn't very useful, but there's no way to derive how one might accurately create a canonical representation of any given instance, and it's better than nothing, at least telling us how we might uniquely identify it in memory.\nHow can __repr__ be useful?\nLet's look at how useful it can be, using the Python shell and datetime objects. First we need to import the datetime module:\nimport datetime\n\nIf we call datetime.now in the shell, we'll see everything we need to recreate an equivalent datetime object. This is created by the datetime __repr__:\n>>> datetime.datetime.now()\ndatetime.datetime(2015, 1, 24, 20, 5, 36, 491180)\n\nIf we print a datetime object, we see a nice human readable (in fact, ISO) format. This is implemented by datetime's __str__:\n>>> print(datetime.datetime.now())\n2015-01-24 20:05:44.977951\n\nIt is a simple matter to recreate the object we lost because we didn't assign it to a variable by copying and pasting from the __repr__ output, and then printing it, and we get it in the same human readable output as the other object:\n>>> the_past = datetime.datetime(2015, 1, 24, 20, 5, 36, 491180)\n>>> print(the_past)\n2015-01-24 20:05:36.491180\n\n#How do I implement them?\nAs you're developing, you'll want to be able to reproduce objects in the same state, if possible. This, for example, is how the datetime object defines __repr__ (Python source). 
It is fairly complex, because of all of the attributes needed to reproduce such an object:\ndef __repr__(self):\n \"\"\"Convert to formal string, for repr().\"\"\"\n L = [self._year, self._month, self._day, # These are never zero\n self._hour, self._minute, self._second, self._microsecond]\n if L[-1] == 0:\n del L[-1]\n if L[-1] == 0:\n del L[-1]\n s = \"%s.%s(%s)\" % (self.__class__.__module__,\n self.__class__.__qualname__,\n \", \".join(map(str, L)))\n if self._tzinfo is not None:\n assert s[-1:] == \")\"\n s = s[:-1] + \", tzinfo=%r\" % self._tzinfo + \")\"\n if self._fold:\n assert s[-1:] == \")\"\n s = s[:-1] + \", fold=1)\"\n return s\n\nIf you want your object to have a more human readable representation, you can implement __str__ next. Here's how the datetime object (Python source) implements __str__, which it easily does because it already has a function to display it in ISO format:\ndef __str__(self):\n \"Convert to string, for str().\"\n return self.isoformat(sep=' ')\n\nSet __repr__ = __str__?\nThis is a critique of another answer here that suggests setting __repr__ = __str__.\nSetting __repr__ = __str__ is silly - __repr__ is a fallback for __str__ and a __repr__, written for developers usage in debugging, should be written before you write a __str__.\nYou need a __str__ only when you need a textual representation of the object.\nConclusion\nDefine __repr__ for objects you write so you and other developers have a reproducible example when using it as you develop. Define __str__ when you need a human readable string representation of it.\n",
"On page 358 of the book Python scripting for computational science by Hans Petter Langtangen, it clearly states that \n\nThe __repr__ aims at a complete string representation of the object;\nThe __str__ is to return a nice string for printing.\n\nSo, I prefer to understand them as\n\nrepr = reproduce\nstr = string (representation)\n\nfrom the user's point of view\nalthough this is a misunderstanding I made when learning python.\nA small but good example is also given on the same page as follows:\nExample\nIn [38]: str('s')\nOut[38]: 's'\n\nIn [39]: repr('s')\nOut[39]: \"'s'\"\n\nIn [40]: eval(str('s'))\nTraceback (most recent call last):\n\n File \"<ipython-input-40-abd46c0c43e7>\", line 1, in <module>\n eval(str('s'))\n\n File \"<string>\", line 1, in <module>\n\nNameError: name 's' is not defined\n\n\nIn [41]: eval(repr('s'))\nOut[41]: 's'\n\n",
"Apart from all the answers given, I would like to add few points :-\n1) __repr__() is invoked when you simply write object's name on interactive python console and press enter.\n2) __str__() is invoked when you use object with print statement.\n3) In case, if __str__ is missing, then print and any function using str() invokes __repr__() of object.\n4) __str__() of containers, when invoked will execute __repr__() method of its contained elements.\n5) str() called within __str__() could potentially recurse without a base case, and error on maximum recursion depth.\n6) __repr__() can call repr() which will attempt to avoid infinite recursion automatically, replacing an already represented object with ....\n",
"(2020 entry)\nQ: What's the difference between __str__() and __repr__()?\nTL;DR:\n\nLONG\nThis question has been around a long time, and there are a variety of answers of which most are correct (not to mention from several Python community legends[!]). However when it comes down to the nitty-gritty, this question is analogous to asking the difference between the str() and repr() built-in functions. I'm going to describe the differences in my own words (which means I may be \"borrowing\" liberally from Core Python Programming so pls forgive me).\nBoth str() and repr() have the same basic job: their goal is to return a string representation of a Python object. What kind of string representation is what differentiates them.\n\nstr() & __str__() return a printable string representation of\nan object... something human-readable/for human consumption\nrepr() & __repr__() return a string representation of an object that is a valid Python expression, an object you can pass to eval() or type into the Python shell without getting an error.\n\nFor example, let's assign a string to x and an int to y, and simply showing human-readable string versions of each:\n>>> x, y = 'foo', 123\n>>> str(x), str(y)\n('foo', '123')\n\nCan we take what is inside the quotes in both cases and enter them verbatim into the Python interpreter? Let's give it a try:\n>>> 123\n123\n>>> foo\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nNameError: name 'foo' is not defined\n\nClearly you can for an int but not necessarily for a str. Similarly, while I can pass '123' to eval(), that doesn't work for 'foo':\n>>> eval('123')\n123\n>>> eval('foo')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"<string>\", line 1, in <module>\nNameError: name 'foo' is not defined\n\nSo this tells you the Python shell just eval()s what you give it. Got it? Now, let's repr() both expressions and see what we get. More specifically, take its output and dump those out in the interpreter (there's a point to this which we'll address afterwards):\n>>> repr(x), repr(y)\n(\"'foo'\", '123')\n>>> 123\n123\n>>> 'foo'\n'foo'\n\nWow, they both work? That's because 'foo', while a printable string representation of that string, it's not evaluatable, but \"'foo'\" is. 123 is a valid Python int called by either str() or repr(). What happens when we call eval() with these?\n>>> eval('123')\n123\n>>> eval(\"'foo'\")\n'foo'\n\nIt works because 123 and 'foo' are valid Python objects. Another key takeaway is that while sometimes both return the same thing (the same string representation), that's not always the case. 
(And yes, yes, I can go create a variable foo where the eval() works, but that's not the point.)\nMore factoids about both pairs\n\nSometimes, str() and repr() are called implicitly, meaning they're called on behalf of users: when users execute print (Py1/Py2) or call print() (Py3+), even if users don't call str() explicitly, such a call is made on their behalf before the object is displayed.\nIn the Python shell (interactive interpreter), if you enter a variable at the >>> prompt and press RETURN, the interpreter displays the results of repr() implicitly called on that object.\nTo connect str() and repr() to __str__() and __repr__(), realize that calls to the built-in functions, i.e., str(x) or repr(y) result in calling their object's corresponding special methods: x.__str__() or y.__repr()__\nBy implementing __str__() and __repr__() for your Python classes, you overload the built-in functions (str() and repr()), allowing instances of your classes to be passed in to str() and repr(). When such calls are made, they turn around and call the class' __str__() and __repr__() (per #3).\n\n",
"To put it simply:\n__str__ is used in to show a string representation of your object to be read easily by others.\n__repr__ is used to show a string representation of the object.\nLet's say I want to create a Fraction class where the string representation of a fraction is '(1/2)' and the object (Fraction class) is to be represented as 'Fraction (1,2)'\nSo we can create a simple Fraction class:\nclass Fraction:\n def __init__(self, num, den):\n self.__num = num\n self.__den = den\n\n def __str__(self):\n return '(' + str(self.__num) + '/' + str(self.__den) + ')'\n\n def __repr__(self):\n return 'Fraction (' + str(self.__num) + ',' + str(self.__den) + ')'\n\n\n\nf = Fraction(1,2)\nprint('I want to represent the Fraction STRING as ' + str(f)) # (1/2)\nprint('I want to represent the Fraction OBJECT as ', repr(f)) # Fraction (1,2)\n\n",
"From an (An Unofficial) Python Reference Wiki (archive copy) by effbot:\n__str__ \"computes the \"informal\" string representation of an object. This differs from __repr__ in that it does not have to be a valid Python expression: a more convenient or concise representation may be used instead.\"\n",
"In all honesty, eval(repr(obj)) is never used. If you find yourself using it, you should stop, because eval is dangerous, and strings are a very inefficient way to serialize your objects (use pickle instead).\nTherefore, I would recommend setting __repr__ = __str__. The reason is that str(list) calls repr on the elements (I consider this to be one of the biggest design flaws of Python that was not addressed by Python 3). An actual repr will probably not be very helpful as the output of print([your, objects]).\nTo qualify this, in my experience, the most useful use case of the repr function is to put a string inside another string (using string formatting). This way, you don't have to worry about escaping quotes or anything. But note that there is no eval happening here.\n",
"str - Creates a new string object from the given object.\nrepr - Returns the canonical string representation of the object.\nThe differences:\nstr():\n\nmakes object readable\ngenerates output for end-user\n\nrepr():\n\nneeds code that reproduces object\ngenerates output for developer\n\n",
"From the book Fluent Python:\n\nA basic requirement for a Python object is to provide usable \n string representations of itself, one used for debugging and\n logging, another for presentation to end users. That is why the\n special methods __repr__ and __str__ exist in the data model.\n\n",
"One aspect that is missing in other answers. It's true that in general the pattern is:\n\nGoal of __str__: human-readable\nGoal of __repr__: unambiguous, possibly machine-readable via eval\n\nUnfortunately, this differentiation is flawed, because the Python REPL and also IPython use __repr__ for printing objects in a REPL console (see related questions for Python and IPython). Thus, projects which are targeted for interactive console work (e.g., Numpy or Pandas) have started to ignore above rules and provide a human-readable __repr__ implementation instead.\n",
"You can get some insight from this code:\nclass Foo():\n def __repr__(self):\n return(\"repr\")\n def __str__(self):\n return(\"str\")\n\nfoo = Foo()\nfoo #repr\nprint(foo) #str\n\n",
"__str__ can be invoked on an object by calling str(obj) and should return a human readable string. \n__repr__ can be invoked on an object by calling repr(obj) and should return internal object (object fields/attributes)\nThis example may help:\nclass C1:pass\n\nclass C2: \n def __str__(self):\n return str(f\"{self.__class__.__name__} class str \")\n\nclass C3: \n def __repr__(self): \n return str(f\"{self.__class__.__name__} class repr\")\n\nclass C4: \n def __str__(self):\n return str(f\"{self.__class__.__name__} class str \")\n def __repr__(self): \n return str(f\"{self.__class__.__name__} class repr\")\n\n\nci1 = C1() \nci2 = C2() \nci3 = C3() \nci4 = C4()\n\nprint(ci1) #<__main__.C1 object at 0x0000024C44A80C18>\nprint(str(ci1)) #<__main__.C1 object at 0x0000024C44A80C18>\nprint(repr(ci1)) #<__main__.C1 object at 0x0000024C44A80C18>\nprint(ci2) #C2 class str\nprint(str(ci2)) #C2 class str\nprint(repr(ci2)) #<__main__.C2 object at 0x0000024C44AE12E8>\nprint(ci3) #C3 class repr\nprint(str(ci3)) #C3 class repr\nprint(repr(ci3)) #C3 class repr\nprint(ci4) #C4 class str \nprint(str(ci4)) #C4 class str \nprint(repr(ci4)) #C4 class repr\n\n",
">>> print(decimal.Decimal(23) / decimal.Decimal(\"1.05\"))\n21.90476190476190476190476190\n>>> decimal.Decimal(23) / decimal.Decimal(\"1.05\")\nDecimal('21.90476190476190476190476190')\n\nWhen print() is called on the result of decimal.Decimal(23) / decimal.Decimal(\"1.05\") the raw number is printed; this output is in string form which can be achieved with __str__(). If we simply enter the expression we get a decimal.Decimal output — this output is in representational form which can be achieved with __repr__(). All Python objects have two output forms. String form is designed to be human-readable. The representational form is designed to produce output that if fed to a Python interpreter would (when possible) reproduce the represented object.\n",
"Excellent answers already cover the difference between __str__ and __repr__, which for me boils down to the former being readable even by an end user, and the latter being as useful as possible to developers. Given that, I find that the default implementation of __repr__ often fails to achieve this goal because it omits information useful to developers.\nFor this reason, if I have a simple enough __str__, I generally just try to get the best of both worlds with something like:\ndef __repr__(self):\n return '{0} ({1})'.format(object.__repr__(self), str(self))\n\n",
"\nOne important thing to keep in mind is that container's __str__ uses contained objects' __repr__.\n\n>>> from datetime import datetime\n>>> from decimal import Decimal\n>>> print (Decimal('52'), datetime.now())\n(Decimal('52'), datetime.datetime(2015, 11, 16, 10, 51, 26, 185000))\n>>> str((Decimal('52'), datetime.now()))\n\"(Decimal('52'), datetime.datetime(2015, 11, 16, 10, 52, 22, 176000))\"\n\nPython favors unambiguity over readability, the __str__ call of a tuple calls the contained objects' __repr__, the \"formal\" representation of an object. Although the formal representation is harder to read than an informal one, it is unambiguous and more robust against bugs.\n",
"In a nutshell:\nclass Demo:\n def __repr__(self):\n return 'repr'\n def __str__(self):\n return 'str'\n\ndemo = Demo()\nprint(demo) # use __str__, output 'str' to stdout\n\ns = str(demo) # __str__ is used, return 'str'\nr = repr(demo) # __repr__ is used, return 'repr'\n\nimport logging\nlogger = logging.getLogger(logging.INFO)\nlogger.info(demo) # use __str__, output 'str' to stdout\n\nfrom pprint import pprint, pformat\npprint(demo) # use __repr__, output 'repr' to stdout\nresult = pformat(demo) # use __repr__, result is string which value is 'str'\n\n",
"Understand __str__ and __repr__ intuitively and permanently distinguish them at all.\n__str__ return the string disguised body of a given object for readable of eyes\n__repr__ return the real flesh body of a given object (return itself) for unambiguity to identify.\nSee it in an example\nIn [30]: str(datetime.datetime.now())\nOut[30]: '2017-12-07 15:41:14.002752'\nDisguised in string form\n\nAs to __repr__\nIn [32]: datetime.datetime.now()\nOut[32]: datetime.datetime(2017, 12, 7, 15, 43, 27, 297769)\nPresence in real body which allows to be manipulated directly.\n\nWe can do arithmetic operation on __repr__ results conveniently.\nIn [33]: datetime.datetime.now()\nOut[33]: datetime.datetime(2017, 12, 7, 15, 47, 9, 741521)\nIn [34]: datetime.datetime(2017, 12, 7, 15, 47, 9, 741521) - datetime.datetime(2\n ...: 017, 12, 7, 15, 43, 27, 297769)\nOut[34]: datetime.timedelta(0, 222, 443752)\n\nif apply the operation on __str__\nIn [35]: '2017-12-07 15:43:14.002752' - '2017-12-07 15:41:14.002752'\nTypeError: unsupported operand type(s) for -: 'str' and 'str'\n\nReturns nothing but error.\nAnother example.\nIn [36]: str('string_body')\nOut[36]: 'string_body' # in string form\n\nIn [37]: repr('real_body')\nOut[37]: \"'real_body'\" #its real body hide inside\n\nHope this help you build concrete grounds to explore more answers.\n",
"\n__str__ must return string object whereas __repr__ can return any python expression.\nIf __str__ implementation is missing then __repr__ function is used as fallback. There is no fallback if __repr__ function implementation is missing.\nIf __repr__ function is returning String representation of the object, we can skip implementation of __str__ function.\n\nSource: https://www.journaldev.com/22460/python-str-repr-functions\n",
"__repr__ is used everywhere, except by print and str methods (when a __str__is defined !)\n",
"Every object inherits __repr__ from the base class that all objects created.\nclass Person:\n pass\n\np=Person()\n\nif you call repr(p) you will get this as default:\n <__main__.Person object at 0x7fb2604f03a0>\n\nBut if you call str(p) you will get the same output. it is because when __str__ does not exist, Python calls __repr__\nLet's implement our own __str__\nclass Person:\n def __init__(self,name,age):\n self.name=name\n self.age=age\n def __repr__(self):\n print(\"__repr__ called\")\n return f\"Person(name='{self.name}',age={self.age})\"\n\np=Person(\"ali\",20)\n\nprint(p) and str(p)will return\n __repr__ called\n Person(name='ali',age=20)\n\nlet's add __str__()\nclass Person:\n def __init__(self, name, age):\n self.name = name\n self.age = age\n \n def __repr__(self):\n print('__repr__ called')\n return f\"Person(name='{self.name}, age=self.age')\"\n \n def __str__(self):\n print('__str__ called')\n return self.name\n\np=Person(\"ali\",20)\n\nif we call print(p) and str(p), it will call __str__() so it will return\n__str__ called\nali\n\nrepr(p) will return\nrepr called\n\"Person(name='ali, age=self.age')\"\nLet's omit __repr__ and just implement __str__.\nclass Person:\ndef __init__(self, name, age):\n self.name = name\n self.age = age\n\ndef __str__(self):\n print('__str__ called')\n return self.name\n\np=Person('ali',20)\n\nprint(p) will look for the __str__ and will return:\n__str__ called\nali\n\nNOTE= if we had __repr__ and __str__ defined, f'name is {p}' would call __str__\n",
"\nProgrammers with prior experience in languages with a toString method tend to implement __str__ and not __repr__.\nIf you only implement one of these special methods in Python, choose __repr__.\n\nFrom Fluent Python book, by Ramalho, Luciano.\n",
"Basically __str__ or str() is used for creating output that is human-readable are must be for end-users.\nOn the other hand, repr() or __repr__ mainly returns canonical string representation of objects which serve the purpose of debugging and development helps the programmers.\n",
"repr() used when we debug or log.It is used for developers to understand code.\none the other hand str() user for non developer like(QA) or user.\nclass Customer:\n def __init__(self,name):\n self.name = name\n def __repr__(self):\n return \"Customer('{}')\".format(self.name)\n def __str__(self):\n return f\"cunstomer name is {self.name}\"\n\ncus_1 = Customer(\"Thusi\")\nprint(repr(cus_1)) #print(cus_1.__repr__()) \nprint(str(cus_1)) #print(cus_1.__str__())\n\n",
"As far as I see it:\n__str__ is used for converting an object as a string, which makes the object more human-readable (for costumers like me). However, __repr__ must be used for representing the class's object as a string, which seems more unambiguous. So __repr__ is most likely used by developers for development and debugging.\n"
] | [
3334,
749,
498,
207,
201,
164,
49,
48,
37,
16,
15,
14,
12,
9,
9,
9,
8,
6,
6,
5,
5,
4,
4,
3,
3,
1,
1,
1,
0
] | [] | [] | [
"magic_methods",
"python",
"repr"
] | stackoverflow_0001436703_magic_methods_python_repr.txt |
Q:
Django: One-to-Many and Many-to-One Serializers and Queries
I have a relationship similar to this:
One city has one-to-many buildings; one building has zero-to-many devices.
The user must be able to request a city by its PK, receiving in response the city, the buildings in the city, and the devices in those buildings.
I know that foreign keys are necessary in creating the models, like this:
class City(models.Model):
#Columns here
class Building(models.Model):
cityId = models.ForeignKey(City, on_delete=models.CASCADE)
#Columns here
class Device(models.Model):
buildingId = models.ForeignKey(Building, on_delete=models.CASCADE)
#Columns here
What I'm having trouble with is how to write the serializers and the queries. As of now, my serializers only include fields corresponding to that table's columns:
class CitySerializer(serializers.ModelSerializer):
class Meta:
model = City
fields = ['id', ...]
class BuildingSerializer(serializers.ModelSerializer):
class Meta:
model = Building
fields = ['id', 'cityId', ...]
class DeviceSerializer(serializers.ModelSerializer):
class Meta:
model = Device
fields = ['id', 'buildingId', ...]
However, when I have to respond to a GET request for the city, I only know how to use nested for loops to get the building and device data after finding the city by the given ID. I presume there's a better way, but I'm having trouble finding clear answers online.
A:
Use the related_name keyword in your models (you can learn more about it in the Django documentation). Here is how it works:
Define your models and add related_name to the foreign keys:
class City(models.Model):
##fields
class Building(models.Model):
cityId = models.ForeignKey(City, on_delete=models.CASCADE,related_name='abc')
class Device(models.Model):
buildingId = models.ForeignKey(Building, on_delete=models.CASCADE, related_name='xyz')
Then define your serializers in reverse order and expose each related_name as a nested serializer field:
class DeviceSerializer(serializers.ModelSerializer):
class Meta:
model = Device
fields = ['id', 'buildingId', ...]
class BuildingSerializer(serializers.ModelSerializer):
xyz = DeviceSerializer(many=True, read_only=True)
class Meta:
model = Building
        fields = ['id', 'cityId', 'xyz', ...]
class CitySerializer(serializers.ModelSerializer):
abc = BuildingSerializer(many=True, read_only=True)
class Meta:
model = City
        fields = ['id', 'abc', ...]
Now when you try to get a city, you will get all the buildings related to that city and also all the devices related to each building.
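To cover the "queries" part of the question as well: in the view you can let the ORM batch the nested lookups instead of writing nested for loops. A hedged sketch (the view name and import paths are assumptions; the related_name values 'abc' and 'xyz' come from the models above):
from rest_framework import generics

from .models import City                 # assumed module layout
from .serializers import CitySerializer  # assumed module layout

class CityDetailView(generics.RetrieveAPIView):
    serializer_class = CitySerializer
    # prefetch_related runs one extra query per related table (two in total here)
    # instead of one query per building and per device.
    queryset = City.objects.prefetch_related('abc', 'abc__xyz')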
There is another approach, SerializerMethodField, which is of great use in many cases; you can use either one depending on your needs.
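For reference, the SerializerMethodField variant could look roughly like this (a sketch; the 'buildings' field name is invented and it reuses related_name='abc' from above):
class CitySerializer(serializers.ModelSerializer):
    buildings = serializers.SerializerMethodField()

    class Meta:
        model = City
        fields = ['id', 'buildings']

    def get_buildings(self, obj):
        # obj.abc is the reverse accessor created by related_name='abc'
        return BuildingSerializer(obj.abc.all(), many=True).data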
Hope this helps..
| Django: One-to-Many and Many-to-One Serializers and Queries | I have a relationship similar to this:
One city has one-to-many buildings; one building has zero-to-many devices.
The user must be able to request a city by its PK, receiving in response the city, the buildings in the city, and the devices in those buildings.
I know that foreign keys are necessary in creating the models, like this:
class City(models.Model):
#Columns here
class Building(models.Model):
cityId = models.ForeignKey(City, on_delete=models.CASCADE)
#Columns here
class Device(models.Model):
buildingId = models.ForeignKey(Building, on_delete=models.CASCADE)
#Columns here
What I'm having trouble with is how to write the serializers and the queries. As of now, my serializers only include fields corresponding to that table's columns:
class CitySerializer(serializers.ModelSerializer):
class Meta:
model = City
fields = ['id', ...]
class BuildingSerializer(serializers.ModelSerializer):
class Meta:
model = Building
fields = ['id', 'cityId', ...]
class DeviceSerializer(serializers.ModelSerializer):
class Meta:
model = Device
fields = ['id', 'buildingId', ...]
However, when I have to respond to a GET request for the city, I only know how to use nested for loops to get the building and device data after finding the city by the given ID. I presume there's a better way, but I'm having trouble finding clear answers online.
| [
"Use related_name keyword in your models, you can learn more about it -- here is how it works--\n\nDefine you models and add related_name to the foreign keys as -\nclass City(models.Model):\n ##fields\n\n class Building(models.Model):\n cityId = models.ForeignKey(City, on_delete=models.CASCADE,related_name='abc')\n\n class Device(models.Model):\n buildingId = models.ForeignKey(Building, on_delete=models.CASCADE, related_name='xyz')\n\n\nThen define your serializers in reverse order and add the related_name of models in your serializers field--\n class DeviceSerializer(serializers.ModelSerializer):\n class Meta:\n model = Device\n fields = ['id', 'buildingId', ...]\n\n class BuildingSerializer(serializers.ModelSerializer):\n xyz = DeviceSerializer(many=True, read_only=True)\n class Meta:\n model = Building\n fields = ['id', 'cityId', xyz ...]\n\n class CitySerializer(serializers.ModelSerializer):\n abc = BuildingSerializer(many=True, read_only=True)\n class Meta:\n model = City\n fields = ['id', 'abc' ...]\n\n\n\nNow when you will try to get the city, you will get all the building rel;ated to that city and also all the Devices related to each building.\nThere is one another method called as serializermethodfield, this is of great use in many areas, you can use any one depending upon your choice.\nHope this helps..\n"
] | [
1
] | [] | [] | [
"django",
"django_models",
"django_rest_framework",
"django_views",
"rest"
] | stackoverflow_0074670026_django_django_models_django_rest_framework_django_views_rest.txt |
Q:
How can I make lazy registration in Flutter?
I'm trying to implement lazy registration in Flutter. I want the user to have access to the home page and to some other pages even when they aren't logged in. I tried this tutorial: https://www.kodeco.com/28987851-flutter-navigator-2-0-using-go_router#toc-anchor-009 but there the login/registration screen acts as a guard page, and the other pages can't be reached without logging in. I just want the bottom navigation bar to show the profile page when the user is logged in and the login page when they are not, while the other pages remain accessible in both cases.
I tried this:
import 'package:flutter/cupertino.dart';
import 'package:photo_app/bottom_navigation_pages/basket_page.dart';
import 'package:photo_app/bottom_navigation_pages/drafts_page.dart';
import 'package:photo_app/bottom_navigation_pages/favorites_page.dart';
import 'package:photo_app/bottom_navigation_pages/login_page.dart';
import 'package:photo_app/bottom_navigation_pages/profile_page.dart';
import 'package:photo_app/ui/home_body.dart';
import '../login_state/login_state.dart';
//enum appPages { home, favorites, drafts, basket, profile }
class BottomNavigationBarPages extends ChangeNotifier {
late LoginState loginState;
int _selectedTab = 0;
int get selectedTab => _selectedTab;
static var pages = [
const HomeBody(),
const FavoritesPage(),
const DraftsPage(),
const BasketPage(),
loginState.loggedIn ? const ProfilePage() : const LoginPage(),
];
void goToTab(index) {
_selectedTab = index;
notifyListeners();
}
}
but there is error:
The instance member 'loginState' can't be accessed in an initializer. (Documentation) Try replacing the reference to the instance member with a different expression
Here is my routes:
import 'package:flutter/foundation.dart';
import 'package:go_router/go_router.dart';
import 'package:photo_app/bottom_navigation_pages/registration_pages/email_sign_up.dart';
import 'package:photo_app/login_state/login_state.dart';
import 'package:photo_app/ui/home.dart';
import '../bottom_navigation_pages/bottom_navigation_class_pages.dart';
import '../bottom_navigation_pages/profile_page.dart';
import '../constants.dart';
import '../ui/error_screen.dart';
import '../ui/magnets_list.dart';
class MyRouter {
final LoginState loginState;
MyRouter(this.loginState);
//AppStateManager appStateManager = AppStateManager();
int selectedTab = BottomNavigationBarPages().selectedTab;
late final router = GoRouter(
refreshListenable: loginState,
routes: [
GoRoute(
name: rootRouteName,
path: '/',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'homeBody'}),
),
GoRoute(
name: homeRouteName,
path: '/:screen(homeBody|favorites|drafts|basket|profile)',
// name: 'bottomNavBar',
builder: (context, state) {
final String screen = state.params['screen']!;
return Home(screen: screen);
},
routes: [
GoRoute(
name: subMagnetsListRouteName,
path: 'magnets',
builder: (context, state) => const MagnetList(),
),
GoRoute(
name: subSignUpPage,
path: 'sign-up',
builder: (context, state) => const SignUpPage(),
),
],
),
GoRoute(
path: '/homeBody',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'homeBody'}),
),
GoRoute(
path: '/favorites',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'favorites'}),
),
GoRoute(
path: '/drafts',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'drafts'}),
),
GoRoute(
path: '/basket',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'basket'}),
),
GoRoute(
path: '/profile',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'profile'}),
),
GoRoute(
name: magnetsListRouteName,
path: '/magnets',
redirect: (state) => state.namedLocation(subMagnetsListRouteName,
params: {'screen': 'homeBody'}),
),
GoRoute(
name: signUpPage,
path: '/sign-up',
redirect: (state) =>
state.namedLocation(subSignUpPage, params: {'screen': 'profile'}),
),
GoRoute(
name: profilePage,
path: '/profile-page',
redirect: (state) =>
state.namedLocation(profilePage, params: {'screen': 'profile'}),
),
GoRoute(
name: loginPage,
path: '/login-page',
redirect: (state) =>
state.namedLocation(loginPage, params: {'screen': 'profile'}),
),
],
errorBuilder: (context, state) => ErrorScreen(state.error),
/* redirect: (state) {
final login = state.namedLocation(loginPage);
final profile = state.namedLocation(profilePage);
final loggedIn = loginState.loggedIn;
if (!loggedIn) {
if (kDebugMode) {
print(loggedIn);
}
return login;
} else {
if (kDebugMode) {
print(loggedIn);
}
return null;
}
},*/
);
}
And here is my login state:
import 'package:flutter/cupertino.dart';
import 'package:shared_preferences/shared_preferences.dart';
import '../constants.dart';
class LoginState extends ChangeNotifier {
final SharedPreferences prefs;
bool _loggedIn = false;
LoginState(this.prefs) {
loggedIn = prefs.getBool(loggedInKey) ?? false;
}
bool get loggedIn => _loggedIn;
set loggedIn(bool value) {
_loggedIn = value;
prefs.setBool(loggedInKey, value);
notifyListeners();
}
void checkLoggedIn() {
loggedIn = prefs.getBool(loggedInKey) ?? false;
}
}
So how can I do it?
Thanks in advance
A:
It looks like you're trying to access an instance member loginState in a static context, which is causing the error you're seeing. In the BottomNavigationBarPages class, you are trying to access loginState in the pages static variable. However, loginState is an instance member of the class, so it can't be accessed in a static context.
One way to fix this error is to move the pages variable inside the BottomNavigationBarPages constructor and initialize it there. This way, you can use the loginState instance variable to initialize the pages list. Here is an example of how you can do that:
import 'package:flutter/cupertino.dart';
import 'package:photo_app/bottom_navigation_pages/basket_page.dart';
import 'package:photo_app/bottom_navigation_pages/drafts_page.dart';
import 'package:photo_app/bottom_navigation_pages/favorites_page.dart';
import 'package:photo_app/bottom_navigation_pages/login_page.dart';
import 'package:photo_app/bottom_navigation_pages/profile_page.dart';
import 'package:photo_app/ui/home_body.dart';
import '../login_state/login_state.dart';
//enum appPages { home, favorites, drafts, basket, profile }
class BottomNavigationBarPages extends ChangeNotifier {
late LoginState loginState;
int _selectedTab = 0;
int get selectedTab => _selectedTab;
// Move the pages variable inside the constructor and initialize it there
BottomNavigationBarPages(this.loginState) {
pages = [
const HomeBody(),
const FavoritesPage(),
const DraftsPage(),
const BasketPage(),
loginState.loggedIn ? const ProfilePage() : const LoginPage(),
];
}
var pages;
void goToTab(index) {
_selectedTab = index;
notifyListeners();
}
}
In this code, the pages list is initialized in the constructor of the BottomNavigationBarPages class. Since the loginState variable is passed as an argument to the constructor, it is available inside the constructor and can be used to initialize the pages list.
| How can I make lazy registration in Flutter? | I'm trying to make lazy registration in Flutter. I want the user to have access on home page and on some pages even he/she isn't logged in. I tried this tutorial: https://www.kodeco.com/28987851-flutter-navigator-2-0-using-go_router#toc-anchor-009 but here login/registration is guard page and don't have access to other pages. I just want that if the user is logged in, in bottom navigation bar return profile page, if not, return login page but other pages to be accessible in both case.
I tried this:
import 'package:flutter/cupertino.dart';
import 'package:photo_app/bottom_navigation_pages/basket_page.dart';
import 'package:photo_app/bottom_navigation_pages/drafts_page.dart';
import 'package:photo_app/bottom_navigation_pages/favorites_page.dart';
import 'package:photo_app/bottom_navigation_pages/login_page.dart';
import 'package:photo_app/bottom_navigation_pages/profile_page.dart';
import 'package:photo_app/ui/home_body.dart';
import '../login_state/login_state.dart';
//enum appPages { home, favorites, drafts, basket, profile }
class BottomNavigationBarPages extends ChangeNotifier {
late LoginState loginState;
int _selectedTab = 0;
int get selectedTab => _selectedTab;
static var pages = [
const HomeBody(),
const FavoritesPage(),
const DraftsPage(),
const BasketPage(),
loginState.loggedIn ? const ProfilePage() : const LoginPage(),
];
void goToTab(index) {
_selectedTab = index;
notifyListeners();
}
}
but there is error:
The instance member 'loginState' can't be accessed in an initializer. (Documentation) Try replacing the reference to the instance member with a different expression
Here is my routes:
import 'package:flutter/foundation.dart';
import 'package:go_router/go_router.dart';
import 'package:photo_app/bottom_navigation_pages/registration_pages/email_sign_up.dart';
import 'package:photo_app/login_state/login_state.dart';
import 'package:photo_app/ui/home.dart';
import '../bottom_navigation_pages/bottom_navigation_class_pages.dart';
import '../bottom_navigation_pages/profile_page.dart';
import '../constants.dart';
import '../ui/error_screen.dart';
import '../ui/magnets_list.dart';
class MyRouter {
final LoginState loginState;
MyRouter(this.loginState);
//AppStateManager appStateManager = AppStateManager();
int selectedTab = BottomNavigationBarPages().selectedTab;
late final router = GoRouter(
refreshListenable: loginState,
routes: [
GoRoute(
name: rootRouteName,
path: '/',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'homeBody'}),
),
GoRoute(
name: homeRouteName,
path: '/:screen(homeBody|favorites|drafts|basket|profile)',
// name: 'bottomNavBar',
builder: (context, state) {
final String screen = state.params['screen']!;
return Home(screen: screen);
},
routes: [
GoRoute(
name: subMagnetsListRouteName,
path: 'magnets',
builder: (context, state) => const MagnetList(),
),
GoRoute(
name: subSignUpPage,
path: 'sign-up',
builder: (context, state) => const SignUpPage(),
),
],
),
GoRoute(
path: '/homeBody',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'homeBody'}),
),
GoRoute(
path: '/favorites',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'favorites'}),
),
GoRoute(
path: '/drafts',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'drafts'}),
),
GoRoute(
path: '/basket',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'basket'}),
),
GoRoute(
path: '/profile',
redirect: (state) =>
state.namedLocation(homeRouteName, params: {'screen': 'profile'}),
),
GoRoute(
name: magnetsListRouteName,
path: '/magnets',
redirect: (state) => state.namedLocation(subMagnetsListRouteName,
params: {'screen': 'homeBody'}),
),
GoRoute(
name: signUpPage,
path: '/sign-up',
redirect: (state) =>
state.namedLocation(subSignUpPage, params: {'screen': 'profile'}),
),
GoRoute(
name: profilePage,
path: '/profile-page',
redirect: (state) =>
state.namedLocation(profilePage, params: {'screen': 'profile'}),
),
GoRoute(
name: loginPage,
path: '/login-page',
redirect: (state) =>
state.namedLocation(loginPage, params: {'screen': 'profile'}),
),
],
errorBuilder: (context, state) => ErrorScreen(state.error),
/* redirect: (state) {
final login = state.namedLocation(loginPage);
final profile = state.namedLocation(profilePage);
final loggedIn = loginState.loggedIn;
if (!loggedIn) {
if (kDebugMode) {
print(loggedIn);
}
return login;
} else {
if (kDebugMode) {
print(loggedIn);
}
return null;
}
},*/
);
}
And here is my login state:
import 'package:flutter/cupertino.dart';
import 'package:shared_preferences/shared_preferences.dart';
import '../constants.dart';
class LoginState extends ChangeNotifier {
final SharedPreferences prefs;
bool _loggedIn = false;
LoginState(this.prefs) {
loggedIn = prefs.getBool(loggedInKey) ?? false;
}
bool get loggedIn => _loggedIn;
set loggedIn(bool value) {
_loggedIn = value;
prefs.setBool(loggedInKey, value);
notifyListeners();
}
void checkLoggedIn() {
loggedIn = prefs.getBool(loggedInKey) ?? false;
}
}
So how can I do it?
Thanks in advance
| [
"It looks like you're trying to access an instance member loginState in a static context, which is causing the error you're seeing. In the BottomNavigationBarPages class, you are trying to access loginState in the pages static variable. However, loginState is an instance member of the class, so it can't be accessed in a static context.\nOne way to fix this error is to move the pages variable inside the BottomNavigationBarPages constructor and initialize it there. This way, you can use the loginState instance variable to initialize the pages list. Here is an example of how you can do that:\nimport 'package:flutter/cupertino.dart';\nimport 'package:photo_app/bottom_navigation_pages/basket_page.dart';\nimport 'package:photo_app/bottom_navigation_pages/drafts_page.dart';\nimport 'package:photo_app/bottom_navigation_pages/favorites_page.dart';\nimport 'package:photo_app/bottom_navigation_pages/login_page.dart';\nimport 'package:photo_app/bottom_navigation_pages/profile_page.dart';\nimport 'package:photo_app/ui/home_body.dart';\n\nimport '../login_state/login_state.dart';\n\n//enum appPages { home, favorites, drafts, basket, profile }\n\nclass BottomNavigationBarPages extends ChangeNotifier {\n late LoginState loginState;\n int _selectedTab = 0;\n int get selectedTab => _selectedTab;\n\n // Move the pages variable inside the constructor and initialize it there\n BottomNavigationBarPages(this.loginState) {\n pages = [\n const HomeBody(),\n const FavoritesPage(),\n const DraftsPage(),\n const BasketPage(),\n loginState.loggedIn ? const ProfilePage() : const LoginPage(),\n ];\n }\n\n var pages;\n\n void goToTab(index) {\n _selectedTab = index;\n notifyListeners();\n }\n}\n\nIn this code, the pages list is initialized in the constructor of the BottomNavigationBarPages class. Since the loginState variable is passed as an argument to the constructor, it is available inside the constructor and can be used to initialize the pages list.\n"
] | [
1
] | [] | [] | [
"dart",
"flutter",
"flutter_go_router"
] | stackoverflow_0074671065_dart_flutter_flutter_go_router.txt |
Q:
transduce - adds numbers
I have a calculation someting like the following:
;; for sake of simplicity we use round numbers
(def data [{:a 1} {:a 10} {:a 100}])
(reduce - 0.0 (map :a data))
And it evaluates to -111.0. I want to do the transformation with a transducer to speed it up a bit by preventing unnecessary allocations:
(transduce (map :a) - 0.0 data)
However, the signum of the result changed to positive! Apparently it does not matter if I use + or - as the reducer in the expression, as the form will evaluate to +111.0 in both cases.
This is surprising to me, why did the introduction of transduce change the semantics, what am I missing here?
(the strange behaviour happens with * and / too!)
A:
The problem has nothing to do with the transformation itself, and everything to do with the completion step: transduce calls the single-argument (completing) arity of the reducing function on the final accumulated value, while reduce does not.
The expression (reduce - 0.0 (map :a data)) is equivalent to:
(- (- (- 0.0 1) 10) 100)
which evaluates to -111.0 as observed.
With transduce, the same reduction happens, but the reducing function is then called one more time, with the result as its only argument, to complete the reduction. Since (- x) negates x, the expression (transduce (map :a) - 0.0 data) is equivalent to:
(- (- (- (- 0.0 1) 10) 100))
which evaluates to +111.0.
To get the same result using transduce as you did using reduce, wrap the reducing function with completing, whose completion step defaults to identity:
(transduce (map :a) (completing -) 0.0 data)
This will give the same result as the original reduce expression.
A:
So apparently transduce calls the reducer function on the result of the reduction, so - just flips the sign when the reduction is done. We need to wrap the reducer function with completing to hide this call:
(transduce (map :a) (completing -) 0.0 data)
Although, at this point it is shorter to just write:
(- (transduce (map :a) + 0.0 data))
| transduce - adds numbers | I have a calculation someting like the following:
;; for sake of simplicity we use round numbers
(def data [{:a 1} {:a 10} {:a 100}])
(reduce - 0.0 (map :a data))
And it evaluates to -111.0. I want to do the transformation with a transducer to speed it up a bit by preventing unnecessary allocations:
(transduce (map :a) - 0.0 data)
However, the signum of the result changed to positive! Apparently it does not matter if I use + or - as the reducer in the expression, as the form will evaluate to +111.0 in both cases.
This is surprising to me, why did the introduction of transduce change the semantics, what am I missing here?
(the strange behaviour happens with * and / too!)
| [
"The problem has nothing to do with transduce itself, and everything to do with the fact that reduce and transduce use different default identities for the given reducing function.\nThe default identity for reduce is the first element in the input collection ({:a 1}).\nThis means that the expression (reduce - 0.0 (map :a data)) is equivalent to:\n(- (- (- 0.0 1) 10) 100)\n\nwhich evaluates to -111.0 as observed.\nOn the other hand, the default identity for transduce is the identity function for the given reducing function, which in this case is -.\nThis means that the expression (transduce (map :a) - 0.0 data) is equivalent to:\n(- (- (- (- 0.0 1) 10) 100))\n\nwhich evaluates to +111.0.\nTo get the same result using transduce as you did using reduce, use the completing transducer and provide the first element of the input collection as the initial value:\n(transduce (completing -) 0.0 (map :a data))\n\nThis will give the same result as the original reduce expression.\n",
"So apparently transduce calls the reducer function on the result of the reduction, so - just flips the sign when the reduction is done. We need to wrap the reducer function with completing to hide this call:\n(transduce (completing -) 0.0 (map :a data))\n\nAlthough, at this point it is shorter to just write:\n(- (transduce + 0.0 (map :a data)))\n\n"
] | [
1,
0
] | [] | [] | [
"clojure",
"clojure_core",
"transducer"
] | stackoverflow_0074670039_clojure_clojure_core_transducer.txt |
Q:
How can I clean up empty fields when converting CSV to JSON with Miller?
I have several CSV files of item data for a game I'm messing around with that I need to convert to JSON for consumption. The data can be quite irregular with several empty fields per record, which makes for sort of ugly JSON output.
Example with dummy values:
Id,Name,Value,Type,Properties/1,Properties/2,Properties/3,Properties/4
01:Foo:13,Foo,13,ACME,CanExplode,IsRocket,,
02:Bar:42,Bar,42,,IsRocket,,,
03:Baz:37,Baz,37,BlackMesa,CanExplode,IsAlive,IsHungry,
Converted output:
[
{
"Id": "01:Foo:13",
"Name": "Foo",
"Value": 13,
"Type": "ACME",
"Properties": ["CanExplode", "IsRocket", ""]
},
{
"Id": "02:Bar:42",
"Name": "Bar",
"Value": 42,
"Type": "",
"Properties": ["IsRocket", "", ""]
},
{
"Id": "03:Baz:37",
"Name": "Baz",
"Value": 37,
"Type": "BlackMesa",
"Properties": ["CanExplode", "IsAlive", "IsHungry"]
}
]
So far I've been quite successful with using Miller. I've managed to remove completely empty columns from the CSV as well as aggregate the Properties/X columns into a single array.
But now I'd like to do two more things to improve the output format to make consuming the JSON easier:
remove empty strings "" from the Properties array
replace the other empty strings "" (e.g. Type of the second record) with null
Desired output:
[
{
"Id": "01:Foo:13",
"Name": "Foo",
"Value": 13,
"Type": "ACME",
"Properties": ["CanExplode", "IsRocket"]
},
{
"Id": "02:Bar:42",
"Name": "Bar",
"Value": 42,
"Type": null,
"Properties": ["IsRocket"]
},
{
"Id": "03:Baz:37",
"Name": "Baz",
"Value": 37,
"Type": "BlackMesa",
"Properties": ["CanExplode", "IsAlive", "IsHungry"]
}
]
Is there a way to achieve that with Miller?
My current commands are:
mlr -I --csv remove-empty-columns file.csv to clean up the columns
mlr --icsv --ojson --jflatsep '/' --jlistwrap cat file.csv > file.json for the conversion
A:
It's probably not the way you want to do it. I also use jq.
Running
mlr --c2j --jflatsep '/' --jlistwrap remove-empty-columns then cat input.csv | \
jq '.[].Properties|=map(select(length > 0))' | \
jq '.[].Type|=(if . == "" then null else . end)'
you will have
[
{
"Id": "01:Foo:13",
"Name": "Foo",
"Value": 13,
"Type": "ACME",
"Properties": [
"CanExplode",
"IsRocket"
]
},
{
"Id": "02:Bar:42",
"Name": "Bar",
"Value": 42,
"Type": null,
"Properties": [
"IsRocket"
]
},
{
"Id": "03:Baz:37",
"Name": "Baz",
"Value": 37,
"Type": "BlackMesa",
"Properties": [
"CanExplode",
"IsAlive",
"IsHungry"
]
}
]
A:
Using Miller, you can "filter out" the empty fields from each record with:
mlr --c2j --jflatsep '/' --jlistwrap put '
$* = select($*, func(k,v) {return v != ""})
' file.csv
remark: actually, we're building a new record containing the non-empty fields instead of deleting the empty fields from the record; the final result is equivalent though:
[
{
"Id": "01:Foo:13",
"Name": "Foo",
"Value": 13,
"Type": "ACME",
"Properties": ["CanExplode", "IsRocket"]
},
{
"Id": "02:Bar:42",
"Name": "Bar",
"Value": 42,
"Properties": ["IsRocket"]
},
{
"Id": "03:Baz:37",
"Name": "Baz",
"Value": 37,
"Type": "BlackMesa",
"Properties": ["CanExplode", "IsAlive", "IsHungry"]
}
]
| How can I clean up empty fields when converting CSV to JSON with Miller? | I have several CSV files of item data for a game I'm messing around with that I need to convert to JSON for consumption. The data can be quite irregular with several empty fields per record, which makes for sort of ugly JSON output.
Example with dummy values:
Id,Name,Value,Type,Properties/1,Properties/2,Properties/3,Properties/4
01:Foo:13,Foo,13,ACME,CanExplode,IsRocket,,
02:Bar:42,Bar,42,,IsRocket,,,
03:Baz:37,Baz,37,BlackMesa,CanExplode,IsAlive,IsHungry,
Converted output:
[
{
"Id": "01:Foo:13",
"Name": "Foo",
"Value": 13,
"Type": "ACME",
"Properties": ["CanExplode", "IsRocket", ""]
},
{
"Id": "02:Bar:42",
"Name": "Bar",
"Value": 42,
"Type": "",
"Properties": ["IsRocket", "", ""]
},
{
"Id": "03:Baz:37",
"Name": "Baz",
"Value": 37,
"Type": "BlackMesa",
"Properties": ["CanExplode", "IsAlive", "IsHungry"]
}
]
So far I've been quite successful with using Miller. I've managed to remove completely empty columns from the CSV as well as aggregate the Properties/X columns into a single array.
But now I'd like to do two more things to improve the output format to make consuming the JSON easier:
remove empty strings "" from the Properties array
replace the other empty strings "" (e.g. Type of the second record) with null
Desired output:
[
{
"Id": "01:Foo:13",
"Name": "Foo",
"Value": 13,
"Type": "ACME",
"Properties": ["CanExplode", "IsRocket"]
},
{
"Id": "02:Bar:42",
"Name": "Bar",
"Value": 42,
"Type": null,
"Properties": ["IsRocket"]
},
{
"Id": "03:Baz:37",
"Name": "Baz",
"Value": 37,
"Type": "BlackMesa",
"Properties": ["CanExplode", "IsAlive", "IsHungry"]
}
]
Is there a way to achieve that with Miller?
My current commands are:
mlr -I --csv remove-empty-columns file.csv to clean up the columns
mlr --icsv --ojson --jflatsep '/' --jlistwrap cat file.csv > file.json for the conversion
| [
"It's not probably the way you want to do it. I use also jq.\nRunning\nmlr --c2j --jflatsep '/' --jlistwrap remove-empty-columns then cat input.csv | \\\njq '.[].Properties|=map(select(length > 0))' | \\\njq '.[].Type|=(if . == \"\" then null else . end)'\n\nyou will have\n[\n {\n \"Id\": \"01:Foo:13\",\n \"Name\": \"Foo\",\n \"Value\": 13,\n \"Type\": \"ACME\",\n \"Properties\": [\n \"CanExplode\",\n \"IsRocket\"\n ]\n },\n {\n \"Id\": \"02:Bar:42\",\n \"Name\": \"Bar\",\n \"Value\": 42,\n \"Type\": null,\n \"Properties\": [\n \"IsRocket\"\n ]\n },\n {\n \"Id\": \"03:Baz:37\",\n \"Name\": \"Baz\",\n \"Value\": 37,\n \"Type\": \"BlackMesa\",\n \"Properties\": [\n \"CanExplode\",\n \"IsAlive\",\n \"IsHungry\"\n ]\n }\n]\n\n",
"Using Miller, you can \"filter out\" the empty fields from each record with:\nmlr --c2j --jflatsep '/' --jlistwrap put '\n $* = select($*, func(k,v) {return v != \"\"})\n' file.csv\n\nremark: actually, we're building a new record containing the non-empty fields instead of deleting the empty fields from the record; the final result is equivalent though:\n[\n{\n \"Id\": \"01:Foo:13\",\n \"Name\": \"Foo\",\n \"Value\": 13,\n \"Type\": \"ACME\",\n \"Properties\": [\"CanExplode\", \"IsRocket\"]\n},\n{\n \"Id\": \"02:Bar:42\",\n \"Name\": \"Bar\",\n \"Value\": 42,\n \"Properties\": [\"IsRocket\"]\n},\n{\n \"Id\": \"03:Baz:37\",\n \"Name\": \"Baz\",\n \"Value\": 37,\n \"Type\": \"BlackMesa\",\n \"Properties\": [\"CanExplode\", \"IsAlive\", \"IsHungry\"]\n}\n]\n\n"
] | [
1,
1
] | [] | [] | [
"csv",
"json",
"miller"
] | stackoverflow_0073900843_csv_json_miller.txt |
Q:
In Angular module federation, how do I expose/access the application component of my remote module?
I'm using Angular 14 and module federation. I would like to know how to access my remote module's application component from my shell application. In my remote module, I have this webconfig.config.js set up
const { shareAll, withModuleFederationPlugin } = require('@angular-architects/module-federation/webpack');
module.exports = withModuleFederationPlugin({
name: 'productlist',
exposes: {
'./Component': './src/app/app.component.ts',
'./home':'./src/app/my-module/my-module.module.ts'
},
shared: {
...shareAll({ singleton: true, strictVersion: true, requiredVersion: 'auto' }),
},
});
In my src/app/app.component.ts file I have
import { Component } from '@angular/core';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.scss']
})
export class AppComponent {
title = 'myco-ui-productlist';
}
and my src/app/app.component.html file looks like
<router-outlet></router-outlet>
In my shell application, I have this in my webpack.config.js
module.exports = withModuleFederationPlugin({
name: 'shell',
remotes: {
"productlist": "http://localhost:8001/remoteEntry.js",
},
and this set up for my routes
const routes: Routes = [
{
path: '',
component: HomeComponent,
children: [
...
{
path: 'products-list',
loadChildren: () => import('productlist/Component').then(m => m.ProductListsModule)
}
],
}
]
but in my shell application, when I go to the specified route (/products-list), I get this JS error
ERROR Error: Uncaught (in promise): TypeError: type is undefined
getNgModuleDef@http://localhost:8000/node_modules_angular_core_fesm2020_core_mjs.js:1675:23
NgModuleRef@http://localhost:8000/node_modules_angular_core_fesm2020_core_mjs.js:25461:39
create@http://localhost:8000/node_modules_angular_core_fesm2020_core_mjs.js:25505:12
loadChildren/loadRunner<@http://localhost:8000/node_modules_angular_router_fesm2020_router_mjs-_6f000.js:5634:36
map/</<@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:3325:31
OperatorSubscriber/this._next<@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:1777:15
next@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:748:12
doInnerSub/<@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:3493:20
OperatorSubscriber/this._next<@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:1777:15
next@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:748:12
fromPromise/</<@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:1468:20
invoke@http://localhost:8000/polyfills.js:8158:158
onInvoke@http://localhost:8000/node_modules_angular_core_fesm2020_core_mjs.js:30816:25
invoke@http://localhost:8000/polyfills.js:8158:46
run@http://localhost:8000/polyfills.js:7899:35
scheduleResolveOrReject/<@http://localhost:8000/polyfills.js:9243:28
invokeTask@http://localhost:8000/polyfills.js:8191:171
onInvokeTask@http://localhost:8000/node_modules_angular_core_fesm2020_core_mjs.js:30804:25
invokeTask@http://localhost:8000/polyfills.js:8191:54
runTask@http://localhost:8000/polyfills.js:7952:37
drainMicroTaskQueue@http://localhost:8000/polyfills.js:8400:23
promise callback*nativeScheduleMicroTask@http://localhost:8000/polyfills.js:8371:18
scheduleMicroTask@http://localhost:8000/polyfills.js:8382:30
scheduleTask@http://localhost:8000/polyfills.js:8181:28
onScheduleTask@http://localhost:8000/polyfills.js:8079:61
scheduleTask@http://localhost:8000/polyfills.js:8174:43
scheduleTask@http://localhost:8000/polyfills.js:8000:35
scheduleMicroTask@http://localhost:8000/polyfills.js:8025:19
scheduleResolveOrReject@http://localhost:8000/polyfills.js:9231:10
resolvePromise@http://localhost:8000/polyfills.js:9159:34
makeResolver/<@http://localhost:8000/polyfills.js:9065:23
wrapper/<@http://localhost:8000/polyfills.js:9082:25
webpackJsonpCallback@http://localhost:8001/remoteEntry.js:8793:41
@http://localhost:8001/src_app_app_component_ts.js:1:105
and nothing loads. What else do I need to do to expose the remote module's application component and load it successfully?
A:
you can access the application component of your remote module by exposing it in your remote module's webpack configuration file. You can do this by using the exposes property in the configuration object passed to the withModuleFederationPlugin method.
if you want to expose your remote module's AppComponent, you can add an exposes property to the configuration object like this:
const { shareAll, withModuleFederationPlugin } = require('@angular-architects/module-federation/webpack');
module.exports = withModuleFederationPlugin({
name: 'productlist',
exposes: {
'./Component': './src/app/app.component.ts',
...
},
shared: {
...shareAll({ singleton: true, strictVersion: true, requiredVersion: 'auto' }),
},
});
Then, in your shell application, you can import the AppComponent from your remote module and use it in your application:
import { AppComponent } from 'productlist/Component';
And then you can use the AppComponent in your shell application's template like this:
<app-root></app-root>
You can also add the AppComponent to the declarations array of the NgModule that defines your shell application. This will make the component available to use in the templates of other components in your shell application.
| In Angular module federation, how do I expose/access the application component of my remote module? | I'm using Angular 14 and module federation. I would like to know how to access my remote module's application component from my shell application. In my remote module, I have this webconfig.config.js set up
const { shareAll, withModuleFederationPlugin } = require('@angular-architects/module-federation/webpack');
module.exports = withModuleFederationPlugin({
name: 'productlist',
exposes: {
'./Component': './src/app/app.component.ts',
'./home':'./src/app/my-module/my-module.module.ts'
},
shared: {
...shareAll({ singleton: true, strictVersion: true, requiredVersion: 'auto' }),
},
});
In my src/app/app.component.ts file I have
import { Component } from '@angular/core';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.scss']
})
export class AppComponent {
title = 'myco-ui-productlist';
}
and my src/app/app.component.html file looks like
<router-outlet></router-outlet>
In my shell application, I have this in my webpack.config.js
module.exports = withModuleFederationPlugin({
name: 'shell',
remotes: {
"productlist": "http://localhost:8001/remoteEntry.js",
},
and this set up for my routes
const routes: Routes = [
{
path: '',
component: HomeComponent,
children: [
...
{
path: 'products-list',
loadChildren: () => import('productlist/Component').then(m => m.ProductListsModule)
}
],
}
]
but in my shell application, when I go to the specified route (/products-list), I get this JS error
ERROR Error: Uncaught (in promise): TypeError: type is undefined
getNgModuleDef@http://localhost:8000/node_modules_angular_core_fesm2020_core_mjs.js:1675:23
NgModuleRef@http://localhost:8000/node_modules_angular_core_fesm2020_core_mjs.js:25461:39
create@http://localhost:8000/node_modules_angular_core_fesm2020_core_mjs.js:25505:12
loadChildren/loadRunner<@http://localhost:8000/node_modules_angular_router_fesm2020_router_mjs-_6f000.js:5634:36
map/</<@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:3325:31
OperatorSubscriber/this._next<@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:1777:15
next@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:748:12
doInnerSub/<@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:3493:20
OperatorSubscriber/this._next<@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:1777:15
next@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:748:12
fromPromise/</<@http://localhost:8000/node_modules_rxjs_dist_esm_operators_index_js.js:1468:20
invoke@http://localhost:8000/polyfills.js:8158:158
onInvoke@http://localhost:8000/node_modules_angular_core_fesm2020_core_mjs.js:30816:25
invoke@http://localhost:8000/polyfills.js:8158:46
run@http://localhost:8000/polyfills.js:7899:35
scheduleResolveOrReject/<@http://localhost:8000/polyfills.js:9243:28
invokeTask@http://localhost:8000/polyfills.js:8191:171
onInvokeTask@http://localhost:8000/node_modules_angular_core_fesm2020_core_mjs.js:30804:25
invokeTask@http://localhost:8000/polyfills.js:8191:54
runTask@http://localhost:8000/polyfills.js:7952:37
drainMicroTaskQueue@http://localhost:8000/polyfills.js:8400:23
promise callback*nativeScheduleMicroTask@http://localhost:8000/polyfills.js:8371:18
scheduleMicroTask@http://localhost:8000/polyfills.js:8382:30
scheduleTask@http://localhost:8000/polyfills.js:8181:28
onScheduleTask@http://localhost:8000/polyfills.js:8079:61
scheduleTask@http://localhost:8000/polyfills.js:8174:43
scheduleTask@http://localhost:8000/polyfills.js:8000:35
scheduleMicroTask@http://localhost:8000/polyfills.js:8025:19
scheduleResolveOrReject@http://localhost:8000/polyfills.js:9231:10
resolvePromise@http://localhost:8000/polyfills.js:9159:34
makeResolver/<@http://localhost:8000/polyfills.js:9065:23
wrapper/<@http://localhost:8000/polyfills.js:9082:25
webpackJsonpCallback@http://localhost:8001/remoteEntry.js:8793:41
@http://localhost:8001/src_app_app_component_ts.js:1:105
and nothing loads. What else do I need to do to expose the remote module's application component and load it successfully?
| [
"you can access the application component of your remote module by exposing it in your remote module's webpack configuration file. You can do this by using the exposes property in the configuration object passed to the withModuleFederationPlugin method.\nif you want to expose your remote module's AppComponent, you can add an exposes property to the configuration object like this:\nconst { shareAll, withModuleFederationPlugin } = require('@angular-architects/module-federation/webpack');\n\nmodule.exports = withModuleFederationPlugin({\n\n name: 'productlist',\n\n exposes: {\n './Component': './src/app/app.component.ts',\n ...\n },\n\n shared: {\n ...shareAll({ singleton: true, strictVersion: true, requiredVersion: 'auto' }),\n },\n\n});\n\nThen, in your shell application, you can import the AppComponent from your remote module and use it in your application. :\nimport { AppComponent } from 'productlist/Component';\n\nAnd then you can use the AppComponent in your shell application's template like this:\n<app-root></app-root>\n\nYou can also add the AppComponent to the declarations array of the NgModule that defines your shell application. This will make the component available to use in the templates of other components in your shell application.\n"
] | [
0
] | [] | [] | [
"angular",
"angular14",
"angular_module_federation",
"remote_access",
"webpack_module_federation"
] | stackoverflow_0074647338_angular_angular14_angular_module_federation_remote_access_webpack_module_federation.txt |
Q:
Scroll inside of a fixed sidebar
I have a fixed sidebar on the left of my site with content that has too much content to display on the screen. How can I make that content scrollable while still allowing the right side to be scrollable?
I think a simple overflow-y: scroll; would suffice. I need to have a max-height on the sidebar, but setting that max-height to 100% does nothing. I'm sure this is a simple code pattern, but my CSS skills have deserted me today.
A simple example here:
http://jsfiddle.net/tvysB/1/
A:
Set the top and bottom to 0, so that the sidebar is exactly the same height as the viewport:
#leftCol {
position: fixed;
width: 150px;
overflow-y: scroll;
top: 0;
bottom: 0;
}
Here's your fiddle: http://jsfiddle.net/tvysB/2/
A:
I had this same issue and fixed it using:
.WhateverYourNavIs {
max-height: calc(100vh - 9rem);
overflow-y: auto;
}
This sets the max height for your nav in a way that is responsive to the height of the users browser and then gives it a scroll when it needs it.
A:
If you are using position: sticky and want to add a scroll inside it, @Marc's answer works well; building on it, I have added scrollbar-hiding functionality.
The solution with a hidden scrollbar goes like this:
.ContainerElem{
-ms-overflow-style: none; /* Internet Explorer 10+ */
scrollbar-width: none; /* Firefox */
max-height: calc(100vh - 9rem);
overflow-y: auto;
}
.ContainerElem::-webkit-scrollbar {
display: none; /* Safari and Chrome */
}
| Scroll inside of a fixed sidebar | I have a fixed sidebar on the left of my site with content that has too much content to display on the screen. How can I make that content scrollable while still allowing the right side to be scrollable?
I think a simple overflow-y: scroll; would suffice. I need to have a max-height on the sidebar, but setting that max-height to 100% does nothing. I'm sure this is a simple code pattern, but my CSS skills have deserted me today.
A simple example here:
http://jsfiddle.net/tvysB/1/
| [
"Set the top and bottom to 0, so that the sidebar is exactly the same height as the viewport:\n#leftCol {\n position: fixed;\n width: 150px;\n overflow-y: scroll;\n top: 0;\n bottom: 0;\n}\n\nHere's your fiddle: http://jsfiddle.net/tvysB/2/\n",
"I had this same issue and fixed it using:\n.WhateverYourNavIs {\n max-height: calc(100vh - 9rem);\n overflow-y: auto;\n }\n\nThis sets the max height for your nav in a way that is responsive to the height of the users browser and then gives it a scroll when it needs it.\n",
"If you are using position:sticky and want to add a scroll in it, @Marc answer works well adding to it I have added hiding scroll bar functionality\nSolution with hiding scroll goes like this\n.ContainerElem{\n -ms-overflow-style: none; /* Internet Explorer 10+ */\n scrollbar-width: none; /* Firefox */\n max-height: calc(100vh - 9rem);\n overflow-y: auto;\n}\n.ContainerElem::-webkit-scrollbar { \ndisplay: none; /* Safari and Chrome */\n}\n\n"
] | [
137,
27,
0
] | [] | [] | [
"css",
"scroll"
] | stackoverflow_0013337646_css_scroll.txt |
Q:
Convert Variable Name to String?
I would like to convert a python variable name into the string equivalent as shown. Any ideas how?
var = {}
print ??? # Would like to see 'var'
something_else = 3
print ??? # Would print 'something_else'
A:
TL;DR: Not possible. See 'conclusion' at the end.
There is a usage scenario where you might need this. I'm not implying there are no better ways of achieving the same functionality.
This would be useful in order to 'dump' an arbitrary list of dictionaries in case of error, in debug modes and other similar situations.
What would be needed, is the reverse of the eval() function:
get_indentifier_name_missing_function()
which would take an identifier name ('variable','dictionary',etc) as an argument, and return a
string containing the identifier’s name.
Consider the following current state of affairs:
random_function(argument_data)
If one is passing an identifier name ('function','variable','dictionary',etc) argument_data to a random_function() (another identifier name), one actually passes an identifier (e.g.: <argument_data object at 0xb1ce10>) to another identifier (e.g.: <function random_function at 0xafff78>):
<function random_function at 0xafff78>(<argument_data object at 0xb1ce10>)
From my understanding, only the memory address is passed to the function:
<function at 0xafff78>(<object at 0xb1ce10>)
Therefore, one would need to pass a string as an argument to random_function() in order for that function to have the argument's identifier name:
random_function('argument_data')
Inside the random_function()
def random_function(first_argument):
, one would use the already supplied string 'argument_data' to:
serve as an 'identifier name' (to display, log, string split/concat, whatever)
feed the eval() function in order to get a reference to the actual identifier, and therefore, a reference to the real data:
print("Currently working on", first_argument)
some_internal_var = eval(first_argument)
print("here comes the data: " + str(some_internal_var))
Unfortunately, this doesn't work in all cases. It only works if the random_function() can resolve the 'argument_data' string to an actual identifier. I.e. If argument_data identifier name is available in the random_function()'s namespace.
This isn't always the case:
# main1.py
import some_module1
argument_data = 'my data'
some_module1.random_function('argument_data')
# some_module1.py
def random_function(first_argument):
print("Currently working on", first_argument)
some_internal_var = eval(first_argument)
print("here comes the data: " + str(some_internal_var))
######
Expected results would be:
Currently working on: argument_data
here comes the data: my data
Because argument_data identifier name is not available in the random_function()'s namespace, this would yield instead:
Currently working on argument_data
Traceback (most recent call last):
File "~/main1.py", line 6, in <module>
some_module1.random_function('argument_data')
File "~/some_module1.py", line 4, in random_function
some_internal_var = eval(first_argument)
File "<string>", line 1, in <module>
NameError: name 'argument_data' is not defined
Now, consider the hypotetical usage of a get_indentifier_name_missing_function() which would behave as described above.
Here's a dummy Python 3.0 example:
# main2.py
import some_module2
some_dictionary_1 = { 'definition_1':'text_1',
'definition_2':'text_2',
'etc':'etc.' }
some_other_dictionary_2 = { 'key_3':'value_3',
'key_4':'value_4',
'etc':'etc.' }
#
# more such stuff
#
some_other_dictionary_n = { 'random_n':'random_n',
'etc':'etc.' }
for each_one_of_my_dictionaries in ( some_dictionary_1,
some_other_dictionary_2,
...,
some_other_dictionary_n ):
some_module2.some_function(each_one_of_my_dictionaries)
# some_module2.py
def some_function(a_dictionary_object):
for _key, _value in a_dictionary_object.items():
print( get_indentifier_name_missing_function(a_dictionary_object) +
" " +
str(_key) +
" = " +
str(_value) )
######
Expected results would be:
some_dictionary_1 definition_1 = text_1
some_dictionary_1 definition_2 = text_2
some_dictionary_1 etc = etc.
some_other_dictionary_2 key_3 = value_3
some_other_dictionary_2 key_4 = value_4
some_other_dictionary_2 etc = etc.
......
......
......
some_other_dictionary_n random_n = random_n
some_other_dictionary_n etc = etc.
Unfortunately, get_indentifier_name_missing_function() would not see the 'original' identifier names (some_dictionary_1, some_other_dictionary_2, some_other_dictionary_n). It would only see the a_dictionary_object identifier name.
Therefore the real result would rather be:
a_dictionary_object definition_1 = text_1
a_dictionary_object definition_2 = text_2
a_dictionary_object etc = etc.
a_dictionary_object key_3 = value_3
a_dictionary_object key_4 = value_4
a_dictionary_object etc = etc.
......
......
......
a_dictionary_object random_n = random_n
a_dictionary_object etc = etc.
So, the reverse of the eval() function won't be that useful in this case.
Currently, one would need to do this:
# main2.py same as above, except:
for each_one_of_my_dictionaries_names in ( 'some_dictionary_1',
'some_other_dictionary_2',
'...',
'some_other_dictionary_n' ):
some_module2.some_function( { each_one_of_my_dictionaries_names :
eval(each_one_of_my_dictionaries_names) } )
# some_module2.py
def some_function(a_dictionary_name_object_container):
for _dictionary_name, _dictionary_object in a_dictionary_name_object_container.items():
for _key, _value in _dictionary_object.items():
print( str(_dictionary_name) +
" " +
str(_key) +
" = " +
str(_value) )
######
In conclusion:
Python passes only memory addresses as arguments to functions.
Strings representing the name of an identifier, can only be referenced back to the actual identifier by the eval() function if the name identifier is available in the current namespace.
A hypothetical reverse of the eval() function, would not be useful in cases where the identifier name is not 'seen' directly by the calling code. E.g. inside any called function.
Currently one needs to pass to a function:
the string representing the identifier name
the actual identifier (memory address)
This can be achieved by passing both the 'string' and eval('string') to the called function at the same time. I think this is the most 'general' way of solving this egg-chicken problem across arbitrary functions, modules, namespaces, without using corner-case solutions. The only downside is the use of the eval() function which may easily lead to unsecured code. Care must be taken to not feed the eval() function with just about anything, especially unfiltered external-input data.
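As a minimal sketch of that workaround (hypothetical names; the caller supplies both the identifier name as a string and the object it refers to):
# caller side: pass the name as a string together with the object itself
def report(identifier_name, value):
    # 'identifier_name' is only used for display/logging; 'value' is the real data
    print("Currently working on", identifier_name)
    print("here comes the data: " + str(value))

argument_data = 'my data'
report('argument_data', eval('argument_data'))  # or simply: report('argument_data', argument_data)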
A:
Totally possible with the python-varname package (python3):
from varname import nameof
s = 'Hey!'
print (nameof(s))
Output:
s
Install:
pip3 install varname
Or get the package here:
https://github.com/pwwang/python-varname
A:
I searched for this question because I wanted a Python program to print assignment statements for some of the variables in the program. For example, it might print "foo = 3, bar = 21, baz = 432". The print function would need the variable names in string form. I could have provided my code with the strings "foo","bar", and "baz", but that felt like repeating myself. After reading the previous answers, I developed the solution below.
The globals() function behaves like a dict with variable names (in the form of strings) as keys. I wanted to retrieve from globals() the key corresponding to the value of each variable. The method globals().items() returns a list of tuples; in each tuple the first item is the variable name (as a string) and the second is the variable value. My variablename() function searches through that list to find the variable name(s) that corresponds to the value of the variable whose name I need in string form.
The function itertools.ifilter() does the search by testing each tuple in the globals().items() list with the function lambda x: var is x[1]. In that function x is the tuple being tested; x[0] is the variable name (as a string) and x[1] is the value. The lambda function tests whether the value of the tested variable is the same object as the value of the variable passed to variablename().
The itertools.ifilter() function actually returns an iterator which doesn't return any results until it is called properly. To get it called properly, I put it inside a list comprehension [tpl[0] for tpl ... globals().items())]. The list comprehension saves only the variable name tpl[0], ignoring the variable value. The list that is created contains one or more names (as strings) that are bound to the value of the variable passed to variablename().
In the uses of variablename() shown below, the desired string is returned as an element in a list. In many cases, it will be the only item in the list. If another variable name is assigned the same value, however, the list will be longer.
>>> def variablename(var):
... import itertools
... return [tpl[0] for tpl in
... itertools.ifilter(lambda x: var is x[1], globals().items())]
...
>>> var = {}
>>> variablename(var)
['var']
>>> something_else = 3
>>> variablename(something_else)
['something_else']
>>> yet_another = 3
>>> variablename(something_else)
['yet_another', 'something_else']
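Since itertools.ifilter() no longer exists in Python 3, the same lookup can be sketched with a plain list comprehension (or the built-in filter()):
def variablename(var):
    # Python 3 version of the same idea: every global name bound to this exact object
    return [name for name, value in globals().items() if value is var]

var = {}
print(variablename(var))   # ['var']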
A:
as long as it's a variable and not a second class, this here works for me:
def print_var_name(variable):
for name in globals():
if eval(name) == variable:
print name
foo = 123
print_var_name(foo)
>>>foo
this happens for class members:
class xyz:
def __init__(self):
pass
member = xyz()
print_var_name(member)
>>>member
and this for classes (as an example):
abc = xyz
print_var_name(abc)
>>>abc
>>>xyz
So for classes it gives you both the alias AND the original class name
A:
This is not possible.
In Python, there really isn't any such thing as a "variable". What Python really has are "names" which can have objects bound to them. It makes no difference to the object what names, if any, it might be bound to. It might be bound to dozens of different names, or none.
Consider this example:
foo = 1
bar = 1
baz = 1
Now, suppose you have the integer object with value 1, and you want to work backwards and find its name. What would you print? Three different names have that object bound to them, and all are equally valid.
In Python, a name is a way to access an object, so there is no way to work with names directly. There might be some clever way to hack the Python bytecodes or something to get the value of the name, but that is at best a parlor trick.
If you know you want print foo to print "foo", you might as well just execute print "foo" in the first place.
EDIT: I have changed the wording slightly to make this more clear. Also, here is an even better example:
foo = 1
bar = foo
baz = foo
In practice, Python reuses the same object for integers with common values like 0 or 1, so the first example should bind the same object to all three names. But this example is crystal clear: the same object is bound to foo, bar, and baz.
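A short sketch (standard Python, nothing beyond the example above) makes the ambiguity concrete: a reverse lookup in globals() simply returns every name the object happens to be bound to:
foo = object()   # one object ...
bar = foo        # ... bound to three different names
baz = foo
print([name for name, value in globals().items() if value is foo])
# ['foo', 'bar', 'baz'] -- there is no single "the" name to recover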
A:
Technically the information is available to you, but as others have asked, how would you make use of it in a sensible way?
>>> x = 52
>>> globals()
{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__',
'x': 52, '__doc__': None, '__package__': None}
This shows that the variable name is present as a string in the globals() dictionary.
>>> globals().keys()[2]
'x'
In this case it happens to be the third key, but there's no reliable way to know where a given variable name will end up
>>> for k in globals().keys():
... if not k.startswith("_"):
... print k
...
x
>>>
You could filter out system variables like this, but you're still going to get all of your own items. Just running that code above created another variable "k" that changed the position of "x" in the dict.
But maybe this is a useful start for you. If you tell us what you want this capability for, more helpful information could possibly be given.
A:
By using the unpacking operator:
>>> def tostr(**kwargs):
...     return kwargs

>>> var = {}
>>> something_else = 3
>>> tostr(var=var, something_else=something_else)
{'var': {}, 'something_else': 3}
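The same trick can be wrapped in a tiny helper (a sketch; name_of is a hypothetical name) that returns just the name:
def name_of(**kwargs):
    # expects exactly one keyword argument, e.g. name_of(var=var) -> 'var'
    (name, _value), = kwargs.items()
    return name

var = {}
print(name_of(var=var))   # var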
A:
You somehow have to refer to the variable you want to print the name of. So it would look like:
print varname(something_else)
There is no such function, but if there were it would be kind of pointless. You have to type out something_else, so you can as well just type quotes to the left and right of it to print the name as a string:
print "something_else"
A:
What are you trying to achieve? There is absolutely no reason to ever do what you describe, and there is likely a much better solution to the problem you're trying to solve..
The most obvious alternative to what you request is a dictionary. For example:
>>> my_data = {'var': 'something'}
>>> my_data['something_else'] = 'something'
>>> print my_data.keys()
['var', 'something_else']
>>> print my_data['var']
something
Mostly as a.. challenge, I implemented your desired output. Do not use this code, please!
#!/usr/bin/env python2.6
class NewLocals:
"""Please don't ever use this code.."""
def __init__(self, initial_locals):
self.prev_locals = list(initial_locals.keys())
def show_new(self, new_locals):
output = ", ".join(list(set(new_locals) - set(self.prev_locals)))
self.prev_locals = list(new_locals.keys())
return output
# Set up
eww = None
eww = NewLocals(locals())
# "Working" requested code
var = {}
print eww.show_new(locals()) # Outputs: var
something_else = 3
print eww.show_new(locals()) # Outputs: something_else
# Further testing
another_variable = 4
and_a_final_one = 5
print eww.show_new(locals()) # Outputs: another_variable, and_a_final_one
A:
Does Django not do this when generating field names?
http://docs.djangoproject.com/en/dev//topics/db/models/#verbose-field-names
Seems reasonable to me.
A:
I think this is a cool solution and I suppose the best you can get. But do you see any way to handle the ambiguous results your function may return?
As the behaviour of the "is" operator with small integers shows, low integers and strings of the same value get cached by Python, so your variablename function might provide ambiguous results with a high probability.
In my case, I would like to create a decorator that adds a new variable to a class by the variable name I pass it:
def inject(klass, dependency):
klass.__dict__["__"+variablename(dependency)]=dependency
But if your method returns ambiguous results, how can I know the name of the variable I added?
any_var = "myvarcontent"
myvar = "myvarcontent"
@inject(myvar)
class myclasss():
def myclass_method(self):
print self.__myvar #I can not be sure, that this variable will be set...
Maybe if I will also check the local list I could at least remove the "dependency"-Variable from the list, but this will not be a reliable result.
A:
Here is a succinct variation that lets you specify any dictionary (for example locals() or globals()).
The issue with looking a value up in a dictionary is that multiple variables can have the same value. So this code returns a list of possible variable names.
def varname( var, dir=locals()):
return [ key for key, val in dir.items() if id( val) == id( var)]
A:
I don't know if it's right or not, but it worked for me
def varname(variable):
for name in list(globals().keys()):
expression = f'id({name})'
if id(variable) == eval(expression):
return name
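The eval() detour is not needed for that check; comparing against the objects in globals() directly gives the same result, for example:
def varname(variable):
    # same idea without eval(): compare object identities via globals() directly
    for name, value in globals().items():
        if value is variable:
            return name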
A:
It is possible to a limited extent. The answer is similar to the solution by @tamtam.
The given example assumes the following assumptions -
You are searching for a variable by its value
The variable has a distinct value
The value is in the global namespace
Example:
testVar = "unique value"
varNameAsString = [k for k,v in globals().items() if v == "unique value"]
#
# the variable "varNameAsString" will contain all the variable name that matches
# the value "unique value"
# for this example, it will be a list of a single entry "testVar"
#
print(varNameAsString)
Output : ['testVar']
You can extend this example for any other variable/data type
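If the value is not guaranteed to be distinct, matching on object identity instead of equality narrows the result to names bound to that exact object (small ints and interned strings can otherwise produce false matches), for example:
test_var = ["unique value"]
other_var = ["unique value"]   # equal contents, but a different object
varNameAsString = [k for k, v in globals().items() if v is test_var]
print(varNameAsString)         # ['test_var'] only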
A:
I'd like to point out a use case for this that is not an anti-pattern, and there is no better way to do it.
This seems to be a missing feature in python.
There are a number of functions, like patch.object, that take the name of a method or property to be patched or accessed.
Consider this:
patch.object(obj, "method_name", new_reg)
This can potentially start "false succeeding" when you change the name of a method. IE: you can ship a bug, you thought you were testing.... simply because of a bad method name refactor.
Now consider: varname. This could be an efficient, built-in function. But for now it can work by iterating an object or the caller's frame:
Now your call can be:
patch.member(obj, obj.method_name, new_reg)
And the patch function can call:
varname(var, obj=obj)
This would: assert that the var is bound to the obj and return the name of the member. Or if the obj is not specified, use the callers stack frame to derive it, etc.
Could be made an efficient built in at some point, but here's a definition that works. I deliberately didn't support builtins, easy to add tho:
Feel free to stick this in a package called varname.py, and use it in your patch.object calls:
patch.object(obj, varname(obj, obj.method_name), new_reg)
Note: this was written for python 3.
import inspect
def _varname_dict(var, dct):
key_name = None
for key, val in dct.items():
if val is var:
if key_name is not None:
raise NotImplementedError("Duplicate names not supported %s, %s" % (key_name, key))
key_name = key
return key_name
def _varname_obj(var, obj):
key_name = None
for key in dir(obj):
val = getattr(obj, key)
equal = val is var
if equal:
if key_name is not None:
raise NotImplementedError("Duplicate names not supported %s, %s" % (key_name, key))
key_name = key
return key_name
def varname(var, obj=None):
if obj is None:
if hasattr(var, "__self__"):
return var.__name__
caller_frame = inspect.currentframe().f_back
try:
ret = _varname_dict(var, caller_frame.f_locals)
except NameError:
ret = _varname_dict(var, caller_frame.f_globals)
else:
ret = _varname_obj(var, obj)
if ret is None:
raise NameError("Name not found. (Note: builtins not supported)")
return ret
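A small usage sketch for the caller-frame branch (module-level case, hypothetical names):
# assumes the definitions above are available (e.g. pasted in the same module or imported from varname.py)
config = {"retries": 3}
print(varname(config))   # prints: config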
A:
This will work for simple data types (str, int, float, list, etc.)
>>> def my_print(var_str) :
print var_str+':', globals()[var_str]
>>> a = 5
>>> b = ['hello', ',world!']
>>> my_print('a')
a: 5
>>> my_print('b')
b: ['hello', ',world!']
A:
It's not very Pythonesque but I was curious and found this solution. You need to duplicate the globals dictionary since its size will change as soon as you define a new variable.
def var_to_name(var):
# noinspection PyTypeChecker
dict_vars = dict(globals().items())
var_string = None
for name in dict_vars.keys():
if dict_vars[name] is var:
var_string = name
break
return var_string
if __name__ == "__main__":
test = 3
print(f"test = {test}")
print(f"variable name: {var_to_name(test)}")
which returns:
test = 3
variable name: test
A:
To get the variable name of var as a string:
var = 1000
var_name = [k for k,v in locals().items() if v == var][0]
print(var_name) # ---> outputs 'var'
A:
Thanks @restrepo, this was exactly what I needed to create a standard save_df_to_file() function. For this, I made some small changes to your tostr() function. Hope this will help someone else:
def variabletostr(**df):
variablename = list(df.keys())[0]
return variablename
variabletostr(df=0)
A:
The original question is pretty old, but I found an almost solution with Python 3. (I say almost because I think you can get close to a solution but I do not believe there is a solution concrete enough to satisfy the exact request).
First, you might want to consider the following:
objects are a core concept in Python, and they may be assigned a variable, but the variable itself is a bound name (think pointer or reference) not the object itself
var is just a variable name bound to an object and that object could have more than one reference (in your example it does not seem to)
in this case, var appears to be in the global namespace, so you can use the conveniently named globals() builtin
different name references to the same object will all share the same id, which can be checked by running the id() builtin like so: id(var)
This function grabs the global variables and filters out the ones matching the content of your variable.
def get_bound_names(target_variable):
'''Returns a list of bound object names.'''
return [k for k, v in globals().items() if v is target_variable]
The real challenge here is that you are not guaranteed to get back the variable name by itself. It will be a list, but that list will contain the variable name you are looking for. If your target variable (bound to an object) is really the only bound name, you could access it this way:
bound_names = get_bound_names(target_variable)
var_string = bound_names[0]
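As a quick illustration of why a list comes back (assuming get_bound_names() and the variables below live in the same module, since globals() is resolved where the function is defined; the names here are made up):
payload = {'status': 'ok'}
alias = payload

print(get_bound_names(payload))   # ['payload', 'alias'] (both names are bound to the same dict)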
A:
Possible for Python >= 3.8 (with the f'{var=}' string)
Not sure if this could be used in production code, but in Python 3.8 (and up) you can use the f-string debugging specifier: add = at the end of an expression inside an f-string, and it will print both the expression and its value:
my_salary_variable = 5000
print(f'{my_salary_variable = }')
Output:
my_salary_variable = 5000
To uncover this magic here is another example:
param_list = f'{my_salary_variable=}'.split('=')
print(param_list)
Output:
['my_salary_variable', '5000']
Explanation: when you put '=' after your variable inside an f-string, it returns a string containing the variable name, '=' and its value. Split it with .split('=') to get a list of 2 strings: [0] is the variable name, and [1] is its value rendered as a string.
Pick element [0] of the list if you need the variable name only.
my_salary_variable = 5000
param_list = f'{my_salary_variable=}'.split('=')
print(param_list[0])
Output:
my_salary_variable
or, in one line
my_salary_variable = 5000
print(f'{my_salary_variable=}'.split('=')[0])
Output:
my_salary_variable
Works with functions too:
def my_super_calc_foo(number):
return number**3
print(f'{my_super_calc_foo(5) = }')
print(f'{my_super_calc_foo(5)=}'.split('='))
Output:
my_super_calc_foo(5) = 125
['my_super_calc_foo(5)', '125']
Process finished with exit code 0
| Convert Variable Name to String? | I would like to convert a python variable name into the string equivalent as shown. Any ideas how?
var = {}
print ??? # Would like to see 'var'
something_else = 3
print ??? # Would print 'something_else'
| [
"TL;DR: Not possible. See 'conclusion' at the end.\n\nThere is an usage scenario where you might need this. I'm not implying there are not better ways or achieving the same functionality.\nThis would be useful in order to 'dump' an arbitrary list of dictionaries in case of error, in debug modes and other similar situations.\nWhat would be needed, is the reverse of the eval() function:\nget_indentifier_name_missing_function()\n\nwhich would take an identifier name ('variable','dictionary',etc) as an argument, and return a\nstring containing the identifier’s name.\n\nConsider the following current state of affairs:\nrandom_function(argument_data)\n\nIf one is passing an identifier name ('function','variable','dictionary',etc) argument_data to a random_function() (another identifier name), one actually passes an identifier (e.g.: <argument_data object at 0xb1ce10>) to another identifier (e.g.: <function random_function at 0xafff78>):\n<function random_function at 0xafff78>(<argument_data object at 0xb1ce10>)\n\nFrom my understanding, only the memory address is passed to the function:\n<function at 0xafff78>(<object at 0xb1ce10>)\n\nTherefore, one would need to pass a string as an argument to random_function() in order for that function to have the argument's identifier name:\nrandom_function('argument_data')\n\nInside the random_function()\ndef random_function(first_argument):\n\n, one would use the already supplied string 'argument_data' to:\n\nserve as an 'identifier name' (to display, log, string split/concat, whatever)\n\nfeed the eval() function in order to get a reference to the actual identifier, and therefore, a reference to the real data:\nprint(\"Currently working on\", first_argument)\nsome_internal_var = eval(first_argument)\nprint(\"here comes the data: \" + str(some_internal_var))\n\n\n\nUnfortunately, this doesn't work in all cases. It only works if the random_function() can resolve the 'argument_data' string to an actual identifier. I.e. If argument_data identifier name is available in the random_function()'s namespace.\nThis isn't always the case:\n# main1.py\nimport some_module1\n\nargument_data = 'my data'\n\nsome_module1.random_function('argument_data')\n\n\n# some_module1.py\ndef random_function(first_argument):\n print(\"Currently working on\", first_argument)\n some_internal_var = eval(first_argument)\n print(\"here comes the data: \" + str(some_internal_var))\n######\n\nExpected results would be:\nCurrently working on: argument_data\nhere comes the data: my data\n\nBecause argument_data identifier name is not available in the random_function()'s namespace, this would yield instead:\nCurrently working on argument_data\nTraceback (most recent call last):\n File \"~/main1.py\", line 6, in <module>\n some_module1.random_function('argument_data')\n File \"~/some_module1.py\", line 4, in random_function\n some_internal_var = eval(first_argument)\n File \"<string>\", line 1, in <module>\nNameError: name 'argument_data' is not defined\n\n\nNow, consider the hypotetical usage of a get_indentifier_name_missing_function() which would behave as described above.\nHere's a dummy Python 3.0 code: .\n# main2.py\nimport some_module2\nsome_dictionary_1 = { 'definition_1':'text_1',\n 'definition_2':'text_2',\n 'etc':'etc.' }\nsome_other_dictionary_2 = { 'key_3':'value_3',\n 'key_4':'value_4', \n 'etc':'etc.' }\n#\n# more such stuff\n#\nsome_other_dictionary_n = { 'random_n':'random_n',\n 'etc':'etc.' 
}\n\nfor each_one_of_my_dictionaries in ( some_dictionary_1,\n some_other_dictionary_2,\n ...,\n some_other_dictionary_n ):\n some_module2.some_function(each_one_of_my_dictionaries)\n\n\n# some_module2.py\ndef some_function(a_dictionary_object):\n for _key, _value in a_dictionary_object.items():\n print( get_indentifier_name_missing_function(a_dictionary_object) +\n \" \" +\n str(_key) +\n \" = \" +\n str(_value) )\n######\n\nExpected results would be:\nsome_dictionary_1 definition_1 = text_1\nsome_dictionary_1 definition_2 = text_2\nsome_dictionary_1 etc = etc.\nsome_other_dictionary_2 key_3 = value_3\nsome_other_dictionary_2 key_4 = value_4\nsome_other_dictionary_2 etc = etc.\n......\n......\n......\nsome_other_dictionary_n random_n = random_n\nsome_other_dictionary_n etc = etc.\n\nUnfortunately, get_indentifier_name_missing_function() would not see the 'original' identifier names (some_dictionary_,some_other_dictionary_2,some_other_dictionary_n). It would only see the a_dictionary_object identifier name.\nTherefore the real result would rather be:\na_dictionary_object definition_1 = text_1\na_dictionary_object definition_2 = text_2\na_dictionary_object etc = etc.\na_dictionary_object key_3 = value_3\na_dictionary_object key_4 = value_4\na_dictionary_object etc = etc.\n......\n......\n......\na_dictionary_object random_n = random_n\na_dictionary_object etc = etc.\n\nSo, the reverse of the eval() function won't be that useful in this case.\n\nCurrently, one would need to do this:\n# main2.py same as above, except:\n\n for each_one_of_my_dictionaries_names in ( 'some_dictionary_1',\n 'some_other_dictionary_2',\n '...',\n 'some_other_dictionary_n' ):\n some_module2.some_function( { each_one_of_my_dictionaries_names :\n eval(each_one_of_my_dictionaries_names) } )\n \n \n # some_module2.py\n def some_function(a_dictionary_name_object_container):\n for _dictionary_name, _dictionary_object in a_dictionary_name_object_container.items():\n for _key, _value in _dictionary_object.items():\n print( str(_dictionary_name) +\n \" \" +\n str(_key) +\n \" = \" +\n str(_value) )\n ######\n\n\nIn conclusion:\n\nPython passes only memory addresses as arguments to functions.\nStrings representing the name of an identifier, can only be referenced back to the actual identifier by the eval() function if the name identifier is available in the current namespace.\nA hypothetical reverse of the eval() function, would not be useful in cases where the identifier name is not 'seen' directly by the calling code. E.g. inside any called function.\nCurrently one needs to pass to a function:\n\nthe string representing the identifier name\nthe actual identifier (memory address)\n\n\n\nThis can be achieved by passing both the 'string' and eval('string') to the called function at the same time. I think this is the most 'general' way of solving this egg-chicken problem across arbitrary functions, modules, namespaces, without using corner-case solutions. The only downside is the use of the eval() function which may easily lead to unsecured code. Care must be taken to not feed the eval() function with just about anything, especially unfiltered external-input data.\n",
"Totally possible with the python-varname package (python3):\nfrom varname import nameof\n\ns = 'Hey!'\n\nprint (nameof(s))\n\nOutput:\ns\n\nInstall:\npip3 install varname\n\nOr get the package here:\nhttps://github.com/pwwang/python-varname\n",
"I searched for this question because I wanted a Python program to print assignment statements for some of the variables in the program. For example, it might print \"foo = 3, bar = 21, baz = 432\". The print function would need the variable names in string form. I could have provided my code with the strings \"foo\",\"bar\", and \"baz\", but that felt like repeating myself. After reading the previous answers, I developed the solution below.\nThe globals() function behaves like a dict with variable names (in the form of strings) as keys. I wanted to retrieve from globals() the key corresponding to the value of each variable. The method globals().items() returns a list of tuples; in each tuple the first item is the variable name (as a string) and the second is the variable value. My variablename() function searches through that list to find the variable name(s) that corresponds to the value of the variable whose name I need in string form.\nThe function itertools.ifilter() does the search by testing each tuple in the globals().items() list with the function lambda x: var is globals()[x[0]]. In that function x is the tuple being tested; x[0] is the variable name (as a string) and x[1] is the value. The lambda function tests whether the value of the tested variable is the same as the value of the variable passed to variablename(). In fact, by using the is operator, the lambda function tests whether the name of the tested variable is bound to the exact same object as the variable passed to variablename(). If so, the tuple passes the test and is returned by ifilter().\nThe itertools.ifilter() function actually returns an iterator which doesn't return any results until it is called properly. To get it called properly, I put it inside a list comprehension [tpl[0] for tpl ... globals().items())]. The list comprehension saves only the variable name tpl[0], ignoring the variable value. The list that is created contains one or more names (as strings) that are bound to the value of the variable passed to variablename().\nIn the uses of variablename() shown below, the desired string is returned as an element in a list. In many cases, it will be the only item in the list. If another variable name is assigned the same value, however, the list will be longer.\n>>> def variablename(var):\n... import itertools\n... return [tpl[0] for tpl in \n... itertools.ifilter(lambda x: var is x[1], globals().items())]\n... \n>>> var = {}\n>>> variablename(var)\n['var']\n>>> something_else = 3\n>>> variablename(something_else)\n['something_else']\n>>> yet_another = 3\n>>> variablename(something_else)\n['yet_another', 'something_else']\n\n",
"as long as it's a variable and not a second class, this here works for me:\ndef print_var_name(variable):\n for name in globals():\n if eval(name) == variable:\n print name\nfoo = 123\nprint_var_name(foo)\n>>>foo\n\nthis happens for class members:\nclass xyz:\n def __init__(self):\n pass\nmember = xyz()\nprint_var_name(member)\n>>>member\n\nans this for classes (as example):\nabc = xyz\nprint_var_name(abc)\n>>>abc\n>>>xyz\n\nSo for classes it gives you the name AND the properteries\n",
"This is not possible.\nIn Python, there really isn't any such thing as a \"variable\". What Python really has are \"names\" which can have objects bound to them. It makes no difference to the object what names, if any, it might be bound to. It might be bound to dozens of different names, or none.\nConsider this example:\nfoo = 1\nbar = 1\nbaz = 1\n\nNow, suppose you have the integer object with value 1, and you want to work backwards and find its name. What would you print? Three different names have that object bound to them, and all are equally valid.\nIn Python, a name is a way to access an object, so there is no way to work with names directly. There might be some clever way to hack the Python bytecodes or something to get the value of the name, but that is at best a parlor trick.\nIf you know you want print foo to print \"foo\", you might as well just execute print \"foo\" in the first place.\nEDIT: I have changed the wording slightly to make this more clear. Also, here is an even better example:\nfoo = 1\nbar = foo\nbaz = foo\n\nIn practice, Python reuses the same object for integers with common values like 0 or 1, so the first example should bind the same object to all three names. But this example is crystal clear: the same object is bound to foo, bar, and baz.\n",
"Technically the information is available to you, but as others have asked, how would you make use of it in a sensible way?\n>>> x = 52\n>>> globals()\n{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', \n'x': 52, '__doc__': None, '__package__': None}\n\nThis shows that the variable name is present as a string in the globals() dictionary.\n>>> globals().keys()[2]\n'x'\n\nIn this case it happens to be the third key, but there's no reliable way to know where a given variable name will end up\n>>> for k in globals().keys():\n... if not k.startswith(\"_\"):\n... print k\n...\nx\n>>>\n\nYou could filter out system variables like this, but you're still going to get all of your own items. Just running that code above created another variable \"k\" that changed the position of \"x\" in the dict.\nBut maybe this is a useful start for you. If you tell us what you want this capability for, more helpful information could possibly be given.\n",
"By using the the unpacking operator:\n>>> def tostr(**kwargs):\n return kwargs\n\n>>> var = {}\n>>> something_else = 3\n>>> tostr(var = var,something_else=something_else)\n{'var' = {},'something_else'=3}\n\n",
"You somehow have to refer to the variable you want to print the name of. So it would look like:\nprint varname(something_else)\n\nThere is no such function, but if there were it would be kind of pointless. You have to type out something_else, so you can as well just type quotes to the left and right of it to print the name as a string:\nprint \"something_else\"\n\n",
"What are you trying to achieve? There is absolutely no reason to ever do what you describe, and there is likely a much better solution to the problem you're trying to solve..\nThe most obvious alternative to what you request is a dictionary. For example:\n>>> my_data = {'var': 'something'}\n>>> my_data['something_else'] = 'something'\n>>> print my_data.keys()\n['var', 'something_else']\n>>> print my_data['var']\nsomething\n\nMostly as a.. challenge, I implemented your desired output. Do not use this code, please!\n#!/usr/bin/env python2.6\nclass NewLocals:\n \"\"\"Please don't ever use this code..\"\"\"\n def __init__(self, initial_locals):\n self.prev_locals = list(initial_locals.keys())\n\n def show_new(self, new_locals):\n output = \", \".join(list(set(new_locals) - set(self.prev_locals)))\n self.prev_locals = list(new_locals.keys())\n return output\n# Set up\neww = None\neww = NewLocals(locals())\n\n# \"Working\" requested code\n\nvar = {}\n\nprint eww.show_new(locals()) # Outputs: var\n\nsomething_else = 3\nprint eww.show_new(locals()) # Outputs: something_else\n\n# Further testing\n\nanother_variable = 4\nand_a_final_one = 5\n\nprint eww.show_new(locals()) # Outputs: another_variable, and_a_final_one\n\n",
"Does Django not do this when generating field names?\nhttp://docs.djangoproject.com/en/dev//topics/db/models/#verbose-field-names\nSeems reasonable to me.\n",
"I think this is a cool solution and I suppose the best you can get. But do you see any way to handle the ambigious results, your function may return?\nAs \"is\" operator behaves unexpectedly with integers shows, low integers and strings of the same value get cached by python so that your variablename-function might priovide ambigous results with a high probability. \nIn my case, I would like to create a decorator, that adds a new variable to a class by the varialbename i pass it:\ndef inject(klass, dependency):\nklass.__dict__[\"__\"+variablename(dependency)]=dependency\n\nBut if your method returns ambigous results, how can I know the name of the variable I added?\nvar any_var=\"myvarcontent\"\nvar myvar=\"myvarcontent\"\n@inject(myvar)\nclass myclasss():\n def myclass_method(self):\n print self.__myvar #I can not be sure, that this variable will be set...\n\nMaybe if I will also check the local list I could at least remove the \"dependency\"-Variable from the list, but this will not be a reliable result.\n",
"Here is a succinct variation that lets you specify any directory.\nThe issue with using directories to find anything is that multiple variables can have the same value. So this code returns a list of possible variables.\ndef varname( var, dir=locals()):\n return [ key for key, val in dir.items() if id( val) == id( var)]\n\n",
"I don't know it's right or not, but it worked for me\ndef varname(variable):\n for name in list(globals().keys()):\n expression = f'id({name})'\n if id(variable) == eval(expression):\n return name\n\n",
"it is possible to a limited extent. the answer is similar to the solution by @tamtam .\nThe given example assumes the following assumptions -\n\nYou are searching for a variable by its value\nThe variable has a distinct value\nThe value is in the global namespace\n\nExample:\ntestVar = \"unique value\"\nvarNameAsString = [k for k,v in globals().items() if v == \"unique value\"]\n#\n# the variable \"varNameAsString\" will contain all the variable name that matches\n# the value \"unique value\"\n# for this example, it will be a list of a single entry \"testVar\"\n#\nprint(varNameAsString)\n\nOutput : ['testVar']\nYou can extend this example for any other variable/data type\n",
"I'd like to point out a use case for this that is not an anti-pattern, and there is no better way to do it.\nThis seems to be a missing feature in python.\nThere are a number of functions, like patch.object, that take the name of a method or property to be patched or accessed.\nConsider this:\npatch.object(obj, \"method_name\", new_reg)\nThis can potentially start \"false succeeding\" when you change the name of a method. IE: you can ship a bug, you thought you were testing.... simply because of a bad method name refactor.\nNow consider: varname. This could be an efficient, built-in function. But for now it can work by iterating an object or the caller's frame:\nNow your call can be:\npatch.member(obj, obj.method_name, new_reg)\nAnd the patch function can call:\nvarname(var, obj=obj)\nThis would: assert that the var is bound to the obj and return the name of the member. Or if the obj is not specified, use the callers stack frame to derive it, etc.\nCould be made an efficient built in at some point, but here's a definition that works. I deliberately didn't support builtins, easy to add tho:\nFeel free to stick this in a package called varname.py, and use it in your patch.object calls:\npatch.object(obj, varname(obj, obj.method_name), new_reg)\nNote: this was written for python 3.\nimport inspect\n\ndef _varname_dict(var, dct):\n key_name = None\n for key, val in dct.items():\n if val is var:\n if key_name is not None:\n raise NotImplementedError(\"Duplicate names not supported %s, %s\" % (key_name, key))\n key_name = key\n return key_name\n\ndef _varname_obj(var, obj):\n key_name = None\n for key in dir(obj):\n val = getattr(obj, key)\n equal = val is var\n if equal:\n if key_name is not None:\n raise NotImplementedError(\"Duplicate names not supported %s, %s\" % (key_name, key))\n key_name = key\n return key_name\n\ndef varname(var, obj=None):\n if obj is None:\n if hasattr(var, \"__self__\"):\n return var.__name__\n caller_frame = inspect.currentframe().f_back\n try:\n ret = _varname_dict(var, caller_frame.f_locals)\n except NameError:\n ret = _varname_dict(var, caller_frame.f_globals)\n else:\n ret = _varname_obj(var, obj)\n if ret is None:\n raise NameError(\"Name not found. (Note: builtins not supported)\")\n return ret\n\n",
"This will work for simnple data types (str, int, float, list etc.)\n\n>>> def my_print(var_str) : \n print var_str+':', globals()[var_str]\n>>> a = 5\n>>> b = ['hello', ',world!']\n>>> my_print('a')\na: 5\n>>> my_print('b')\nb: ['hello', ',world!']\n\n",
"It's not very Pythonesque but I was curious and found this solution. You need to duplicate the globals dictionary since its size will change as soon as you define a new variable.\ndef var_to_name(var):\n # noinspection PyTypeChecker\n dict_vars = dict(globals().items())\n\n var_string = None\n\n for name in dict_vars.keys():\n if dict_vars[name] is var:\n var_string = name\n break\n\n return var_string\n\n\nif __name__ == \"__main__\":\n test = 3\n print(f\"test = {test}\")\n print(f\"variable name: {var_to_name(test)}\")\n\nwhich returns:\ntest = 3\nvariable name: test\n\n",
"To get the variable name of var as a string:\nvar = 1000\nvar_name = [k for k,v in locals().items() if v == var][0] \nprint(var_name) # ---> outputs 'var'\n\n",
"Thanks @restrepo, this was exactly what I needed to create a standard save_df_to_file() function. For this, I made some small changes to your tostr() function. Hope this will help someone else:\ndef variabletostr(**df):\n variablename = list(df.keys())[0]\n return variablename\n \n variabletostr(df=0)\n\n",
"The original question is pretty old, but I found an almost solution with Python 3. (I say almost because I think you can get close to a solution but I do not believe there is a solution concrete enough to satisfy the exact request).\nFirst, you might want to consider the following:\n\nobjects are a core concept in Python, and they may be assigned a variable, but the variable itself is a bound name (think pointer or reference) not the object itself\nvar is just a variable name bound to an object and that object could have more than one reference (in your example it does not seem to)\nin this case, var appears to be in the global namespace so you can use the global builtin conveniently named global\ndifferent name references to the same object will all share the same id which can be checked by running the id builtin id like so: id(var)\n\nThis function grabs the global variables and filters out the ones matching the content of your variable.\ndef get_bound_names(target_variable):\n '''Returns a list of bound object names.'''\n return [k for k, v in globals().items() if v is target_variable]\n\nThe real challenge here is that you are not guaranteed to get back the variable name by itself. It will be a list, but that list will contain the variable name you are looking for. If your target variable (bound to an object) is really the only bound name, you could access it this way:\nbound_names = get_variable_names(target_variable)\nvar_string = bound_names[0]\n\n",
"Possible for Python >= 3.8 (with f'{var=}' string )\nNot sure if this could be used in production code, but in Python 3.8(and up) you can use f' string debugging specifier. Add = at the end of an expression, and it will print both the expression and its value:\nmy_salary_variable = 5000\nprint(f'{my_salary_variable = }')\n\nOutput:\nmy_salary_variable = 5000\n\nTo uncover this magic here is another example:\nparam_list = f'{my_salary_variable=}'.split('=')\nprint(param_list)\n\nOutput:\n['my_salary_variable', '5000']\n\nExplanation: when you put '=' after your var in f'string, it returns a string with variable name, '=' and its value. Split it with .split('=') and get a List of 2 strings, [0] - your_variable_name, and [1] - actual object of variable.\nPick up [0] element of the list if you need variable name only.\nmy_salary_variable = 5000\nparam_list = f'{my_salary_variable=}'.split('=')\nprint(param_list[0])\nOutput:\nmy_salary_variable\n\nor, in one line\n\nmy_salary_variable = 5000\nprint(f'{my_salary_variable=}'.split('=')[0])\nOutput:\nmy_salary_variable\n\nWorks with functions too:\ndef my_super_calc_foo(number):\n return number**3\n\nprint(f'{my_super_calc_foo(5) = }')\nprint(f'{my_super_calc_foo(5)=}'.split('='))\n\nOutput:\nmy_super_calc_foo(5) = 125\n['my_super_calc_foo(5)', '125']\n\nProcess finished with exit code 0\n\n"
] | [
62,
41,
14,
12,
7,
6,
3,
2,
2,
2,
2,
2,
2,
2,
1,
0,
0,
0,
0,
0,
0
] | [
"This module works for converting variables names to a string:\nhttps://pypi.org/project/varname/\nUse it like this:\nfrom varname import nameof\n\nvariable=0\n\nname=nameof(variable)\n\nprint(name)\n\n//output: variable\n\nInstall it by:\npip install varname\n\n",
"print \"var\"\nprint \"something_else\"\n\nOr did you mean something_else?\n"
] | [
-1,
-3
] | [
"python",
"string",
"variables"
] | stackoverflow_0001534504_python_string_variables.txt |
Q:
Trouble with fading image into viewport through CSS/JS
I am attempting to make an image on a website fade in and up when a user scrolls the image into the viewport. The code I have so far is below. However, when I run this code I get a 404 error output. Any assistance is appreciated! I am quite new to JS and have been trying to figure this out for a while.
Here is my CSS.
.section3 {
opacity: 0;
transform: translateY(20vh);
visibility: hidden;
transition: opacity 0.6s ease-out, transform 1.2s ease-out;
will-change: opacity, visibility;
}
.fade {
opacity: 1;
transform: none;
visibility: visible;
}
Below is the HTML and JS.
<section id="section3" class="section3">
<img style="width: 100%;" src="lovethyneighbor.jpg">
</section>
<script>
var section3 = document.getElementById("section3");
var location = section3.getBoundingClientRect();
if (location.top >= 0) {
document.getElementById("section3").classList.add("fade");
} else {
document.getElementById("section3").classList.add("section3");
}
</script>
A:
Introducing the Intersection Observer API! This is built into modern browsers and is a great tool for triggering an event when an element enters the viewport.
It's a really powerful tool and I'd highly suggest using this over getBoundingClientRect(). One of the main reasons for this is with your code:
if (location.top >= 0) {
document.getElementById("section3").classList.add("fade");
}
else {
document.getElementById("section3").classList.add("section3");
}
You will have to run a function on every single mousewheel event, which is unreliable and can hurt performance. If you're using Intersection Observer, the API will "watch" your page and will run a function whenever the element is in the viewport.
The code below is explained through inline comments.
Scalable with multiple elements that need different animations
// the sections/containers
const sections = document.querySelectorAll("section.section");
// options for the intersection function
const options = {
root: null,
threshold: 0.5, // how much of the element should be visible before the function is triggered? from 0 - 1
rootMargin: "0px 0px 0px 0px" // default rootmargin value
};
// the observer - with foreach we can trigger multiple elements with multiple animations if need be
let observer = new IntersectionObserver((entries) => {
entries.forEach((entry) => {
// the element that is going to be animated
const block = entry.target.querySelector("img.fader");
// elements to be animated, e.g.
// if multiple elements with animations need to run inside the same section
const animationBlocks = entry.target.querySelectorAll("[data-animation]");
// when the element is triggered
if (entry.isIntersecting) {
// foreach, if multiple animations need to run on the same element
animationBlocks.forEach((animation) => {
animationClass = animation.dataset.animation;
// adding the data-animation class class to the element, so the animation can run
animation.classList.add(animationClass);
});
}
});
}, options);
observer.observe(document.querySelector("section.section"));
// running the animations
document.addEventListener("DOMContentLoaded", function() {
Array.from(sections).forEach(function(element) {
observer.observe(element);
});
});
body {
height: 300vh;
display: flex;
align-items: center;
justify-content: center;
flex-direction: column;
background-color: teal;
gap: 400px;
}
/* starting values */
[data-animation="fadeInUp"] {
opacity: 0;
transform: translate3d(0, 20px, 0);
}
/* when classname is applied to the element, run the animation */
.fadeInUp {
animation-name: fadeInUp;
animation-duration: 0.6s;
animation-fill-mode: both;
}
/* the animation */
@keyframes fadeInUp {
from {
opacity: 0;
transform: translate3d(0, 20px, 0);
}
to {
opacity: 1;
transform: translate3d(0, 0, 0);
}
}
<section id="section2" class="section section2">
<img data-animation="fadeInUp" class="fader" style="width: 100%;" src="https://picsum.photos/200/300">
</section>
<section id="section3" class="section section3">
<img data-animation="fadeInUp" class="fader" style="width: 100%;" src="https://picsum.photos/200/300">
</section>
Simple function with single element and single animation
const sections = document.querySelectorAll("section.section");
const options = {
root: null,
threshold: 0.5
};
let observer = new IntersectionObserver((entries) => {
entries.forEach((entry) => {
const block = entry.target.querySelector("img.fader");
if (entry.isIntersecting) {
block.classList.add('fadeInUp');
}
});
}, options);
observer.observe(document.querySelector("section.section"));
// running the animations
document.addEventListener("DOMContentLoaded", function() {
Array.from(sections).forEach(function(element) {
observer.observe(element);
});
});
body {
height: 300vh;
display: flex;
align-items: center;
justify-content: center;
flex-direction: column;
background-color: teal;
gap: 400px;
}
img.fader {
opacity: 0;
transform: translate3d(0, 20px, 0);
}
.fadeInUp {
animation-name: fadeInUp;
animation-duration: 0.6s;
animation-fill-mode: both;
}
/* the animation */
@keyframes fadeInUp {
from {
opacity: 0;
transform: translate3d(0, 20px, 0);
}
to {
opacity: 1;
transform: translate3d(0, 0, 0);
}
}
<section id="section2" class="section section2">
<img data-animation="fadeInUp" class="fader" style="width: 100%;" src="https://picsum.photos/200/300">
</section>
<section id="section3" class="section section3">
<img class="fader" style="width: 100%;" src="https://picsum.photos/200/300">
</section>
| Trouble with fading image into viewport through CSS/JS | I am attempting to make an image on a website fade in and up when a user scrolls the image into the viewport. The code I have so far is below. However, when I run this code I get a 404 error output. Any assistance is appreciated! I am quite new to JS and have been trying to figure this out for a while.
Here is my CSS.
.section3 {
opacity: 0;
transform: translateY(20vh);
visibility: hidden;
transition: opacity 0.6s ease-out, transform 1.2s ease-out;
will-change: opacity, visibility;
}
.fade {
opacity: 1;
transform: none;
visibility: visible;
}
Below is the HTML and JS.
<section id="section3" class="section3">
<img style="width: 100%;" src="lovethyneighbor.jpg">
</section>
<script>
var section3 = document.getElementById("section3");
var location = section3.getBoundingClientRect();
if (location.top >= 0) {
document.getElementById("section3").classList.add("fade");
} else {
document.getElementById("section3").classList.add("section3");
}
</script>
| [
"Introducing the Intersection Observer API! This is included in JavaScript and is a great tool for triggering an event when an element is in the viewport.\nIt's a really powerful tool and I'd highly suggest using this over getBoundingClientRect(). One of the main reasons for this is with your code:\nif (location.top >= 0) {\n document.getElementById(\"section3\").classList.add(\"fade\");\n} \n\nelse { \n document.getElementById(\"section3\").classList.add(\"section3\");\n}\n\nYou will have to run a function on every single mousewheel event, which is unreliable and can hurt performance. If you're using Intersection Observer, the API will \"watch\" your page and will run a function whenever the element is in the viewport.\nThe code below is explained through inline comments.\nScalable with multiple elements that need different animations\n\n\n// the sections/containers\nconst sections = document.querySelectorAll(\"section.section\");\n\n// options for the intersection function\nconst options = {\n root: null,\n threshold: 0.5, // how much of the element should be visible before the function is triggered? from 0 - 1\n rootMargin: \"0px 0px 0px 0px\" // default rootmargin value\n};\n\n// the observer - with foreach we can trigger multiple elements with multiple animations if need be\nlet observer = new IntersectionObserver((entries) => {\n entries.forEach((entry) => {\n\n // the element that is going to be animated\n const block = entry.target.querySelector(\"img.fader\");\n\n // elements to be animated, e.g.\n // if multiple elements with animations need to run inside the same section\n const animationBlocks = entry.target.querySelectorAll(\"[data-animation]\");\n\n // when the element is triggered\n if (entry.isIntersecting) {\n // foreach, if multiple animations need to run on the same element\n animationBlocks.forEach((animation) => {\n animationClass = animation.dataset.animation;\n\n // adding the data-animation class class to the element, so the animation can run\n animation.classList.add(animationClass);\n });\n }\n });\n}, options);\n\nobserver.observe(document.querySelector(\"section.section\"));\n\n// running the animations\ndocument.addEventListener(\"DOMContentLoaded\", function() {\n Array.from(sections).forEach(function(element) {\n observer.observe(element);\n });\n});\nbody {\n height: 300vh;\n display: flex;\n align-items: center;\n justify-content: center;\n flex-direction: column;\n background-color: teal;\n gap: 400px;\n}\n\n/* starting values */\n[data-animation=\"fadeInUp\"] {\n opacity: 0;\n transform: translate3d(0, 20px, 0);\n}\n\n/* when classname is applied to the element, run the animation */\n.fadeInUp {\n animation-name: fadeInUp;\n animation-duration: 0.6s;\n animation-fill-mode: both;\n}\n\n/* the animation */\n@keyframes fadeInUp {\n from {\n opacity: 0;\n transform: translate3d(0, 20px, 0);\n }\n to {\n opacity: 1;\n transform: translate3d(0, 0, 0);\n }\n}\n<section id=\"section2\" class=\"section section2\">\n <img data-animation=\"fadeInUp\" class=\"fader\" style=\"width: 100%;\" src=\"https://picsum.photos/200/300\">\n</section>\n\n<section id=\"section3\" class=\"section section3\">\n <img data-animation=\"fadeInUp\" class=\"fader\" style=\"width: 100%;\" src=\"https://picsum.photos/200/300\">\n</section>\n\n\n\nSimple function with single element and single animation\n\n\nconst sections = document.querySelectorAll(\"section.section\");\n\nconst options = {\n root: null,\n threshold: 0.5\n};\n\nlet observer = new IntersectionObserver((entries) => {\n 
entries.forEach((entry) => {\n\n const block = entry.target.querySelector(\"img.fader\");\n\n if (entry.isIntersecting) {\n block.classList.add('fadeInUp');\n }\n });\n}, options);\n\nobserver.observe(document.querySelector(\"section.section\"));\n\n// running the animations\ndocument.addEventListener(\"DOMContentLoaded\", function() {\n Array.from(sections).forEach(function(element) {\n observer.observe(element);\n });\n});\nbody {\n height: 300vh;\n display: flex;\n align-items: center;\n justify-content: center;\n flex-direction: column;\n background-color: teal;\n gap: 400px;\n}\n\nimg.fader {\n opacity: 0;\n transform: translate3d(0, 20px, 0);\n}\n\n.fadeInUp {\n animation-name: fadeInUp;\n animation-duration: 0.6s;\n animation-fill-mode: both;\n}\n\n\n/* the animation */\n\n@keyframes fadeInUp {\n from {\n opacity: 0;\n transform: translate3d(0, 20px, 0);\n }\n to {\n opacity: 1;\n transform: translate3d(0, 0, 0);\n }\n}\n<section id=\"section2\" class=\"section section2\">\n <img data-animation=\"fadeInUp\" class=\"fader\" style=\"width: 100%;\" src=\"https://picsum.photos/200/300\">\n</section>\n\n<section id=\"section3\" class=\"section section3\">\n <img class=\"fader\" style=\"width: 100%;\" src=\"https://picsum.photos/200/300\">\n</section>\n\n\n\n"
] | [
1
] | [] | [] | [
"css",
"getboundingclientrect",
"html",
"javascript"
] | stackoverflow_0074670929_css_getboundingclientrect_html_javascript.txt |
Q:
How to display search results using React Typescript?
This is the code for the interface:
export interface ActorAttributes {
TYPE?: string,
NAME?: string,
}
export interface MovieAttributes {
OBJECTID: number,
SID: string,
NAME: string,
DIRECTOR: string,
DESCRIP: string,
}
App.tsx code:
import { useEffect, useState } from 'react';
import { searchMovies, searchActors, MovieAttributes, ActorAttributes } from "@utils/atts"
const Home: React.FC = () => {
const [search, setSearch] = useState(false);
const [movieSearch, setMovieSearch] = useState<MovieAttributes[]>([]);
const [actorSearch, setActorSearch] = useState<ActorAttributes>([]);
const [searchTerm, setSearchTerm] = useState('');
useEffect(() => {
if (searchTerm.length > 0) {
setSearch(true);
searchMovies(searchTerm).then(results => {
setMovieSearch(results);
});
searchActors(searchTerm, movieSearch[1].SID).then(results => {
setActorSearch(results);
});
setSearch(false);
}
}, [searchTerm, movieSearch]);
const handleSearchTermChange = (event: React.ChangeEvent<HTMLInputElement>) => {
setSearchTerm(event.target.value);
}
return (
<div>
<input type="text" value={searchTerm} onChange={handleSearchTermChange} />
{search && <p>Searching...</p>}
{movieSearch.length > 0 && <p>Found {movieSearch.length} movies</p>}
{actorSearch.length > 0 && <p>Found {actorSearch.length} actors</p>}
</div>
);
}
So when a user searches a movie name or actor it's able to display the number of movies and actors found, but I'm curious how I would display the data attributes (i.e. NAME, DIRECTOR, DESCRIP, etc.)
This is what I've tried so far, but it just displays errors stating that the movieAttributes doesn't contain toLowerCase(), and I'm not quite sure where to go from here or if I'm on the right track. I apologize in advance for the errors within my code as I am fairly new to React. If anyone has any tips, ideas, suggestions, etc. please feel free to leave a comment.
<div className="App">
<ul className="posts">
<input type="text" onChange={handleSearchTermChange} />
{movieSearch.map((movieSearch) => {
if (searchTerm == "" || movieSearch.toLowerCase().includes(searchTerm.toLowerCase())) {
return (
<li key={movieSearch.OBJECTID}>
<h3>{movieSearch.NAME}</h3>
<p>{movieSearch.DIRECTOR}</p>
<p>{movieSearch.DESCRIP}</p>
</li>
);
}
return null;
)}
</ul>
/div>
A:
There are some problems in your code.
actorSearch state type should be ActorAttributes[].
You should remove movieSearch from the useEffect dependencies; otherwise the effect runs again every time that state updates, which is not what you want.
You're using movieSearch right after updating its value. That doesn't work because state setter functions in React are async. Instead, you should use another variable to hold the new movieSearch data inside useEffect.
In your .map function, you're reusing the same name as your state variable (movieSearch), which is confusing and should be avoided.
You're calling toLowerCase() on the MovieAttributes object, but you should call it on the NAME property instead.
So hope this helps you:
import { useEffect, useState } from 'react';
import { searchMovies, searchActors, MovieAttributes, ActorAttributes } from "@utils/atts"
const Home: React.FC = () => {
const [search, setSearch] = useState(false);
const [movieSearch, setMovieSearch] = useState<MovieAttributes[]>([]);
const [actorSearch, setActorSearch] = useState<ActorAttributes[]>([]);
const [searchTerm, setSearchTerm] = useState('');
useEffect(() => {
if (searchTerm.length > 0) {
setSearch(true);
let newMovieSearch = [...movieSearch];
searchMovies(searchTerm).then(results => {
newMovieSearch = results;
setMovieSearch(results);
});
searchActors(searchTerm, newMovieSearch[1].SID).then(results => {
setActorSearch(results);
});
setSearch(false);
}
}, [searchTerm]);
const handleSearchTermChange = (event: React.ChangeEvent<HTMLInputElement>) => {
setSearchTerm(event.target.value);
}
return (
<div>
<input type="text" value={searchTerm} onChange={handleSearchTermChange} />
{search && <p>Searching...</p>}
{movieSearch.length > 0 && <p>Found {movieSearch.length} movies</p>}
{actorSearch.length > 0 && <p>Found {actorSearch.length} actors</p>}
</div>
);
}
<div className="App">
<ul className="posts">
<input type="text" onChange={handleSearchTermChange} />
{movieSearch.map((movie) => {
if (searchTerm == "" || movie.NAME.toLowerCase().includes(searchTerm.toLowerCase())) {
return (
<li key={movie.OBJECTID}>
<h3>{movie.NAME}</h3>
<p>{movie.DIRECTOR}</p>
<p>{movie.DESCRIP}</p>
</li>
);
}
return null;
})}
</ul>
</div>
| How to display search results using React Typescript? | This is the code for the interface:
export interface ActorAttributes {
TYPE?: string,
NAME?: string,
}
export interface MovieAttributes {
OBJECTID: number,
SID: string,
NAME: string,
DIRECTOR: string,
DESCRIP: string,
}
App.tsx code:
import { useEffect, useState } from 'react';
import { searchMovies, searchActors, MovieAttributes, ActorAttributes } from "@utils/atts"
const Home: React.FC = () => {
const [search, setSearch] = useState(false);
const [movieSearch, setMovieSearch] = useState<MovieAttributes[]>([]);
const [actorSearch, setActorSearch] = useState<ActorAttributes>([]);
const [searchTerm, setSearchTerm] = useState('');
useEffect(() => {
if (searchTerm.length > 0) {
setSearch(true);
searchMovies(searchTerm).then(results => {
setMovieSearch(results);
});
searchActors(searchTerm, movieSearch[1].SID).then(results => {
setActorSearch(results);
});
setSearch(false);
}
}, [searchTerm, movieSearch]);
const handleSearchTermChange = (event: React.ChangeEvent<HTMLInputElement>) => {
setSearchTerm(event.target.value);
}
return (
<div>
<input type="text" value={searchTerm} onChange={handleSearchTermChange} />
{search && <p>Searching...</p>}
{movieSearch.length > 0 && <p>Found {movieSearch.length} movies</p>}
{actorSearch.length > 0 && <p>Found {actorSearch.length} actors</p>}
</div>
);
}
So when a user searches a movie name or actor it's able to display the amount of movies and actors found but I'm curious on how I would display the data attributes (i.e. NAME, DIRECTOR, DESCRIP, etc.)
This is what I've tried so far, but it just displays errors stating that the movieAttributes doesn't contain toLowerCase() and I'm not quiet sure where to go from here or if I'm on the right track. I apologize in advance for the errors within my code as I am fairly new to react. If anyone has any tips, ideas, suggestions, etc. please feel free to leave a comment.
<div className="App">
<ul className="posts">
<input type="text" onChange={handleSearchTermChange} />
{movieSearch.map((movieSearch) => {
if (searchTerm == "" || movieSearch.toLowerCase().includes(searchTerm.toLowerCase())) {
return (
<li key={movieSearch.OBJECTID}>
<h3>{movieSearch.NAME}</h3>
<p>{movieSearch.DIRECTOR}</p>
<p>{movieSearch.DESCRIP}</p>
</li>
);
}
return null;
)}
</ul>
/div>
| [
"There are some problems in your code.\n\nactorSearch state type should be ActorAttributes[].\n\nYou should remove movieSearch from useEffect dependencies. Because each time the state gets updated, the useEffect gets executed but it's wrong.\n\nYou're using movieSearch right after updating its value. It's wrong because set state functions in react are async. Instead, you should use another variable to hold the new movieSearch data in useEffect.\n\nIn your .map function, you're using the same name as your state name (movieSearch) which is wrong.\n\nYou're executing toLowerCase() on the MovieAttributes object but instead, you should do it on NAME property.\n\n\nSo hope this helps you:\nimport { useEffect, useState } from 'react';\nimport { searchMovies, searchActors, MovieAttributes, ActorAttributes } from \"@utils/atts\"\n\nconst Home: React.FC = () => {\n const [search, setSearch] = useState(false);\n const [movieSearch, setMovieSearch] = useState<MovieAttributes[]>([]);\n const [actorSearch, setActorSearch] = useState<ActorAttributes[]>([]);\n const [searchTerm, setSearchTerm] = useState('');\n\n useEffect(() => {\n if (searchTerm.length > 0) {\n setSearch(true);\n\n let newMovieSearch = [...movieSearch];\n\n searchMovies(searchTerm).then(results => {\n newMovieSearch = results;\n setMovieSearch(results);\n });\n\n searchActors(searchTerm, newMovieSearch[1].SID).then(results => {\n setActorSearch(results);\n });\n\n setSearch(false);\n }\n }, [searchTerm]);\n\n const handleSearchTermChange = (event: React.ChangeEvent<HTMLInputElement>) => {\n setSearchTerm(event.target.value);\n }\n\n return (\n <div>\n <input type=\"text\" value={searchTerm} onChange={handleSearchTermChange} />\n {search && <p>Searching...</p>}\n {movieSearch.length > 0 && <p>Found {movieSearch.length} movies</p>}\n {actorSearch.length > 0 && <p>Found {actorSearch.length} actors</p>}\n </div>\n );\n}\n\n<div className=\"App\">\n <ul className=\"posts\">\n <input type=\"text\" onChange={handleSearchTermChange} />\n {movieSearch.map((movie) => {\n if (searchTerm == \"\" || movie.NAME.toLowerCase().includes(searchTerm.toLowerCase())) {\n return (\n <li key={movie.OBJECTID}>\n <h3>{movie.NAME}</h3>\n <p>{movie.DIRECTOR}</p>\n <p>{movie.DESCRIP}</p>\n </li>\n );\n }\n return null;\n )}\n </ul>\n</div>\n\n"
] | [
0
] | [] | [] | [
"react_hooks",
"reactjs",
"typescript"
] | stackoverflow_0074670957_react_hooks_reactjs_typescript.txt |
Q:
How to toggle the css of a mapped button?
I'm just trying to figure out how to toggle a css class for an individual button that is generated from a mapped array.
My code works, but it toggles every mapped button, not just the button selected.
<div className='synonym-keeper'>
{synArr.map((syn) => (
<button
className={`synonym ${isPressed && 'active'}`}
onClick={() => toggleIsPressed(!isPressed)}
>
{syn}
</button>
))}
</div>
How do I make just the selected button's css toggle?
A:
Create another component called ToggleButton and keep the toggle logic in it. That way you can toggle each button individually.
This would also work:
const synArr = ["button 1", "button 2", "button 3"];
const ToggleButton = ({ text }) => {
const [isPressed, toggleIsPressed] = React.useState(false);
return (
<button
className={`synonym ${isPressed && "active"}`}
onClick={() => toggleIsPressed(!isPressed)}
>
{text}
</button>
);
};
function App() {
return (
<div className="synonym-keeper">
{synArr.map((syn) => (
<ToggleButton text={syn} key={syn}/>
))}
</div>
);
}
ReactDOM.render(<App />, document.querySelector('.react'));
.synonym.active {
background-color: green;
}
<script crossorigin src="https://unpkg.com/react@16/umd/react.development.js"></script>
<script crossorigin src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script>
<div class='react'></div>
A:
I resolved it by making an array for the className and changing its contents onClick, as below:
<div className='synonym-keeper'>
{synArr.map((syn, idx) => (
<button
className={`synonym ${isPressed[idx]}`}
onClick={() => {
const newIsPressed = [...isPressed];
newIsPressed[idx] === ''
? (newIsPressed[idx] = 'active')
: (newIsPressed[idx] = '');
setIsPressed(newIsPressed);
}}
>
{syn}
</button>
))}
</div>
This resolves the issue and allows me to select one or more buttons sequentially. I really like the cleanliness of Amila's answer though so I will mark theirs as accepted.
| How to toggle the css of a mapped button? | I'm just trying to figure out how to toggle a css class for an individual button that is generated from a mapped array.
My code works, but it toggles every mapped button, not just the button selected.
<div className='synonym-keeper'>
{synArr.map((syn) => (
<button
className={`synonym ${isPressed && 'active'}`}
onClick={() => toggleIsPressed(!isPressed)}
>
{syn}
</button>
))}
</div>
How do I make just the selected button's css toggle?
| [
"Create another component called Togglebutton and keep the toggle logic in it. That way you can toggle the individual button.\nThis would also work:\n\n\nconst synArr = [\"button 1\", \"button 2\", \"button 3\"];\n\nconst ToggleButton = ({ text }) => {\n const [isPressed, toggleIsPressed] = React.useState(false);\n\n return (\n <button\n className={`synonym ${isPressed && \"active\"}`}\n onClick={() => toggleIsPressed(!isPressed)}\n >\n {text}\n </button>\n );\n};\n\nfunction App() {\n return (\n <div className=\"synonym-keeper\">\n {synArr.map((syn) => (\n <ToggleButton text={syn} key={syn}/>\n ))}\n </div>\n );\n}\n\nReactDOM.render(<App />, document.querySelector('.react'));\n.synonym.active {\n background-color: green;\n}\n<script crossorigin src=\"https://unpkg.com/react@16/umd/react.development.js\"></script>\n<script crossorigin src=\"https://unpkg.com/react-dom@16/umd/react-dom.development.js\"></script>\n<div class='react'></div>\n\n\n\n",
"I resolved it by making an array for the className and changing its contents onClick, as below:\n <div className='synonym-keeper'>\n {synArr.map((syn, idx) => (\n <button\n className={`synonym ${isPressed[idx]}`}\n onClick={() => {\n const newIsPressed = [...isPressed];\n newIsPressed[idx] === ''\n ? (newIsPressed[idx] = 'active')\n : (newIsPressed[idx] = '');\n setIsPressed(newIsPressed);\n }}\n >\n {syn}\n </button>\n ))}\n </div>\n\nThis resolves the issue and allows me to select one or more buttons sequentially. I really like the cleanliness of Amila's answer though so I will mark theirs as accepted.\n"
] | [
2,
0
] | [] | [] | [
"css",
"javascript",
"reactjs"
] | stackoverflow_0074664526_css_javascript_reactjs.txt |
Q:
Is there a better way to type check my search by keyword query
My query and queryStr types seem excessive; they work perfectly fine, I'm just wondering if they could be simplified. The types come from my product model (ProductDoc); could generics be used instead? I've tried a few combinations but none made sense. Please let me know if it's better to use generics or if the current types are the best approach.
export class ApiFeatures {
query: Query<
(ProductDoc & { _id: Types.ObjectId })[],
ProductDoc & { _id: Types.ObjectId },
{},
ProductDoc
>;
queryStr: ParsedQs;
constructor(
query: Query<
(ProductDoc & { _id: Types.ObjectId })[],
ProductDoc & { _id: Types.ObjectId },
{},
ProductDoc
>,
queryStr: ParsedQs
) {
this.query = query;
this.queryStr = queryStr;
}
search() {
const keyword = this.queryStr.keyword
? {
title: {
$regex: this.queryStr?.keyword,
$options: "i",
},
}
: {};
this.query = this.query.find({ ...keyword });
return this;
}
}
A:
Without seeing the implementation of the Query<> type it's not possible to fully answer this question, but with regards to whether you should use a generic - the Query<> type is already a generic. You can, however, simplify your ProductDoc & { _id: Types.ObjectId } in the places where it is used by creating a new type that extends ProductDoc with the _id field.
interface ProductDocWithId extends ProductDoc {
_id: Types.ObjectId;
}
export class ApiFeatures {
query: Query<ProductDocWithId[], ProductDocWithId, {}, ProductDoc>;
queryStr: ParsedQs;
// implementation
}
| Is there a better way to type check my search by keyword query | My query and queryStr types seem excessive, they work perfectly fine, Im just wondering if they could be simplified. The types are coming my product model(ProductDoc), could generics be used instead? I've tried a fews combination but none made sense, Please let me know if its better to use generics or if the current types are the best approach.
export class ApiFeatures {
query: Query<
(ProductDoc & { _id: Types.ObjectId })[],
ProductDoc & { _id: Types.ObjectId },
{},
ProductDoc
>;
queryStr: ParsedQs;
constructor(
query: Query<
(ProductDoc & { _id: Types.ObjectId })[],
ProductDoc & { _id: Types.ObjectId },
{},
ProductDoc
>,
queryStr: ParsedQs
) {
this.query = query;
this.queryStr = queryStr;
}
search() {
const keyword = this.queryStr.keyword
? {
title: {
$regex: this.queryStr?.keyword,
$options: "i",
},
}
: {};
this.query = this.query.find({ ...keyword });
return this;
}
}
| [
"Without seeing the implementation of the Query<> type it's not possible to fully answer this question, but with regards to whether you should use a generic - the Query<> type is already a generic. You can, however, simplify your ProductDoc & { _id: Types.ObjectId } in the places where it is used by creating a new type that extends ProductDoc with the _id field.\ninterface ProductDocWithId extends ProductDoc {\n _id: Types.ObjectId;\n}\n\nexport class ApiFeatures {\n query: Query<ProductDocWithId[], ProductDocWithId, {}, ProductDoc>;\n queryStr: ParsedQs;\n\n // implementation\n}\n\n"
] | [
0
] | [] | [] | [
"typescript"
] | stackoverflow_0074668997_typescript.txt |
Q:
AWS S3 Boto3 Python - An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I am having a problem implementing delete_object.
def delete_image_from_s3(img=None):
if img:
try:
response = client.delete_object(
Bucket='my-bucket',
Key='uploads/img.jpg',
)
print(response)
except ClientError as ce:
print("error", ce)
Whenever I send a request to delete a certain file, I keep receiving this error, caught by the exception handler:
An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I know it has something to do with my policies; I already set the required policies to allow it.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::my-bucket/*"
}
]
}
I have full access set for my IAM user for S3.
Or does it have something to do with the bucket being public? Or am I just missing something? Any suggestions will do, thanks for responding.
A:
The policy you have shown appears to be a Bucket Policy that is assigned to a specific bucket. This policy is granting anyone in the world permission to use your S3 bucket, so it is not recommended from a security viewpoint. You should remove this bucket policy.
You have mentioned that the provided code is running on "localhost" -- I will presume this means you are running it on your own computer.
In order to make API calls to AWS from that code, you will need to provide AWS credentials. Typically, these credentials are stored in the ~/.aws/credentials file by running the AWS CLI aws configure command. You would need to provide an Access Key and Secret Key associated with an IAM User (via the Security Credentials tab in the IAM management console).
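As an illustration only, a minimal sketch of building the client from such a profile (the profile name "default" is an assumption; the bucket and key are taken from the question):
import boto3
from botocore.exceptions import ClientError

# Uses the credentials stored by `aws configure` in ~/.aws/credentials
session = boto3.Session(profile_name="default")
client = session.client("s3")

try:
    client.delete_object(Bucket="my-bucket", Key="uploads/img.jpg")
except ClientError as ce:
    print("error", ce)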
You would then need to assign permissions to the IAM User so that they can use the S3 bucket. Note that the permissions are given to the IAM User, not the bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::my-bucket/*"
}
]
}
The difference here is that there is no Principal because it is directly attached to the IAM User.
| AWS S3 Boto3 Python - An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied | I am having a problem implementing delete_object.
def delete_image_from_s3(img=None):
if img:
try:
response = client.delete_object(
Bucket='my-bucket',
Key='uploads/img.jpg',
)
print(response)
except ClientError as ce:
print("error", ce)
Whenever I send a request to delete a certain file, I keep receiving this error, caught by the exception handler:
An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I know it has something to do with my policies; I already set the required policies to allow it.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"s3:GetObject",
"s3:GetObjectAcl",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::my-bucket/*"
}
]
}
I have full access set for my IAM user for S3.
Or does it have something to do with the bucket being public? Or am I just missing something? Any suggestions will do, thanks for responding.
| [
"The policy you have shown appears to be a Bucket Policy that is assigned to a specific bucket. This policy is granting anyone in the world permission to use your S3 bucket, so it is not recommended from a security viewpoint. You should remove this bucket policy.\nYou have mentioned that the provided code is running on \"localhost\" -- I will presume this means you are running it on your own computer.\nIn order to make API calls to AWS from that code, you will need to provide AWS credentials. Typically, these credentials are stored in the ~/.aws/credentials file by running the AWS CLI aws configure command. You would need to provide an Access Key and Secret Key associated with an IAM User (via the Security Credentials tab in the IAM management console).\nYou would then need to assign permissions to the IAM User so that they can use the S3 bucket. Note that the permissions are given to the IAM User, not the bucket:\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"AllowPublicRead\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:GetObject\",\n \"s3:GetObjectAcl\",\n \"s3:PutObject\",\n \"s3:PutObjectAcl\",\n \"s3:DeleteObject\"\n ],\n \"Resource\": \"arn:aws:s3:::my-bucket/*\"\n }\n ]\n}\n\nThe difference here is that there is no Principal because it is directly attached to the IAM User.\n"
] | [
0
] | [] | [] | [
"amazon_s3",
"amazon_web_services",
"boto3",
"python"
] | stackoverflow_0074669404_amazon_s3_amazon_web_services_boto3_python.txt |
Q:
How do you prevent videos from being recorded on the browser by the 'CoCoCut'?
How do you prevent videos from being recorded on the browser by the 'CoCoCut'?
A:
It's simple: you can't. Not unless you code something that detects whether someone has CoCoCut and refuses to play the video entirely if they have it. That's the only way.
| How do you prevent videos from being recorded on the browser by the 'CoCoCut'? | How do you prevent videos from being recorded on the browser by the 'CoCoCut'?
| [
"it's simple: you can't. Not unless you coded something that detects if someone has cococut, and refuses to play the video entirely if they have it. that's the only way.\n"
] | [
0
] | [] | [] | [
"video",
"video_recording"
] | stackoverflow_0070210682_video_video_recording.txt |
Q:
KeyError: '...' keeps coming back
I keep getting the error and can't find where the problem lies. I'm trying to make it so I can choose whether I want the attack, the creature, or both printed, and what type the creature is ('easy', 'medium' or 'hard'); I want to store that in a variable.
creature = {'easy': ['chicken', 'slime', 'rat'],
'medium': ['wolf', 'cow', 'fox'],
'hard': ['baby dragon', 'demon', 'lesser demi god']
}
attack = {
'easy': ['pecks you', 'spits juice at', 'scratches'],
'medium': ['bites', 'charges at', 'bites'],
'hard': ['spits sparks of fire at', 'rends', 'smashes']
}
creature_easy = ['chicken', 'slime', 'rat']
cre = random.choice(creature_easy)
linked = dict(zip(creature[cre], attack[cre]))
cre_type = linked[0]
cre = random.choice(dict(creature))
print(linked[cre])
KeyError: 'rat'
Thanks in advance
A:
You might want something like:
chosen_level = 'easy'
game_data = dict(zip(creature[chosen_level], attack[chosen_level]))
import random
cre = random.choice(list(game_data))
att = game_data[cre]
print(cre, att)
Output: rat scratches
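And if you also want the difficulty level picked at random rather than hard-coded, a small sketch building on the creature and attack dictionaries from the question:
import random

level = random.choice(list(creature))                  # 'easy', 'medium' or 'hard'
game_data = dict(zip(creature[level], attack[level]))  # creature name -> attack
cre = random.choice(list(game_data))
print(level, cre, game_data[cre])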
| KeyError: '...' keeps coming back | I keep getting the error and can't find where the problem lies. I'm trying to make it so I can choose whether I want the attack, the creature, or both printed, and what type the creature is ('easy', 'medium' or 'hard'); I want to store that in a variable.
creature = {'easy': ['chicken', 'slime', 'rat'],
'medium': ['wolf', 'cow', 'fox'],
'hard': ['baby dragon', 'demon', 'lesser demi god']
}
attack = {
'easy': ['pecks you', 'spits juice at', 'scratches'],
'medium': ['bites', 'charges at', 'bites'],
'hard': ['spits sparks of fire at', 'rends', 'smashes']
}
creature_easy = ['chicken', 'slime', 'rat']
cre = random.choice(creature_easy)
linked = dict(zip(creature[cre], attack[cre]))
cre_type = linked[0]
cre = random.choice(dict(creature))
print(linked[cre])
KeyError: 'rat'
Thanks in advance
| [
"You might want something like:\nchosen_level = 'easy'\ngame_data = dict(zip(creature[chosen_level], attack[chosen_level]))\n\nimport random\ncre = random.choice(list(game_data))\natt = game_data[cre]\n\nprint(cre, att) \n\nOutput: rat scratches\n"
] | [
2
] | [] | [] | [
"dictionary",
"keyerror",
"python"
] | stackoverflow_0074671020_dictionary_keyerror_python.txt |
Q:
How can I group this data in PostgreSQL?
I have a table with millions of rows. I'm having trouble making reports on the data.
This is the table I have:
"channel_id" "datetime" "parameter" "raw"
10 "2022-12-02 16:16:00" "Günlük Debi" 3423.89
9 "2022-12-02 16:16:00" "KABIN NEM" 36.27
8 "2022-12-02 16:16:00" "KABIN SICAKLIK" 20.18
7 "2022-12-02 16:16:00" "AKM" 4.54
6 "2022-12-02 16:16:00" "KOi" 24.4
5 "2022-12-02 16:16:00" "AkisHizi" 0.59
4 "2022-12-02 16:16:00" "Sicaklik" 13.53
3 "2022-12-02 16:16:00" "Debi" 3.04
2 "2022-12-02 16:16:00" "CozunmusOksijen" 5.05
1 "2022-12-02 16:16:00" "Iletkenlik" 1125.64
0 "2022-12-02 16:16:00" "pH" 7.09
9 "2022-12-02 16:17:00" "KABIN NEM" 20.22
8 "2022-12-02 16:17:00" "KABIN SICAKLIK" 6.49
7 "2022-12-02 16:17:00" "AKM" 6.36
6 "2022-12-02 16:17:00" "KOi" 30.12
5 "2022-12-02 16:17:00" "AkisHizi" 0.82
4 "2022-12-02 16:17:00" "Sicaklik" 20.36
3 "2022-12-02 16:17:00" "Debi" 16.15
2 "2022-12-02 16:17:00" "CozunmusOksijen" 2.45
1 "2022-12-02 16:17:00" "Iletkenlik" 1570.75
0 "2022-12-02 16:17:00" "pH" 7.48
7 "2022-12-02 16:13:00" "AKM" 16.02
6 "2022-12-02 16:13:00" "KOi" 25.98
5 "2022-12-02 16:13:00" "AkisHizi" 0.83
4 "2022-12-02 16:13:00" "Sicaklik" 17.87
3 "2022-12-02 16:13:00" "Debi" 27.85
2 "2022-12-02 16:13:00" "CozunmusOksijen" 5.91
1 "2022-12-02 16:13:00" "Iletkenlik" 2221.36
0 "2022-12-02 16:13:00" "pH" 7.25
9 "2022-12-02 16:14:00" "KABIN NEM" 62.28
8 "2022-12-02 16:14:00" "KABIN SICAKLIK" 13.99
7 "2022-12-02 16:14:00" "AKM" 6.02
6 "2022-12-02 16:14:00" "KOi" 21.36
5 "2022-12-02 16:14:00" "AkisHizi" 0.56
4 "2022-12-02 16:14:00" "Sicaklik" 21.6
3 "2022-12-02 16:14:00" "Debi" 10.35
2 "2022-12-02 16:14:00" "CozunmusOksijen" 0.32
1 "2022-12-02 16:14:00" "Iletkenlik" 7325.54
0 "2022-12-02 16:14:00" "pH" 7.57
10 "2022-12-02 16:15:00" "Günlük Debi" 5363.51
9 "2022-12-02 16:15:00" "KABIN NEM" 34.65
8 "2022-12-02 16:15:00" "KABIN SICAKLIK" 20.25
7 "2022-12-02 16:15:00" "AKM" 6.52
6 "2022-12-02 16:15:00" "KOi" 12.71
5 "2022-12-02 16:15:00" "AkisHizi" 0.54
4 "2022-12-02 16:15:00" "Sicaklik" 14.41
3 "2022-12-02 16:15:00" "Debi" 5.09
2 "2022-12-02 16:15:00" "CozunmusOksijen" 5.86
1 "2022-12-02 16:15:00" "Iletkenlik" 1933.55
0 "2022-12-02 16:15:00" "pH" 7.24
7 "2022-12-02 16:13:00" "AKM" 38.64
6 "2022-12-02 16:13:00" "KOi" 26.17
5 "2022-12-02 16:13:00" "AkisHizi" 0.52
4 "2022-12-02 16:13:00" "Sicaklik" 12.46
3 "2022-12-02 16:13:00" "Debi" 1.32
2 "2022-12-02 16:13:00" "CozunmusOksijen" 9.06
1 "2022-12-02 16:13:00" "Iletkenlik" 2566.5
0 "2022-12-02 16:13:00" "pH" 7.33
9 "2022-12-02 16:14:00" "KABIN NEM" 21.71
8 "2022-12-02 16:14:00" "KABIN SICAKLIK" 16.5
7 "2022-12-02 16:14:00" "AKM" 12.56
6 "2022-12-02 16:14:00" "KOi" 18.64
5 "2022-12-02 16:14:00" "AkisHizi" 0.63
4 "2022-12-02 16:14:00" "Sicaklik" 12.56
3 "2022-12-02 16:14:00" "Debi" 4.84
2 "2022-12-02 16:14:00" "CozunmusOksijen" 2.15
1 "2022-12-02 16:14:00" "Iletkenlik" 621.05
0 "2022-12-02 16:14:00" "pH" 5.16
9 "2022-12-02 16:14:00" "KABIN NEM" 20.65
8 "2022-12-02 16:14:00" "KABIN SICAKLIK" 21.32
7 "2022-12-02 16:14:00" "AKM" 9.28
6 "2022-12-02 16:14:00" "KOi" 23.24
5 "2022-12-02 16:14:00" "AkisHizi" 0.63
4 "2022-12-02 16:14:00" "Sicaklik" 12.79
3 "2022-12-02 16:14:00" "Debi" 3.09
2 "2022-12-02 16:14:00" "CozunmusOksijen" 2.53
1 "2022-12-02 16:14:00" "Iletkenlik" 1473.54
0 "2022-12-02 16:14:00" "pH" 7.69
10 "2022-12-02 16:14:00" "Günlük Debi" 8453.81
9 "2022-12-02 16:14:00" "KABIN NEM" 32.88
8 "2022-12-02 16:14:00" "KABIN SICAKLIK" 24.88
7 "2022-12-02 16:14:00" "AKM" 6.16
6 "2022-12-02 16:14:00" "KOi" 51.93
5 "2022-12-02 16:14:00" "AkisHizi" 0.54
4 "2022-12-02 16:14:00" "Sicaklik" 17.91
3 "2022-12-02 16:14:00" "Debi" 9.3
2 "2022-12-02 16:14:00" "CozunmusOksijen" 2.69
1 "2022-12-02 16:14:00" "Iletkenlik" 2318.17
0 "2022-12-02 16:14:00" "pH" 7.27
10 "2022-12-02 16:14:00" "Günlük Debi" 3342.46
9 "2022-12-02 16:14:00" "KABIN NEM" 57.81
8 "2022-12-02 16:14:00" "KABIN SICAKLIK" 42.21
7 "2022-12-02 16:14:00" "AKM" 14.7
6 "2022-12-02 16:14:00" "KOi" 38.02
5 "2022-12-02 16:14:00" "AkisHizi" 0.61
4 "2022-12-02 16:14:00" "Sicaklik" 19.88
3 "2022-12-02 16:14:00" "Debi" 3.39
2 "2022-12-02 16:14:00" "CozunmusOksijen" 3.94
1 "2022-12-02 16:14:00" "Iletkenlik" 901.02
0 "2022-12-02 16:14:00" "pH" 7.33
The result I want to achieve is like this:
datetime values
2022-12-02 16:16:00 [{..PULSAR,Günlük Debi,3423.89},{...GENTEK...}...]
2022-12-02 16:17:00 [{..Pi,pH,7.09},{...GENTEK...}...]
.
.
.
I want to group data recorded on the same date in one row.
How can I achieve this? Is there a way?
I pulled the data by time period and then grouped it with a Python for loop, but this was a very slow process for large time intervals.
A:
Assume you meant to group data on the same value of datetime column, you can do this:
select datetime,
array_to_json(array_agg(json_build_object(parameter, raw))) as parameters
from a_table
group by 1
order by 1;
Result:
datetime |parameters |
-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2022-12-02 16:13:00.000|[{"AKM" : 16.02},{"KOi" : 25.98},{"AkisHizi" : 0.83},{"Sicaklik" : 17.87},{"Debi" : 27.85},{"CozunmusOksijen" : 5.91},{"Iletkenlik" : 2221.36},{"pH" : 7.25},{"AKM" : 38.64},{"KOi" : 26.17},{"AkisHizi" : 0.52},{"Sicaklik" : 12.46},{"Debi" : 1.32},{"Cozunmu|
2022-12-02 16:14:00.000|[{"KABIN NEM" : 62.28},{"KABIN SICAKLIK" : 13.99},{"AKM" : 6.02},{"KOi" : 21.36},{"AkisHizi" : 0.56},{"Sicaklik" : 21.6},{"Debi" : 10.35},{"CozunmusOksijen" : 0.32},{"Iletkenlik" : 7325.54},{"pH" : 7.57},{"KABIN NEM" : 21.71},{"KABIN SICAKLIK" : 16.5},{"A|
2022-12-02 16:15:00.000|[{"Günlük Debi" : 5363.51},{"KABIN NEM" : 34.65},{"KABIN SICAKLIK" : 20.25},{"AKM" : 6.52},{"KOi" : 12.71},{"AkisHizi" : 0.54},{"Sicaklik" : 14.41},{"Debi" : 5.09},{"CozunmusOksijen" : 5.86},{"Iletkenlik" : 1933.55},{"pH" : 7.24}] |
2022-12-02 16:16:00.000|[{"Günlük Debi" : 3423.89},{"KABIN NEM" : 36.27},{"KABIN SICAKLIK" : 20.18},{"AKM" : 4.54},{"KOi" : 24.4},{"AkisHizi" : 0.59},{"Sicaklik" : 13.53},{"Debi" : 3.04},{"CozunmusOksijen" : 5.05},{"Iletkenlik" : 1125.64},{"pH" : 7.09}] |
2022-12-02 16:17:00.000|[{"KABIN NEM" : 20.22},{"KABIN SICAKLIK" : 6.49},{"AKM" : 6.36},{"KOi" : 30.12},{"AkisHizi" : 0.82},{"Sicaklik" : 20.36},{"Debi" : 16.15},{"CozunmusOksijen" : 2.45},{"Iletkenlik" : 1570.75},{"pH" : 7.48}] |
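If you also want the channel recorded in each element (as the desired output suggests), a variant sketch keeping the same assumed table and column names:
select datetime,
       json_agg(json_build_object('channel_id', channel_id,
                                  'parameter', parameter,
                                  'raw', raw)
                order by channel_id) as parameters
  from a_table
 group by datetime
 order by datetime;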
| How can I group this data in PostgreSQL? | I have a table with millions of rows. I'm having trouble making reports on the data.
This is the table I have:
"channel_id" "datetime" "parameter" "raw"
10 "2022-12-02 16:16:00" "Günlük Debi" 3423.89
9 "2022-12-02 16:16:00" "KABIN NEM" 36.27
8 "2022-12-02 16:16:00" "KABIN SICAKLIK" 20.18
7 "2022-12-02 16:16:00" "AKM" 4.54
6 "2022-12-02 16:16:00" "KOi" 24.4
5 "2022-12-02 16:16:00" "AkisHizi" 0.59
4 "2022-12-02 16:16:00" "Sicaklik" 13.53
3 "2022-12-02 16:16:00" "Debi" 3.04
2 "2022-12-02 16:16:00" "CozunmusOksijen" 5.05
1 "2022-12-02 16:16:00" "Iletkenlik" 1125.64
0 "2022-12-02 16:16:00" "pH" 7.09
9 "2022-12-02 16:17:00" "KABIN NEM" 20.22
8 "2022-12-02 16:17:00" "KABIN SICAKLIK" 6.49
7 "2022-12-02 16:17:00" "AKM" 6.36
6 "2022-12-02 16:17:00" "KOi" 30.12
5 "2022-12-02 16:17:00" "AkisHizi" 0.82
4 "2022-12-02 16:17:00" "Sicaklik" 20.36
3 "2022-12-02 16:17:00" "Debi" 16.15
2 "2022-12-02 16:17:00" "CozunmusOksijen" 2.45
1 "2022-12-02 16:17:00" "Iletkenlik" 1570.75
0 "2022-12-02 16:17:00" "pH" 7.48
7 "2022-12-02 16:13:00" "AKM" 16.02
6 "2022-12-02 16:13:00" "KOi" 25.98
5 "2022-12-02 16:13:00" "AkisHizi" 0.83
4 "2022-12-02 16:13:00" "Sicaklik" 17.87
3 "2022-12-02 16:13:00" "Debi" 27.85
2 "2022-12-02 16:13:00" "CozunmusOksijen" 5.91
1 "2022-12-02 16:13:00" "Iletkenlik" 2221.36
0 "2022-12-02 16:13:00" "pH" 7.25
9 "2022-12-02 16:14:00" "KABIN NEM" 62.28
8 "2022-12-02 16:14:00" "KABIN SICAKLIK" 13.99
7 "2022-12-02 16:14:00" "AKM" 6.02
6 "2022-12-02 16:14:00" "KOi" 21.36
5 "2022-12-02 16:14:00" "AkisHizi" 0.56
4 "2022-12-02 16:14:00" "Sicaklik" 21.6
3 "2022-12-02 16:14:00" "Debi" 10.35
2 "2022-12-02 16:14:00" "CozunmusOksijen" 0.32
1 "2022-12-02 16:14:00" "Iletkenlik" 7325.54
0 "2022-12-02 16:14:00" "pH" 7.57
10 "2022-12-02 16:15:00" "Günlük Debi" 5363.51
9 "2022-12-02 16:15:00" "KABIN NEM" 34.65
8 "2022-12-02 16:15:00" "KABIN SICAKLIK" 20.25
7 "2022-12-02 16:15:00" "AKM" 6.52
6 "2022-12-02 16:15:00" "KOi" 12.71
5 "2022-12-02 16:15:00" "AkisHizi" 0.54
4 "2022-12-02 16:15:00" "Sicaklik" 14.41
3 "2022-12-02 16:15:00" "Debi" 5.09
2 "2022-12-02 16:15:00" "CozunmusOksijen" 5.86
1 "2022-12-02 16:15:00" "Iletkenlik" 1933.55
0 "2022-12-02 16:15:00" "pH" 7.24
7 "2022-12-02 16:13:00" "AKM" 38.64
6 "2022-12-02 16:13:00" "KOi" 26.17
5 "2022-12-02 16:13:00" "AkisHizi" 0.52
4 "2022-12-02 16:13:00" "Sicaklik" 12.46
3 "2022-12-02 16:13:00" "Debi" 1.32
2 "2022-12-02 16:13:00" "CozunmusOksijen" 9.06
1 "2022-12-02 16:13:00" "Iletkenlik" 2566.5
0 "2022-12-02 16:13:00" "pH" 7.33
9 "2022-12-02 16:14:00" "KABIN NEM" 21.71
8 "2022-12-02 16:14:00" "KABIN SICAKLIK" 16.5
7 "2022-12-02 16:14:00" "AKM" 12.56
6 "2022-12-02 16:14:00" "KOi" 18.64
5 "2022-12-02 16:14:00" "AkisHizi" 0.63
4 "2022-12-02 16:14:00" "Sicaklik" 12.56
3 "2022-12-02 16:14:00" "Debi" 4.84
2 "2022-12-02 16:14:00" "CozunmusOksijen" 2.15
1 "2022-12-02 16:14:00" "Iletkenlik" 621.05
0 "2022-12-02 16:14:00" "pH" 5.16
9 "2022-12-02 16:14:00" "KABIN NEM" 20.65
8 "2022-12-02 16:14:00" "KABIN SICAKLIK" 21.32
7 "2022-12-02 16:14:00" "AKM" 9.28
6 "2022-12-02 16:14:00" "KOi" 23.24
5 "2022-12-02 16:14:00" "AkisHizi" 0.63
4 "2022-12-02 16:14:00" "Sicaklik" 12.79
3 "2022-12-02 16:14:00" "Debi" 3.09
2 "2022-12-02 16:14:00" "CozunmusOksijen" 2.53
1 "2022-12-02 16:14:00" "Iletkenlik" 1473.54
0 "2022-12-02 16:14:00" "pH" 7.69
10 "2022-12-02 16:14:00" "Günlük Debi" 8453.81
9 "2022-12-02 16:14:00" "KABIN NEM" 32.88
8 "2022-12-02 16:14:00" "KABIN SICAKLIK" 24.88
7 "2022-12-02 16:14:00" "AKM" 6.16
6 "2022-12-02 16:14:00" "KOi" 51.93
5 "2022-12-02 16:14:00" "AkisHizi" 0.54
4 "2022-12-02 16:14:00" "Sicaklik" 17.91
3 "2022-12-02 16:14:00" "Debi" 9.3
2 "2022-12-02 16:14:00" "CozunmusOksijen" 2.69
1 "2022-12-02 16:14:00" "Iletkenlik" 2318.17
0 "2022-12-02 16:14:00" "pH" 7.27
10 "2022-12-02 16:14:00" "Günlük Debi" 3342.46
9 "2022-12-02 16:14:00" "KABIN NEM" 57.81
8 "2022-12-02 16:14:00" "KABIN SICAKLIK" 42.21
7 "2022-12-02 16:14:00" "AKM" 14.7
6 "2022-12-02 16:14:00" "KOi" 38.02
5 "2022-12-02 16:14:00" "AkisHizi" 0.61
4 "2022-12-02 16:14:00" "Sicaklik" 19.88
3 "2022-12-02 16:14:00" "Debi" 3.39
2 "2022-12-02 16:14:00" "CozunmusOksijen" 3.94
1 "2022-12-02 16:14:00" "Iletkenlik" 901.02
0 "2022-12-02 16:14:00" "pH" 7.33
The result I want to achieve is like this:
datetime values
2022-12-02 16:16:00 [{..PULSAR,Günlük Debi,3423.89},{...GENTEK...}...]
2022-12-02 16:17:00 [{..Pi,pH,7.09},{...GENTEK...}...]
.
.
.
I want to group data recorded on the same date in one row.
How can I achieve this? Is there a way?
I pulled the data by time period and then grouped it with a Python for loop, but this was a very slow process for large time intervals.
| [
"Assume you meant to group data on the same value of datetime column, you can do this:\nselect datetime,\n array_to_json(array_agg(json_build_object(parameter, raw))) as parameters\n from a_table\n group by 1\n order by 1;\n\n\nResult:\ndatetime |parameters |\n-----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+\n2022-12-02 16:13:00.000|[{\"AKM\" : 16.02},{\"KOi\" : 25.98},{\"AkisHizi\" : 0.83},{\"Sicaklik\" : 17.87},{\"Debi\" : 27.85},{\"CozunmusOksijen\" : 5.91},{\"Iletkenlik\" : 2221.36},{\"pH\" : 7.25},{\"AKM\" : 38.64},{\"KOi\" : 26.17},{\"AkisHizi\" : 0.52},{\"Sicaklik\" : 12.46},{\"Debi\" : 1.32},{\"Cozunmu|\n2022-12-02 16:14:00.000|[{\"KABIN NEM\" : 62.28},{\"KABIN SICAKLIK\" : 13.99},{\"AKM\" : 6.02},{\"KOi\" : 21.36},{\"AkisHizi\" : 0.56},{\"Sicaklik\" : 21.6},{\"Debi\" : 10.35},{\"CozunmusOksijen\" : 0.32},{\"Iletkenlik\" : 7325.54},{\"pH\" : 7.57},{\"KABIN NEM\" : 21.71},{\"KABIN SICAKLIK\" : 16.5},{\"A|\n2022-12-02 16:15:00.000|[{\"Günlük Debi\" : 5363.51},{\"KABIN NEM\" : 34.65},{\"KABIN SICAKLIK\" : 20.25},{\"AKM\" : 6.52},{\"KOi\" : 12.71},{\"AkisHizi\" : 0.54},{\"Sicaklik\" : 14.41},{\"Debi\" : 5.09},{\"CozunmusOksijen\" : 5.86},{\"Iletkenlik\" : 1933.55},{\"pH\" : 7.24}] |\n2022-12-02 16:16:00.000|[{\"Günlük Debi\" : 3423.89},{\"KABIN NEM\" : 36.27},{\"KABIN SICAKLIK\" : 20.18},{\"AKM\" : 4.54},{\"KOi\" : 24.4},{\"AkisHizi\" : 0.59},{\"Sicaklik\" : 13.53},{\"Debi\" : 3.04},{\"CozunmusOksijen\" : 5.05},{\"Iletkenlik\" : 1125.64},{\"pH\" : 7.09}] |\n2022-12-02 16:17:00.000|[{\"KABIN NEM\" : 20.22},{\"KABIN SICAKLIK\" : 6.49},{\"AKM\" : 6.36},{\"KOi\" : 30.12},{\"AkisHizi\" : 0.82},{\"Sicaklik\" : 20.36},{\"Debi\" : 16.15},{\"CozunmusOksijen\" : 2.45},{\"Iletkenlik\" : 1570.75},{\"pH\" : 7.48}] |\n\n"
] | [
1
] | [] | [] | [
"database",
"postgresql",
"query_optimization",
"sql"
] | stackoverflow_0074667358_database_postgresql_query_optimization_sql.txt |
Q:
Suggest Web Architecture Design for the following scenario
I have a web application built in ReactJS, using OKTA for user authentication. The backend is all Java RESTful APIs. A user serves our customer over a call using this application. When the user performs a high-risk action (editing some information for the customer), we intercept the request, process it through our rule engine, and send a notification (SMS) to the customer via a REST API (a vendor integration). The customer responds to that message (Yes - authorize the action, or No - I don't recognize requesting this action). Once the SMS response is received through our communication service, we want the web UI to automatically proceed based on that decision.
What type of communication do I have to establish between my web front-end and backend service? How do I block or wait for the response from the customer and let my web UI know the response is received?
A:
There are several ways you could establish communication between your web front-end and your backend service. One approach could be to use WebSockets, which allow for full-duplex communication between a client and a server. This would allow your web front-end to receive real-time updates from the backend service when a customer responds to the SMS message.
Another approach could be to use a server-side event source, such as Server-Sent Events (SSE) or long polling. These technologies allow the server to send events to the client without the need for the client to continuously poll the server for updates.
To block or wait for the response from the customer, you could use a combination of these technologies to send a request to the backend service and then wait for a response. The backend service could then send a notification to the web front-end when the response from the customer is received, at which point the web UI could proceed with the user's decision.
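As a rough front-end sketch only (the endpoint URL and message shape below are assumptions, not part of your system):
// Browser-side: wait for the backend to push the customer's decision.
const socket = new WebSocket("wss://api.example.com/decisions/12345"); // hypothetical endpoint

socket.onmessage = (event) => {
  const decision = JSON.parse(event.data); // e.g. { approved: true }
  if (decision.approved) {
    // proceed with the high-risk action in the UI
  } else {
    // show that the customer declined and abort
  }
};

socket.onerror = () => {
  // fall back to polling or show an error state
};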
It's worth noting that the specific implementation of this functionality will depend on the details of your application, such as the technologies and frameworks you're using, so it's best to consult with a developer with experience in building web applications to determine the best approach for your specific use case.
| Suggest Web Architecture Design for the following scenario | I have a web application built in ReactJS, using OKTA for user authentication. The backend is all Java RESTful APIs. A user serves our customer over a call using this application. When the user performs a high-risk action (editing some information for the customer), we intercept the request, process it through our rule engine, and send a notification (SMS) to the customer via a REST API (a vendor integration). The customer responds to that message (Yes - authorize the action, or No - I don't recognize requesting this action). Once the SMS response is received through our communication service, we want the web UI to automatically proceed based on that decision.
What type of communication do I have to establish between my web front-end and backend service? How do I block or wait for the response from the customer and let my web UI know the response is received?
| [
"There are several ways you could establish communication between your web front-end and your backend service. One approach could be to use WebSockets, which allow for full-duplex communication between a client and a server. This would allow your web front-end to receive real-time updates from the backend service when a customer responds to the SMS message.\nAnother approach could be to use a server-side event source, such as Server-Sent Events (SSE) or long polling. These technologies allow the server to send events to the client without the need for the client to continuously poll the server for updates.\nTo block or wait for the response from the customer, you could use a combination of these technologies to send a request to the backend service and then wait for a response. The backend service could then send a notification to the web front-end when the response from the customer is received, at which point the web UI could proceed with the user's decision.\nIt's worth noting that the specific implementation of this functionality will depend on the details of your application, such as the technologies and frameworks you're using, so it's best to consult with a developer with experience in building web applications to determine the best approach for your specific use case.\n"
] | [
0
] | [] | [] | [
"architecture",
"frontend",
"reactjs",
"rest",
"websocket"
] | stackoverflow_0074671098_architecture_frontend_reactjs_rest_websocket.txt |
Q:
Is there Amazon API for Amazon Stores?
I would like to know if there's an existing API for Stores on Amazon.com? A way for developers to get the Insights from the Stores.
Service: https://advertising.amazon.com/en/solutions/products/stores
I want to develop an integration with the service.
A:
From Amazon Ads API - Manage campaigns programmatically | Amazon Ads:
The Amazon Ads API provides a way to automate, scale, and optimize advertising. Campaign and performance data for Sponsored Products, Sponsored Brands, and Sponsored Display are available through the API, enabling programmatic access for campaign management and reporting. Amazon Attribution (beta) insights are also available through the Amazon Ads API. Amazon Attribution can help measure the full-funnel impact non-Amazon Ads media such as search ads, social ads, display ads, video ads, and email marketing. Insights throughout the shopping journey including clicks, detail page views, and purchases can be utilized to optimize campaign ROI.
| Is there Amazon API for Amazon Stores? | I would like to know if there's an existing API for Stores on Amazon.com? A way for developers to get the Insights from the Stores.
Service: https://advertising.amazon.com/en/solutions/products/stores
I want to develop an integration with the service.
| [
"From Amazon Ads API - Manage campaigns programmatically | Amazon Ads:\n\nThe Amazon Ads API provides a way to automate, scale, and optimize advertising. Campaign and performance data for Sponsored Products, Sponsored Brands, and Sponsored Display are available through the API, enabling programmatic access for campaign management and reporting. Amazon Attribution (beta) insights are also available through the Amazon Ads API. Amazon Attribution can help measure the full-funnel impact non-Amazon Ads media such as search ads, social ads, display ads, video ads, and email marketing. Insights throughout the shopping journey including clicks, detail page views, and purchases can be utilized to optimize campaign ROI.\n\n"
] | [
0
] | [] | [] | [
"amazon_advertising_api",
"amazon_web_services"
] | stackoverflow_0074666658_amazon_advertising_api_amazon_web_services.txt |
Q:
Is it possible to determine the location of the user from sim card details?
TikTok's privacy policy says that TikTok can also determine its users' location through SIM card info even if the GPS service is turned off. So I just want to know how that's possible and how they are doing it. I am writing an app for iPhone which requires users' locations to work properly.
A:
I'm not an iPhone dev, but the MCC (Mobile Country Code) can be returned from the network operator. It won't give you a precise location (it's country level), but it doesn't require GPS.
Another similar question on this can be found here. Hopefully this will point you in the right direction.
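For the iOS side, a rough sketch using CoreTelephony might look like the following (note these CTCarrier values are deprecated and may come back nil or as placeholders on recent iOS versions):
import CoreTelephony

let networkInfo = CTTelephonyNetworkInfo()
// One entry per SIM/service; mobileCountryCode is the country-level MCC.
if let carriers = networkInfo.serviceSubscriberCellularProviders {
    for (service, carrier) in carriers {
        print(service, carrier.mobileCountryCode ?? "unknown", carrier.isoCountryCode ?? "unknown")
    }
}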
| Is it possible to determine the location of the user from sim card details? | TikTok's privacy policy says that TikTok can also determine its users' location through SIM card info even if the GPS service is turned off. So I just want to know how that's possible and how they are doing it. I am writing an app for iPhone which requires users' locations to work properly.
| [
"I'm not an iPhone dev, but the MCC (Mobile Country Code) can be returned from the network operator. It won't give you a precise location (it's country level), but it doesn't require GPS.\nAnother similar question on this can be found here. Hopefully this will point you in the right direction.\n"
] | [
0
] | [] | [] | [
"geolocation",
"gps",
"ios"
] | stackoverflow_0074670919_geolocation_gps_ios.txt |
Q:
Android Studio and JDK17
I've been trying to use this API in Android Studio via Gradle, but when I build my project it throws Unsupported class file major version 61.
From what I've researched it's because I use JDK version 17 and Gradle does not yet support it, but in the API it's stated that it requires Java 17. Is there any way I could still use this API in Android Studio?
Sorry if this is a newbie question but I'm fairly new to Android.
A:
As per the docs, JDK 17 isn't supported yet in Android Studio. Actually, JDK 11 was supported starting from version 4.2 which was released in April 2021.
A:
JDK 17 is now supported by Android
A:
Perhaps you can try Android Studio Flamingo | 2022.2.1. It's still in Canary at time of writing but JDK 17 is bundled in to it as the default according to the release notes.
Starting from Android Studio Flamingo Canary 3, the Studio IDE is bundled with JDK 17. If Android Studio is configured to use the embedded JDK, new projects will use the latest stable version of the Android Gradle plugin and JDK 17. However, existing projects might break, and you might have to manually set the JDK to a compatible version.
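If you do need to set the JDK for Gradle manually, one hedged option is the org.gradle.java.home property in gradle.properties (the path below is an assumption; point it at a JDK your Gradle version supports):
# gradle.properties
org.gradle.java.home=/path/to/compatible-jdk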
| Android Studio and JDK17 | I've been trying to use this API in Android Studio via Gradle, but when I build my project it throws Unsupported class file major version 61.
From what I've researched it's because I use JDK version 17 and Gradle does not yet support it, but in the API it's stated that it requires Java 17. Is there any way I could still use this API in Android Studio?
Sorry if this is a newbie question but I'm fairly new to Android.
| [
"As per the docs, JDK 17 isn't supported yet in Android Studio. Actually, JDK 11 was supported starting from version 4.2 which was released in April 2021.\n",
"JDK 17 is now supported by Android\n",
"Perhaps you can try Android Studio Flamingo | 2022.2.1. It's still in Canary at time of writing but JDK 17 is bundled in to it as the default according to the release notes.\n\nStarting from Android Studio Flamingo Canary 3, the Studio IDE is bundled with JDK 17. If Android Studio is configured to use the embedded JDK, new projects will use the latest stable version of the Android Gradle plugin and JDK 17. However, existing projects might break, and you might have to manually set the JDK to a compatible version.\n\n"
] | [
2,
1,
0
] | [] | [] | [
"android",
"gradle",
"java"
] | stackoverflow_0069646277_android_gradle_java.txt |
Q:
How do I fill a dictionary with indices in a for loop?
I have a transposed Dataframe tr:
            7128        8719        14051       14636
JDUTC_0     2451957.36  2452149.36  2457243.98  2452531.89
JDUTC_1     2451957.37  2452149.36  2457243.99  2452531.90
JDUTC_2     2451957.37  2452149.36  2457244.00  2452531.91
JDUTC_3     NaN         2452149.36  NaN         NaN
JDUTC_4     NaN         2452149.36  NaN         NaN
JDUTC_5     NaN         2452149.36  NaN         NaN
JDUTC_6     1.23        2452149.37  NaN         NaN
JDUTC_7     NaN         NaN         NaN         NaN
JDUTC_8     NaN         NaN         NaN         NaN
JDUTC_9     NaN         NaN         NaN         NaN
And I create dict 'a' with this block of code:
a = {}
b=[]
for _, contents in tr.items():
b.clear()
for ind, val in enumerate(contents):
if np.isnan(val):
b.append(ind)
continue
else:
pass
print(_)
print(b)
a[_] = b
print(a)
Which gives me this output:
7128
[3, 4, 5, 7, 8, 9]
{7128: [3, 4, 5, 7, 8, 9]}
8719
[7, 8, 9]
{7128: [7, 8, 9], 8719: [7, 8, 9]}
14051
[3, 4, 5, 6, 7, 8, 9]
{7128: [3, 4, 5, 6, 7, 8, 9], 8719: [3, 4, 5, 6, 7, 8, 9], 14051: [3, 4, 5, 6, 7, 8, 9]}
14636
[3, 4, 5, 6, 7, 8, 9]
{7128: [3, 4, 5, 6, 7, 8, 9], 8719: [3, 4, 5, 6, 7, 8, 9], 14051: [3, 4, 5, 6, 7, 8, 9],
14636: [3, 4, 5, 6, 7, 8, 9]}
What I expect dict 'a' to look like is this:
{7128: [3, 4, 5, 7, 8, 9]
8719: [7, 8, 9]
14051: [3, 4, 5, 6, 7, 8, 9]
14636: [3, 4, 5, 6, 7, 8, 9]}
What am I doing wrong? Why is a[_] = b overwriting all the previous keys when print(_) verifies that _ is always the next column label?
A:
The problem is you are assigning same list to all keys.
a = {}
b=[] # < --- You create one Array/list 'b'
for _, contents in tr.items():
b.clear()
for ind, val in enumerate(contents):
if np.isnan(val):
b.append(ind)
continue
else:
pass
print(_)
print(b)
a[_] = b # <-- assign same array to all keys.
print(a)
Check my comment on the code above.
b.clear()
This line just clears the same array, it does not create a new array.
To run the code as you intended, create a new array/list in side the loop.
a = {}
for _, contents in tr.items():
b = [] # <--- new array/list is created
for ind, val in enumerate(contents):
if np.isnan(val):
b.append(ind)
continue
else:
pass
print(_)
print(b)
a[_] = b # <--- Now you assign the new array 'b' to a[_]
print(a)
A:
With the correct name convention, I would change your code
after:
import numpy as np
import pandas as pd
import sys
if sys.version_info[0] < 3:
from StringIO import StringIO
else:
from io import StringIO
s = StringIO("""idx 7128 8719 14051 14636
JDUTC_0 2451957.36 2452149.36 2457243.98 2452531.89
JDUTC_1 2451957.37 2452149.36 2457243.99 2452531.90
JDUTC_2 2451957.37 2452149.36 2457244.00 2452531.91
JDUTC_3 NaN 2452149.36 NaN NaN
JDUTC_4 NaN 2452149.36 NaN NaN
JDUTC_5 NaN 2452149.36 NaN NaN
JDUTC_6 1.23 2452149.37 NaN NaN
JDUTC_7 NaN NaN NaN NaN
JDUTC_8 NaN NaN NaN NaN
JDUTC_9 NaN NaN NaN NaN""")
tr = pd.read_csv(s, sep="\t", index_col=0)
(people should give minimal working code - but often forget to give e.g. the code to build the data frame etc. and the imports)
to:
a = {}
b = []
for name, values in tr.items():
b.clear() # this is problematic as you know
for ind, val in enumerate(values):
if np.isnan(val):
b.append(ind)
continue
else:
pass
a[name] = b
continue and pass are not necessary - they just say "go on" with the loop.
In Python, you are not forced to give the else branch:
for name, values in tr.items():
b.clear() # This is still problematic at this state.
for ind, val in enumerate(values):
if np.isnan(val):
b.append(ind)
a[name] = b
Such collection of data using for-loops are better done with list-comprehensions:
a = {}
for name, values in tr.items():
b = [ind for ind, val in enumerate(values) if np.isnan(val)]
a[name] = b
# now the result is already correct!
And finally, you can even build list-comprehensions for dictionaries -
making this entire code a one-liner - but a readable one - when one is familiar with list comprehensions:
a = {name: [i for i, x in enumerate(vals) if np.isnan(x)] for name, vals in tr.items()}
You can see the result:
a
# which returns:
{'7128': [3, 4, 5, 7, 8, 9],
'8719': [7, 8, 9],
'14051': [3, 4, 5, 6, 7, 8, 9],
'14636': [3, 4, 5, 6, 7, 8, 9]}
List-comprehensions are going into the direction of Functional Programming (FP).
Which exactly deals with the problem of not to apply mutation (like the b.append() or b.clear() methods - because - as you have seen: your case is a demonstration of how easily a bug is generated when using mutation. - and would contribute to the discussion - why FP - while it at the first sight looks brain-unfriendly - is
actually the more brain-friendly way to program.
List comprehensions are the Pythonic form of "map" - and if you use a "if" inside list comprehensions - this is the Pythonic equivalent to "filter" which FP people know like a second brain for breathing.
| How do I fill a dictionary with indices in a for loop? | I have a transposed Dataframe tr:
            7128        8719        14051       14636
JDUTC_0     2451957.36  2452149.36  2457243.98  2452531.89
JDUTC_1     2451957.37  2452149.36  2457243.99  2452531.90
JDUTC_2     2451957.37  2452149.36  2457244.00  2452531.91
JDUTC_3     NaN         2452149.36  NaN         NaN
JDUTC_4     NaN         2452149.36  NaN         NaN
JDUTC_5     NaN         2452149.36  NaN         NaN
JDUTC_6     1.23        2452149.37  NaN         NaN
JDUTC_7     NaN         NaN         NaN         NaN
JDUTC_8     NaN         NaN         NaN         NaN
JDUTC_9     NaN         NaN         NaN         NaN
And I create dict 'a' with this block of code:
a = {}
b=[]
for _, contents in tr.items():
b.clear()
for ind, val in enumerate(contents):
if np.isnan(val):
b.append(ind)
continue
else:
pass
print(_)
print(b)
a[_] = b
print(a)
Which gives me this output:
7128
[3, 4, 5, 7, 8, 9]
{7128: [3, 4, 5, 7, 8, 9]}
8719
[7, 8, 9]
{7128: [7, 8, 9], 8719: [7, 8, 9]}
14051
[3, 4, 5, 6, 7, 8, 9]
{7128: [3, 4, 5, 6, 7, 8, 9], 8719: [3, 4, 5, 6, 7, 8, 9], 14051: [3, 4, 5, 6, 7, 8, 9]}
14636
[3, 4, 5, 6, 7, 8, 9]
{7128: [3, 4, 5, 6, 7, 8, 9], 8719: [3, 4, 5, 6, 7, 8, 9], 14051: [3, 4, 5, 6, 7, 8, 9],
14636: [3, 4, 5, 6, 7, 8, 9]}
What I expect dict 'a' to look like is this:
{7128: [3, 4, 5, 7, 8, 9]
8719: [7, 8, 9]
14051: [3, 4, 5, 6, 7, 8, 9]
14636: [3, 4, 5, 6, 7, 8, 9]}
What am I doing wrong? Why is a[_] = b overwriting all the previous keys when print(_) verifies that _ is always the next column label?
| [
"The problem is you are assigning same list to all keys.\na = {}\nb=[] # < --- You create one Array/list 'b'\nfor _, contents in tr.items():\n b.clear()\n for ind, val in enumerate(contents):\n if np.isnan(val):\n b.append(ind)\n continue\n else:\n pass\n print(_)\n print(b)\n a[_] = b # <-- assign same array to all keys.\n print(a)\n\nCheck my comment on the code above.\nb.clear()\n\nThis line just clears the same array, it does not create a new array.\nTo run the code as you intended, create a new array/list in side the loop.\na = {}\nfor _, contents in tr.items():\n b = [] # <--- new array/list is created\n for ind, val in enumerate(contents):\n if np.isnan(val):\n b.append(ind)\n continue\n else:\n pass\n print(_)\n print(b)\n a[_] = b # <--- Now you assign the new array 'b' to a[_]\n print(a)\n\n",
"With the correct name convention, I would change your code\nafter:\nimport numpy as np\nimport pandas as pd\n\nimport sys\nif sys.version_info[0] < 3:\n from StringIO import StringIO\nelse:\n from io import StringIO\n\ns = StringIO(\"\"\"idx 7128 8719 14051 14636\nJDUTC_0 2451957.36 2452149.36 2457243.98 2452531.89\nJDUTC_1 2451957.37 2452149.36 2457243.99 2452531.90\nJDUTC_2 2451957.37 2452149.36 2457244.00 2452531.91\nJDUTC_3 NaN 2452149.36 NaN NaN\nJDUTC_4 NaN 2452149.36 NaN NaN\nJDUTC_5 NaN 2452149.36 NaN NaN\nJDUTC_6 1.23 2452149.37 NaN NaN\nJDUTC_7 NaN NaN NaN NaN\nJDUTC_8 NaN NaN NaN NaN\nJDUTC_9 NaN NaN NaN NaN\"\"\")\n\ntr = pd.read_csv(s, sep=\"\\t\", index_col=0)\n\n(people should give minimal working code - but often forget to give e.g. the code to build the data frame etc. and the imports)\nto:\n\n\na = {}\nb = []\nfor name, values in tr.items():\n b.clear() # this is problematic as you know\n for ind, val in enumerate(values):\n if np.isnan(val):\n b.append(ind)\n continue\n else:\n pass\n a[name] = b\n\ncontinue and pass are not necessary - they just say \"go on\" with the loop.\nIn Python, you are not forced to give the else branch:\nfor name, values in tr.items():\n b.clear() # This is still problematic at this state.\n for ind, val in enumerate(values):\n if np.isnan(val):\n b.append(ind)\n a[name] = b\n\nSuch collection of data using for-loops are better done with list-comprehensions:\na = {}\nfor name, values in tr.items():\n b = [ind for ind, val in enumerate(values) if np.isnan(val)]\n a[name] = b\n# now the result is already correct!\n\nAnd finally, you can even build list-comprehensions for dictionaries -\nmaking this entire code a one-liner - but a readable one - when one is familiar with list comprehensions:\na = {name: [i for i, x in enumerate(vals) if np.isnan(x)] for name, vals in tr.items()}\n\nYou can see the result:\na\n# which returns:\n{'7128': [3, 4, 5, 7, 8, 9],\n '8719': [7, 8, 9],\n '14051': [3, 4, 5, 6, 7, 8, 9],\n '14636': [3, 4, 5, 6, 7, 8, 9]}\n\nList-comprehensions are going into the direction of Functional Programming (FP).\nWhich exactly deals with the problem of not to apply mutation (like the b.append() or b.clear() methods - because - as you have seen: your case is a demonstration of how easily a bug is generated when using mutation. - and would contribute to the discussion - why FP - while it at the first sight looks brain-unfriendly - is\nactually the more brain-friendly way to program.\nList comprehensions are the Pythonic form of \"map\" - and if you use a \"if\" inside list comprehensions - this is the Pythonic equivalent to \"filter\" which FP people know like a second brain for breathing.\n"
] | [
1,
1
] | [] | [] | [
"dictionary",
"for_loop",
"python"
] | stackoverflow_0074669044_dictionary_for_loop_python.txt |
Q:
How to mock *exec.Cmd / exec.Command()?
I need to mock exec.Command().
I can mock it using:
var rName string
var rArgs []string
mockExecCommand := func(name string, arg ...string) *exec.Cmd {
rName = name
rArgs = arg
return nil
}
However, this won't work in the actual code, as it complains about a nil pointer, since Run() is called on the returned exec.Cmd.
I tried to mock it like:
type mock exec.Cmd
func (m *mock) Run() error {
return nil
}
var rName string
var rArgs []string
mockExecCommand := func(name string, arg ...string) *exec.Cmd {
rName = name
rArgs = arg
m := mock{}
return &m
}
But it complains: cannot use &m (value of type *mock) as *exec.Cmd value in return statement (compiler error: IncompatibleAssign).
Is there any way to approach this? Is there a better way to mock exec.Command()?
The mocked function works if I return a "mock" command, although I'd prefer to control the Run() function too:
var rName string
var rArgs []string
mockExecCommand := func(name string, arg ...string) *exec.Cmd {
rName = name
rArgs = arg
return exec.Command("echo")
}
A:
While hijacking the test executable to run a specific function works, it would be more straightforward to just use regular dependency injection. No magic required.
Design an interface (e.g. CommandExecutor) that can run commands, then take one of those as your input to whatever function needs to run a command. You can then provide a mock implementation that satisfies the interface (hand-crafted, or generated using your tool of choice, like GoMock) during your tests. Provide the real implementation (which calls into the exec package) for your production code. Your mock implementation can even make assertions on the arguments so that you know the command is being "executed" correctly.
A:
There is actually a way to do this. All credit goes to this article. Check it out for an explanation on what's going on below:
func fakeExecCommand(command string, args...string) *exec.Cmd {
cs := []string{"-test.run=TestHelperProcess", "--", command}
cs = append(cs, args...)
cmd := exec.Command(os.Args[0], cs...)
cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1"}
return cmd
}
func TestHelperProcess(t *testing.T){
if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
return
}
os.Exit(0)
}
A:
The best way that I know of in go is to use polymorphism. You were on the right track. A detailed explanation is at https://github.com/schollii/go-test-mock-exec-command, which I created because when I searched for how to mock os/exec, all I could find was the env variable technique mentioned in another answer. That approach is absolutely not necessary, and as I mention in the readme of the git repo I linked to, all it takes is a bit of polymorphism.
The summary is basically this:
Create an interface class for exec.Cmd that has only the necessary methods to be used by your application (or module) code
Create a struct that implements that interface, eg it can just mention exec.Cmd
Create a package-level var (exported) that points to a function that returns the struct from step 2
Make your application code use that package-level var
Make your test create a new struct that implements that interface, but contains only outputs and exit codes, and make the test replace that package-level var by an instance of this new struct
It will look something like this in the application code:
type IShellCommand interface {
Run() error
}
type execShellCommand struct {
*exec.Cmd
}
func newExecShellCommander(name string, arg ...string) IShellCommand {
execCmd := exec.Command(name, arg...)
return execShellCommand{Cmd: execCmd}
}
// override this in tests to mock the git shell command
var shellCommander = newExecShellCommander
func myFuncThatUsesExecCmd() {
cmd := shellCommander("git", "rev-parse", "--abbrev-ref", "HEAD")
err := cmd.Run()
if err != nil {
// handle error
} else {
// process & handle output
}
}
On the test side it will look something like this:
type myShellCommand struct {
RunnerFunc func() error
}
func (sc myShellCommand) Run() error {
return sc.RunnerFunc()
}
func Test_myFuncThatUsesExecCmd(t *testing.T) {
// temporarily swap the shell commander
curShellCommander := shellCommander
defer func() { shellCommander = curShellCommander }()
shellCommander = func(name string, arg ...string) IShellCommand {
fmt.Printf("exec.Command() for %v called with %v and %v\n", t.Name(), name, arg)
return myShellCommand{
RunnerFunc: func() error {
return nil
},
}
}
// now that shellCommander is mocked, call the function that we want to test:
myFuncThatUsesExecCmd()
// do checks
}
| How to mock *exec.Cmd / exec.Command()? | I need to mock exec.Command().
I can mock it using:
var rName string
var rArgs []string
mockExecCommand := func(name string, arg ...string) *exec.Cmd {
rName = name
rArgs = arg
return nil
}
However, this won't work in the actual code, as it complains about a nil pointer, since Run() is called on the returned exec.Cmd.
I tried to mock it like:
type mock exec.Cmd
func (m *mock) Run() error {
return nil
}
var rName string
var rArgs []string
mockExecCommand := func(name string, arg ...string) *exec.Cmd {
rName = name
rArgs = arg
m := mock{}
return &m
}
But it complains: cannot use &m (value of type *mock) as *exec.Cmd value in return statement (compiler error: IncompatibleAssign).
Is there any way to approach this? Is there a better way to mock exec.Command()?
The mocked function works if I return a "mock" command, although I'd prefer to control the Run() function too:
var rName string
var rArgs []string
mockExecCommand := func(name string, arg ...string) *exec.Cmd {
rName = name
rArgs = arg
return exec.Command("echo")
}
| [
"While hijacking the test executable to run a specific function works, it would be more straightforward to just use regular dependency injection. No magic required.\nDesign an interface (e.g. CommandExecutor) that can run commands, then take one of those as your input to whatever function needs to run a command. You can then provide a mock implementation that satisfies the interface (hand-crafted, or generated using your tool of choice, like GoMock) during your tests. Provide the real implementation (which calls into the exec package) for your production code. Your mock implementation can even make assertions on the arguments so that you know the command is being \"executed\" correctly.\n",
"There is actually a way to do this. All credit goes to this article. Check it out for an explanation on what's going on below:\nfunc fakeExecCommand(command string, args...string) *exec.Cmd {\n cs := []string{\"-test.run=TestHelperProcess\", \"--\", command}\n cs = append(cs, args...)\n cmd := exec.Command(os.Args[0], cs...)\n cmd.Env = []string{\"GO_WANT_HELPER_PROCESS=1\"}\n return cmd\n}\n\nfunc TestHelperProcess(t *testing.T){\n if os.Getenv(\"GO_WANT_HELPER_PROCESS\") != \"1\" {\n return\n }\n os.Exit(0)\n}\n\n",
"The best way that I know of in go is to use polymorphism. You were on the right track. A detailed explanation is at https://github.com/schollii/go-test-mock-exec-command, which I created because when I searched for how to mock os/exec, all I could find was the env variable technique mentioned in another answer. That approach is absolutely not necessary, and as I mention in the readme of the git repo I linked to, all it takes is a bit of polymorphism.\nThe summary is basically this:\n\nCreate an interface class for exec.Cmd that has only the necessary methods to be used by your application (or module) code\nCreate a struct that implements that interface, eg it can just mention exec.Cmd\nCreate a package-level var (exported) that points to a function that returns the struct from step 2\nMake your application code use that package-level var\nMake your test create a new struct that implements that interface, but contains only outputs and exit codes, and make the test replace that package-level var by an instance of this new struct\n\nIt will look something like this in the application code:\ntype IShellCommand interface {\n Run() error\n}\n\ntype execShellCommand struct {\n *exec.Cmd\n}\n\nfunc newExecShellCommander(name string, arg ...string) IShellCommand {\n execCmd := exec.Command(name, arg...)\n return execShellCommand{Cmd: execCmd}\n}\n\n// override this in tests to mock the git shell command\nvar shellCommander = newExecShellCommander\n\nfunc myFuncThatUsesExecCmd() {\n cmd := shellCommander(\"git\", \"rev-parse\", \"--abbrev-ref\", \"HEAD\")\n err := cmd.Run()\n if err != nil {\n // handle error\n } else {\n // process & handle output\n }\n}\n\nOn the test side it will look something like this:\ntype myShellCommand struct {\n RunnerFunc func() error\n}\n\nfunc (sc myShellCommand) Run() error {\n return sc.RunnerFunc()\n}\n\nfunc Test_myFuncThatUsesExecCmd(t *testing.T) {\n // temporarily swap the shell commander\n curShellCommander := shellCommander\n defer func() { shellCommander = curShellCommander }()\n\n shellCommander = func(name string, arg ...string) IShellCommand {\n fmt.Printf(\"exec.Command() for %v called with %v and %v\\n\", t.Name(), name, arg)\n return myShellCommand{\n RunnerFunc: func() error {\n return nil\n },\n }\n }\n\n // now that shellCommander is mocked, call the function that we want to test:\n myFuncThatUsesExecCmd()\n // do checks\n }\n\n"
] | [
1,
0,
0
] | [
"\nHow to mock *exec.Cmd / exec.Command()?\n\nYou cannot. Come up with a non mock-based testing strategy.\n"
] | [
-2
] | [
"go",
"mocking"
] | stackoverflow_0071102318_go_mocking.txt |
Q:
Get HTML5 localStorage keys
I'm just wondering how to get all key values in localStorage.
I have tried to retrieve the values with a simple JavaScript loop
for (var i=1; i <= localStorage.length; i++) {
alert(localStorage.getItem(i))
}
But it works only if the keys are progressive numbers, starting at 1.
How do I get all the keys, in order to display all available data?
A:
for (var key in localStorage){
console.log(key)
}
EDIT: this answer is getting a lot of upvotes, so I guess it's a common question. I feel like I owe it to anyone who might stumble on my answer and think that it's "right" just because it was accepted to make an update. Truth is, the example above isn't really the right way to do this. The best and safest way is to do it like this:
for ( var i = 0, len = localStorage.length; i < len; ++i ) {
console.log( localStorage.getItem( localStorage.key( i ) ) );
}
A:
in ES2017 you can use:
Object.entries(localStorage)
A:
I like to create an easily visible object out of it like this.
Object.keys(localStorage).reduce(function(obj, str) {
obj[str] = localStorage.getItem(str);
return obj
}, {});
I do a similar thing with cookies as well.
document.cookie.split(';').reduce(function(obj, str){
var s = str.split('=');
obj[s[0].trim()] = s[1];
return obj;
}, {});
A:
function listAllItems(){
for (i=0; i<localStorage.length; i++)
{
key = localStorage.key(i);
alert(localStorage.getItem(key));
}
}
A:
You can use the localStorage.key(index) function to return the string representation, where index is the nth object you want to retrieve.
A:
You can get keys and values like this:
for (let [key, value] of Object.entries(localStorage)) {
console.log(`${key}: ${value}`);
}
A:
If the browser supports HTML5 LocalStorage it should also implement Array.prototype.map, enabling this:
Array.apply(0, new Array(localStorage.length)).map(function (o, i) {
return localStorage.key(i);
})
A:
Since the question mentioned finding the keys, I figured I'd mention that to show every key and value pair, you could do it like this (based on Kevin's answer):
for ( var i = 0, len = localStorage.length; i < len; ++i ) {
console.log( localStorage.key( i ) + ": " + localStorage.getItem( localStorage.key( i ) ) );
}
This will log the data in the format "key: value"
(Kevin: feel free to just take this info into the your answer if you want!)
A:
I agree with Kevin that he has the best answer, but sometimes you have different keys in your local storage with the same values (for example, you want your public users to see how many times they have added items to their baskets) and you need to show them that count as well; in that case you can use this:
var set = localStorage.setItem('key', 'value');
var element = document.getElementById('tagId');
for ( var i = 0, len = localStorage.length; i < len; ++i ) {
element.innerHTML = localStorage.getItem(localStorage.key(i)) + localStorage.key(i).length;
}
A:
This will print all the keys and values on localStorage:
ES6:
for (let i=0; i< localStorage.length; i++) {
let key = localStorage.key(i);
let value = localStorage[key];
console.log(`localStorage ${key}: ${value}`);
}
A:
You can create an object even more simply by using Object.assign:
// returns an object of all keys/values in localStorage
Object.assign({}, window.localStorage);
You can read more about it here at MDN.
The caniuse page says support is currently at about 95% of all browser share (IE being the odd one out-- what a surprise).
A:
Just type localStorage to developer console. It logs localStorage keys nicely formatted.
Sometimes the easiest answer is the best one :)
A:
For anyone looking for a jQuery solution, here is a quick example.
$.each(localStorage, function(key, str){
console.log(key + ": " + str);
});
A:
For anyone searching this trying to find localStorage keys...
The answer is simply:
Object.keys(localStorage);
| Get HTML5 localStorage keys | I'm just wondering how to get all key values in localStorage.
I have tried to retrieve the values with a simple JavaScript loop
for (var i=1; i <= localStorage.length; i++) {
alert(localStorage.getItem(i))
}
But it works only if the keys are progressive numbers, starting at 1.
How do I get all the keys, in order to display all available data?
| [
"for (var key in localStorage){\n console.log(key)\n}\n\nEDIT: this answer is getting a lot of upvotes, so I guess it's a common question. I feel like I owe it to anyone who might stumble on my answer and think that it's \"right\" just because it was accepted to make an update. Truth is, the example above isn't really the right way to do this. The best and safest way is to do it like this:\nfor ( var i = 0, len = localStorage.length; i < len; ++i ) {\n console.log( localStorage.getItem( localStorage.key( i ) ) );\n}\n\n",
"in ES2017 you can use:\nObject.entries(localStorage)\n\n",
"I like to create an easily visible object out of it like this.\nObject.keys(localStorage).reduce(function(obj, str) { \n obj[str] = localStorage.getItem(str); \n return obj\n}, {});\n\nI do a similar thing with cookies as well.\ndocument.cookie.split(';').reduce(function(obj, str){ \n var s = str.split('='); \n obj[s[0].trim()] = s[1];\n return obj;\n}, {});\n\n",
"function listAllItems(){ \n for (i=0; i<localStorage.length; i++) \n { \n key = localStorage.key(i); \n alert(localStorage.getItem(key));\n } \n}\n\n",
"You can use the localStorage.key(index) function to return the string representation, where index is the nth object you want to retrieve.\n",
"You can get keys and values like this:\nfor (let [key, value] of Object.entries(localStorage)) {\n console.log(`${key}: ${value}`);\n}\n\n",
"If the browser supports HTML5 LocalStorage it should also implement Array.prototype.map, enabling this:\nArray.apply(0, new Array(localStorage.length)).map(function (o, i) {\n return localStorage.key(i);\n})\n\n",
"Since the question mentioned finding the keys, I figured I'd mention that to show every key and value pair, you could do it like this (based on Kevin's answer):\nfor ( var i = 0, len = localStorage.length; i < len; ++i ) {\n console.log( localStorage.key( i ) + \": \" + localStorage.getItem( localStorage.key( i ) ) );\n}\n\nThis will log the data in the format \"key: value\"\n(Kevin: feel free to just take this info into the your answer if you want!)\n",
"I agree with Kevin he has the best answer but sometimes when you have different keys in your local storage with the same values for example you want your public users to see how many times they have added their items into their baskets you need to show them the number of times as well then you ca use this:\nvar set = localStorage.setItem('key', 'value');\nvar element = document.getElementById('tagId');\n\nfor ( var i = 0, len = localStorage.length; i < len; ++i ) {\n element.innerHTML = localStorage.getItem(localStorage.key(i)) + localStorage.key(i).length;\n}\n\n",
"This will print all the keys and values on localStorage:\nES6:\nfor (let i=0; i< localStorage.length; i++) {\n let key = localStorage.key(i);\n let value = localStorage[key];\n console.log(`localStorage ${key}: ${value}`);\n}\n\n",
"You can create an object even more simply by using Object.assign:\n// returns an object of all keys/values in localStorage\nObject.assign({}, window.localStorage);\n\nYou can read more about it here at MDN.\nThe caniuse page says support is currently at about 95% of all browser share (IE being the odd one out-- what a surprise).\n",
"Just type localStorage to developer console. It logs localStorage keys nicely formatted.\nSometimes the easiest answer is the best one :)\n",
"For anyone looking for a jQuery solution, here is a quick example.\n$.each(localStorage, function(key, str){\n console.log(key + \": \" + str);\n});\n\n",
"For anyone searching this trying to find localStorage keys...\nThe answer is simply:\nObject.keys(localStorage);\n\n"
] | [
331,
76,
33,
17,
10,
9,
8,
7,
2,
2,
1,
0,
0,
0
] | [
"For those mentioning using Object.keys(localStorage)... don't because it won't work in Firefox (ironically because Firefox is faithful to the spec). Consider this:\nlocalStorage.setItem(\"key\", \"value1\")\nlocalStorage.setItem(\"key2\", \"value2\")\nlocalStorage.setItem(\"getItem\", \"value3\")\nlocalStorage.setItem(\"setItem\", \"value4\")\n\nBecause key, getItem and setItem are prototypal methods Object.keys(localStorage) will only return [\"key2\"].\nYou are best to do something like this:\nlet t = [];\nfor (let i = 0; i < localStorage.length; i++) {\n t.push(localStorage.key(i));\n}\n\n"
] | [
-1
] | [
"html",
"javascript",
"key",
"local_storage"
] | stackoverflow_0008419354_html_javascript_key_local_storage.txt |
Q:
I need help merging two rows based on certain string character, the string is complaint
I am trying to calculate the fraction of the construction noise per zip code across NY city. The data is from NYC 311.
I am using dplyr and have grouped the data per zip.
However, I am finding it difficult to merge the rows for the complaint column: I have to merge the data based on whether the string "construction" appears anywhere in the value, whether at the front, in the middle, or at the end.
My solution, this is just the beginning
comp_types <- df %>% select(complaint_type,descriptor,incident_zip) %>%
group_by(incident_zip)
Can you help me merge the rows if a unique value in descriptor contains any construction value?
A:
Can you clarify what you mean by "merging"? I don't think you actually want to merge because you only have one dataframe. The term "merging" is used to describe the joining of two dataframes.
See ?base::merge:
Merge two data frames by common columns or row names, or do other versions of database join operations.
If I understand correctly, you want to look into the descriptor variable and see if it contains the string "construction" anywhere in the cell, so you can determine if the person's complaint was construction-related; same for "music". I don't believe you need to use complaint_type since complaint_type never contains the string "construction" or "music"; only descriptor does.
You can use a combination of ifelse and grepl to create a new variable that indicates whether the complaint was construction-related, music-related, or other.
library(tidyverse)
library(janitor)
url <- "https://data.cityofnewyork.us/api/views/p5f6-bkga/rows.csv"
df <- read.csv(url, nrows = 10000) %>%
clean_names() %>%
select(complaint_type, descriptor, incident_zip)
comp_types <- df %>%
select(complaint_type, descriptor, incident_zip) %>%
group_by(incident_zip)
head(comp_types)
#> # A tibble: 6 × 3
#> # Groups: incident_zip [6]
#> complaint_type descriptor incident_zip
#> <chr> <chr> <int>
#> 1 Noise - Residential Banging/Pounding 11364
#> 2 Noise - Residential Loud Music/Party 11222
#> 3 Noise - Residential Banging/Pounding 10033
#> 4 Noise - Residential Loud Music/Party 11208
#> 5 Noise - Residential Loud Music/Party 10037
#> 6 Noise Noise: Construction Before/After Hours (NM1) 11238
table(df$complaint_type)
#>
#> Noise Noise - Commercial Noise - Helicopter
#> 555 591 145
#> Noise - House of Worship Noise - Park Noise - Residential
#> 20 72 5675
#> Noise - Street/Sidewalk Noise - Vehicle
#> 2040 902
df <- df %>%
mutate(descriptor_misc = ifelse(grepl("Construction", descriptor), "Construction",
ifelse(grepl("Music", descriptor), "Music", "Other")))
df %>%
group_by(descriptor_misc) %>%
count()
#> # A tibble: 3 × 2
#> # Groups: descriptor_misc [3]
#> descriptor_misc n
#> <chr> <int>
#> 1 Construction 328
#> 2 Music 6354
#> 3 Other 3318
head(df)
#> complaint_type descriptor incident_zip
#> 1 Noise - Residential Banging/Pounding 11364
#> 2 Noise - Residential Loud Music/Party 11222
#> 3 Noise - Residential Banging/Pounding 10033
#> 4 Noise - Residential Loud Music/Party 11208
#> 5 Noise - Residential Loud Music/Party 10037
#> 6 Noise Noise: Construction Before/After Hours (NM1) 11238
#> descriptor_misc
#> 1 Other
#> 2 Music
#> 3 Other
#> 4 Music
#> 5 Music
#> 6 Construction
| I need help merging two rows based on certain string character, the string is complaint | I am trying to calculate the fraction of the construction noise per zip code across NY city. The data is from NYC 311.
I am using dplyr and have grouped the data per zip.
However, I am finding it difficult to merge the rows for the complaint column: I have to merge the data based on whether the string "construction" appears anywhere in the value, whether at the front, in the middle, or at the end.
My solution, this is just the beginning
comp_types <- df %>% select(complaint_type,descriptor,incident_zip) %>%
group_by(incident_zip)
Can you help me merge the rows if a unique value in descriptor contains any construction value?
| [
"Can you clarify what you mean by \"merging\"? I don't think you actually want to merge because you only have one dataframe. The term \"merging\" is used to describe the joining of two dataframes.\nSee ?base::merge:\n\nMerge two data frames by common columns or row names, or do other versions of database join operations.\n\nIf I understand correctly, you want to look into the descriptor variable and see if it contains the string \"construction\" anywhere in the cell, so you can determine if the person's complaint was construction-related; same for \"music\". I don't believe you need to use complaint_type since complaint_type never contains the string \"construction\" or \"music\"; only descriptor does.\nYou can use a combination of ifelse and grepl to create a new variable that indicates whether the complaint was construction-related, music-related, or other.\nlibrary(tidyverse)\nlibrary(janitor)\nurl <- \"https://data.cityofnewyork.us/api/views/p5f6-bkga/rows.csv\"\ndf <- read.csv(url, nrows = 10000) %>%\n clean_names() %>%\n select(complaint_type, descriptor, incident_zip)\n\ncomp_types <- df %>% \n select(complaint_type, descriptor, incident_zip) %>% \n group_by(incident_zip) \nhead(comp_types)\n#> # A tibble: 6 × 3\n#> # Groups: incident_zip [6]\n#> complaint_type descriptor incident_zip\n#> <chr> <chr> <int>\n#> 1 Noise - Residential Banging/Pounding 11364\n#> 2 Noise - Residential Loud Music/Party 11222\n#> 3 Noise - Residential Banging/Pounding 10033\n#> 4 Noise - Residential Loud Music/Party 11208\n#> 5 Noise - Residential Loud Music/Party 10037\n#> 6 Noise Noise: Construction Before/After Hours (NM1) 11238\n\ntable(df$complaint_type)\n#> \n#> Noise Noise - Commercial Noise - Helicopter \n#> 555 591 145 \n#> Noise - House of Worship Noise - Park Noise - Residential \n#> 20 72 5675 \n#> Noise - Street/Sidewalk Noise - Vehicle \n#> 2040 902\n\ndf <- df %>%\n mutate(descriptor_misc = ifelse(grepl(\"Construction\", descriptor), \"Construction\", \n ifelse(grepl(\"Music\", descriptor), \"Music\", \"Other\")))\n\ndf %>%\n group_by(descriptor_misc) %>%\n count()\n#> # A tibble: 3 × 2\n#> # Groups: descriptor_misc [3]\n#> descriptor_misc n\n#> <chr> <int>\n#> 1 Construction 328\n#> 2 Music 6354\n#> 3 Other 3318\n\nhead(df)\n#> complaint_type descriptor incident_zip\n#> 1 Noise - Residential Banging/Pounding 11364\n#> 2 Noise - Residential Loud Music/Party 11222\n#> 3 Noise - Residential Banging/Pounding 10033\n#> 4 Noise - Residential Loud Music/Party 11208\n#> 5 Noise - Residential Loud Music/Party 10037\n#> 6 Noise Noise: Construction Before/After Hours (NM1) 11238\n#> descriptor_misc\n#> 1 Other\n#> 2 Music\n#> 3 Other\n#> 4 Music\n#> 5 Music\n#> 6 Construction\n\n"
] | [
0
] | [] | [] | [
"character",
"dplyr",
"merge",
"r",
"string"
] | stackoverflow_0074650740_character_dplyr_merge_r_string.txt |
Q:
Firebase Cloud Messaging: "Topic Quota Exceeded"
I have a webapp and a Windows Service which communicate using Firebase Cloud Messaging. The webapp subscribes to a couple of Topics to receive messages, and Windows Service App sends messages to one of these Topics. In some cases it can be several messages per seconds, and it gives me this error:
FirebaseAdmin.Messaging.FirebaseMessagingException: Topic quota exceeded
I don't quite get it. Is there a limit to messages that can be sent to a specific topic, or what is the meaning?
So far I have only found info about topic names and subscription limits, but I couldn't actually find anything about a "topic quota", except maybe this page of the docs (https://firebase.google.com/docs/cloud-messaging/concept-options#fanout_throttling), although I am not sure it refers to the same thing, or whether and how it can be changed. In the Firebase Console I can't find anything either. Has anybody got an idea?
A:
"Topic quota exceeded" error is related to the number of messages that can be sent to a topic in a given time period. The exact limit and time period may vary depending on your Firebase plan.
The default FCM message rate limits are up to 100 messages per second (up to 10 million messages per day) for each project.
These limits may be increased by contacting Firebase support.
To avoid hitting these limits, you can try implementing a rate-limiting mechanism in your Windows Service app. This could involve tracking the number of messages sent to a topic and using a delay or throttling mechanism to ensure that the message rate does not exceed the limit.
Also, you can try using Firebase Cloud Functions to send messages to topics. Cloud Functions allow you to run code in response to events triggered by Firebase features, including FCM messages. This could help you avoid hitting the message rate limits, as Cloud Functions scale automatically in response to increased traffic.
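As a rough sketch of the rate-limiting idea (this is not part of the original setup, and it uses the Node.js Admin SDK rather than the Windows Service), the snippet below throttles topic sends to a fixed pace; the topic name, the queue, and the 50 ms interval are all assumed placeholders:
import * as admin from "firebase-admin";

admin.initializeApp();

// Hypothetical queue of data payloads waiting to be fanned out to a topic
const pending: Array<Record<string, string>> = [];

// Assumed pacing: at most one message every 50 ms (~20 messages/second)
const INTERVAL_MS = 50;

async function drainQueue(topic: string): Promise<void> {
  while (pending.length > 0) {
    const data = pending.shift()!;
    // Send one message to the topic, then wait before sending the next one
    await admin.messaging().send({ topic, data });
    await new Promise((resolve) => setTimeout(resolve, INTERVAL_MS));
  }
}

drainQueue("rankings").catch(console.error);

The same pattern (a queue plus a fixed delay between sends) could just as well be implemented inside the existing Windows Service.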
| Firebase Cloud Messaging: "Topic Quota Exceeded" | I have a webapp and a Windows Service which communicate using Firebase Cloud Messaging. The webapp subscribes to a couple of Topics to receive messages, and Windows Service App sends messages to one of these Topics. In some cases it can be several messages per seconds, and it gives me this error:
FirebaseAdmin.Messaging.FirebaseMessagingException: Topic quota exceeded
I don't quite get it. Is there a limit to messages that can be sent to a specific topic, or what is the meaning?
So far I have only found info about topic names and subscription limits, but I couldn't actually find anything about a "topic quota", except maybe this page of the docs (https://firebase.google.com/docs/cloud-messaging/concept-options#fanout_throttling), although I am not sure it refers to the same thing, or whether and how it can be changed. In the Firebase Console I can't find anything either. Has anybody got an idea?
| [
"\"Topic quota exceeded\" error is related to the number of messages that can be sent to a topic in a given time period. The exact limit and time period may vary depending on your Firebase plan.\nThe default FCM message rate limits are up to 100 messages per second (up to 10 million messages per day) for each project.\nThese limits may be increased by contacting Firebase support.\nTo avoid hitting these limits, you can try implementing a rate-limiting mechanism in your Windows Service app. This could involve tracking the number of messages sent to a topic and using a delay or throttling mechanism to ensure that the message rate does not exceed the limit.\nAlso, you can try using Firebase Cloud Functions to send messages to topics. Cloud Functions allow you to run code in response to events triggered by Firebase features, including FCM messages. This could help you avoid hitting the message rate limits, as Cloud Functions scale automatically in response to increased traffic.\n"
] | [
0
] | [] | [] | [
"firebase",
"firebase_cloud_messaging"
] | stackoverflow_0073853231_firebase_firebase_cloud_messaging.txt |
Q:
Add toolbar button icon matplotlib
I want to add an icon to a custom button in a matplotlib figure toolbar. How can I do that? So far, I have the following code:
import matplotlib
matplotlib.rcParams["toolbar"] = "toolmanager"
import matplotlib.pyplot as plt
from matplotlib.backend_tools import ToolToggleBase
class NewTool(ToolToggleBase):
...[tool code]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, 2, 3], label="legend")
ax.legend()
fig.canvas.manager.toolmanager.add_tool("newtool", NewTool)
fig.canvas.manager.toolbar.add_tool(toolmanager.get_tool("newtool"), "toolgroup")
fig.show()
For now, the only thing it does is add a new button (which does what I want), but the icon is only the tool's name, i.e. "newtool". How can I change this to a custom icon like a PNG image?
A:
The tool can have an attribute image, which denotes the path to a png image.
import matplotlib
matplotlib.rcParams["toolbar"] = "toolmanager"
import matplotlib.pyplot as plt
from matplotlib.backend_tools import ToolBase
class NewTool(ToolBase):
image = r"C:\path\to\hiker.png"
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, 2, 3], label="legend")
ax.legend()
tm = fig.canvas.manager.toolmanager
tm.add_tool("newtool", NewTool)
fig.canvas.manager.toolbar.add_tool(tm.get_tool("newtool"), "toolgroup")
plt.show()
A:
I tried this solution, there is also a similar solution in the matplotlib docs, but I cannot reproduce it. I get the following error: tm.add_tool("newtool", NewTool) AttributeError: 'NoneType' object has no attribute 'add_tool'. Seems weird that plt.figure() does not contain a canvas. Any ideas? –
bad_locality
Jun 25, 2020 at 12:12
I do not know if it helps, but I had the same issue at first, and then I realized that I had not activated the interactive matplotlib option (%matplotlib with Python - Spyder). Therefore no toolbar is associated, since it only creates a static figure.
| Add toolbar button icon matplotlib | I want to add an icon to a custom button in a matplotlib figure toolbar. How can I do that? So far, I have the following code:
import matplotlib
matplotlib.rcParams["toolbar"] = "toolmanager"
import matplotlib.pyplot as plt
from matplotlib.backend_tools import ToolToggleBase
class NewTool(ToolToggleBase):
...[tool code]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, 2, 3], label="legend")
ax.legend()
fig.canvas.manager.toolmanager.add_tool("newtool", NewTool)
fig.canvas.manager.toolbar.add_tool(toolmanager.get_tool("newtool"), "toolgroup")
fig.show()
For now, the only thing it does is add a new button (which does what I want), but the icon is only the tool's name, i.e. "newtool". How can I change this to a custom icon like a PNG image?
| [
"The tool can have an attribute image, which denotes the path to a png image.\nimport matplotlib\nmatplotlib.rcParams[\"toolbar\"] = \"toolmanager\"\nimport matplotlib.pyplot as plt\nfrom matplotlib.backend_tools import ToolBase\n\nclass NewTool(ToolBase):\n image = r\"C:\\path\\to\\hiker.png\"\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot([1, 2, 3], label=\"legend\")\nax.legend()\ntm = fig.canvas.manager.toolmanager\ntm.add_tool(\"newtool\", NewTool)\nfig.canvas.manager.toolbar.add_tool(tm.get_tool(\"newtool\"), \"toolgroup\")\nplt.show()\n\n\n",
"\nI tried this solution, there is also a similar solution on matplotlib docs, but I >cannot reproduce it. I get the following error: tm.add_tool(\"newtool\", NewTool) >AttributeError: 'NoneType' object has no attribute 'add_tool' Seems weird that >plt.figure() does not contain canvas. Any ideas? –\nbad_locality\nJun 25, 2020 at 12:12\n\nI do not know if it's help, but I had the same issues at first and then i realized that I had not activated the interactive matplotlib option (%matplotlib with Python - spyder). Therefore no toolbar are associated since it only create a static figure.\n"
] | [
7,
0
] | [] | [] | [
"matplotlib",
"python"
] | stackoverflow_0052971285_matplotlib_python.txt |
Q:
PowerShell Add-Type : Cannot add type. already exist
I'm using a PowerShell script to run C# code directly in the script. I've run into a particular error a few times. If I make any changes to the C# code in the PowerShell ISE and try to run it again, I get the following error.
Add-Type : Cannot add type. The type name 'AlertsOnOff10.onOff' already exists.
At C:\Users\testUser\Desktop\test.ps1:80 char:1
+ Add-Type -TypeDefinition $Source -ReferencedAssemblies $Assem
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (AlertsOnOff10.onOff:String) [Add-Type], Exception
+ FullyQualifiedErrorId : TYPE_ALREADY_EXISTS,Microsoft.PowerShell.Commands.AddTypeCommand
The way I have been resolving this error is by changing the namespace and the command that calls the C# method, [AlertsOnOff10.onOff]::Main("off"). Is there a way I can prevent this error from happening without having to change the namespace and method call?
A:
For those who want to avoid the error or avoid loading the type if it's already been loaded use the following check:
if ("TrustAllCertsPolicy" -as [type]) {} else {
Add-Type "using System.Net;using System.Security.Cryptography.X509Certificates;public class TrustAllCertsPolicy : ICertificatePolicy {public bool CheckValidationResult(ServicePoint srvPoint, X509Certificate certificate, WebRequest request, int certificateProblem) {return true;}}"
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
}
I post this because you get the error OP posted if you make even superficial (e.g. formatting) changes to the C# code.
A:
To my knowledge there is no way to remove a type from a PowerShell session once it has been added.
The (annoying) workaround I would suggest is to write your code in one ISE session, and execute it in a completely different session (separate console window or separate ISE if you want to be able to debug).
This only matters if you're changing $Source though (actively developing the type definition). If that's not the part that's changing, then ignore the errors, or if it's a terminating error use -ErrorAction to change it.
A:
You can execute it as a job:
$cmd = {
$code = @'
using System;
namespace MyCode
{
public class Helper
{
public static string FormatText(string message)
{
return "Version 1: " + message;
}
}
}
'@
Add-Type -TypeDefinition $code -PassThru | Out-Null
Write-Output $( [MyCode.Helper]::FormatText("It Works!") )
}
$j = Start-Job -ScriptBlock $cmd
do
{
Receive-Job -Job $j
} while ( $j.State -eq "Running" )
A:
Adam Furmanek's blog has the simplest and best work around. This goes something like this below. If you want to see how to pass in parameters for that, you can see that https://samtran.me/2020/02/09/execute-c-code-with-parameters-using-powershell/
$id = get-random
$code = @"
using System;
namespace HelloWorld
{
public class Program$id
{
public static void Main(){
Console.WriteLine("Hello world again!");
}
}
}
"@
Add-Type -TypeDefinition $code -Language CSharp
Invoke-Expression "[HelloWorld.Program$id]::Main()"
A:
The simple solution is to close PowerShell and reopen it. Types you've added with Add-Type will be removed on closing; then run the code again.
A:
Instead of
./YourScript.ps1
use this command
powershell.exe -ExecutionPolicy ByPass -Command "./YourScript.ps1"
| PowerShell Add-Type : Cannot add type. already exist | I'm using a PowerShell script to run C# code directly in the script. I've run into a particular error a few times. If I make any changes to the C# code in the PowerShell ISE and try to run it again, I get the following error.
Add-Type : Cannot add type. The type name 'AlertsOnOff10.onOff' already exists.
At C:\Users\testUser\Desktop\test.ps1:80 char:1
+ Add-Type -TypeDefinition $Source -ReferencedAssemblies $Assem
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (AlertsOnOff10.onOff:String) [Add-Type], Exception
+ FullyQualifiedErrorId : TYPE_ALREADY_EXISTS,Microsoft.PowerShell.Commands.AddTypeCommand
The way I have been resolving this error is by changing the namespace and the command that calls the C# method, [AlertsOnOff10.onOff]::Main("off"). Is there a way I can prevent this error from happening without having to change the namespace and method call?
| [
"For those who want to avoid the error or avoid loading the type if it's already been loaded use the following check:\nif (\"TrustAllCertsPolicy\" -as [type]) {} else {\n Add-Type \"using System.Net;using System.Security.Cryptography.X509Certificates;public class TrustAllCertsPolicy : ICertificatePolicy {public bool CheckValidationResult(ServicePoint srvPoint, X509Certificate certificate, WebRequest request, int certificateProblem) {return true;}}\"\n [System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy\n}\n\nI post this because you get the error OP posted if you make even superficial (e.g. formatting) changes to the C# code.\n",
"To my knowledge there is no way to remove a type from a PowerShell session once it has been added.\nThe (annoying) workaround I would suggest is to write your code in one ISE session, and execute it in a completely different session (separate console window or separate ISE if you want to be able to debug).\nThis only matters if you're changing $Source though (actively developing the type definition). If that's not the part that's changing, then ignore the errors, or if it's a terminating error use -ErrorAction to change it.\n",
"You can execute it as a job:\n$cmd = { \n\n $code = @'\n using System;\n\n namespace MyCode\n {\n public class Helper\n {\n public static string FormatText(string message)\n {\n return \"Version 1: \" + message;\n }\n }\n }\n'@\n\n Add-Type -TypeDefinition $code -PassThru | Out-Null\n\n Write-Output $( [MyCode.Helper]::FormatText(\"It Works!\") )\n}\n\n$j = Start-Job -ScriptBlock $cmd\n\ndo \n{\n Receive-Job -Job $j\n\n} while ( $j.State -eq \"Running\" )\n\n",
"Adam Furmanek's blog has the simplest and best work around. This goes something like this below. If you want to see how to pass in parameters for that, you can see that https://samtran.me/2020/02/09/execute-c-code-with-parameters-using-powershell/\n$id = get-random\n$code = @\"\nusing System;\nnamespace HelloWorld\n{\n public class Program$id\n {\n public static void Main(){\n Console.WriteLine(\"Hello world again!\");\n }\n }\n}\n\"@\n\nAdd-Type -TypeDefinition $code -Language CSharp \nInvoke-Expression \"[HelloWorld.Program$id]::Main()\"\n\n",
"The simple solution is close Powershell and reopen it. Types you've added with Add-Type will be removed on closing, then run code again.\n",
"Instead of\n./YourScript.ps1\n\nuse this command\npowershell.exe -ExecutionPolicy ByPass -Command \"./YourScript.ps1\"\n\n"
] | [
17,
11,
4,
3,
0,
0
] | [] | [] | [
"c#",
"powershell",
"runtime_error"
] | stackoverflow_0025730978_c#_powershell_runtime_error.txt |
Q:
Best way to not wait for a task to be completed
Let's say we have an entity Team with fields Id, Name, Points and CompetitionId.
Based on this entity, I have a list saved in memory, with aggregate data for each team.
When I add some results (some rows in the Teams table), I also want to update this list, but without waiting for the result.
public async Task AddResults(List<Team> teams) {
await context.AddRange(teams);
await inMemoryService.SetRanking();
}
Inside the SetRanking method I get the team rows from the context and build the aggregate data. But I don't want to wait for that to finish, because it is a long process (it takes ~10 minutes and will keep increasing). For that, I tried two methods:
1: to not use await keyword:
_ = inMemoryService.SetRanking(); this works, but only because I won't wait for the task to be completed. BUT the new aggregated list in memory will be created on another thread (I think), and when I try to get the data, I'll receive the old one.
2: using ConfigureAwait with false value:
await inMemoryService.SetRanking().ConfigureAwait(false) here, the request is still locked until this task is completed.
How can I solve this? thx
| Best way to not wait for a task to be completed | Let's say we have an entity Team with fields Id, Name, Points and CompetitionId.
Based on this entity, I have a list saved in memory, with aggregate data for each team.
When I add some results (some rows in the Teams table), I also want to update this list, but without waiting for the result.
public async Task AddResults(List<Team> teams) {
await context.AddRange(teams);
await inMemoryService.SetRanking();
}
Inside the SetRanking method I get the team rows from the context and build the aggregate data. But I don't want to wait for that to finish, because it is a long process (it takes ~10 minutes and will keep increasing). For that, I tried two methods:
1: to not use await keyword:
_ = inMemoryService.SetRanking(); this works, but only because I won't wait for the task to be completed. BUT the new aggregated list in memory will be created on another thread (I think), and when I try to get the data, I'll receive the old one.
2: using ConfigureAwait with false value:
await inMemoryService.SetRanking().ConfigureAwait(false) here, the request is still locked until this task is completed.
How can I solve this? thx
| [] | [] | [
"One solution could be to run the SetRanking method in a separate background thread. This way, the main thread will not have to wait for it to complete, and you can continue processing other tasks.\nTo do this, you can use the Task.Run method, which will run the specified method in a separate thread and return a Task object that you can use to monitor the progress of the operation. For example:\npublic async Task AddResults(List<Team> teams) {\n await context.AddRange(teams);\n\n // Start the SetRanking method in a separate background thread\n var task = Task.Run(() => inMemoryService.SetRanking());\n\n // You can continue processing other tasks here, while SetRanking runs in the background\n // ...\n\n // If you want to wait for SetRanking to complete, you can use the await keyword\n await task;\n}\n\nAnother option could be to use the Task.Factory.StartNew method, which provides more options for configuring the background thread. For example, you can specify the TaskScheduler to use, or set the CancellationToken that can be used to cancel the operation.\npublic async Task AddResults(List<Team> teams) {\n await context.AddRange(teams);\n\n // Start the SetRanking method in a separate background thread\n var task = Task.Factory.StartNew(() => inMemoryService.SetRanking(), CancellationToken.None, TaskCreationOptions.None, TaskScheduler.Default);\n\n // You can continue processing other tasks here, while SetRanking runs in the background\n // ...\n\n // If you want to wait for SetRanking to complete, you can use the await keyword\n await task;\n}\n\nYou can also use the Task.WhenAll method to run multiple tasks in parallel and wait for all of them to complete. This can be useful if you have multiple tasks that you want to run in parallel and wait for all of them to complete before continuing. For example:\npublic async Task AddResults(List<Team> teams) {\n await context.AddRange(teams);\n\n // Start multiple tasks in parallel\n var task1 = Task.Run(() => inMemoryService.SetRanking());\n var task2 = Task.Run(() => otherService.DoSomethingElse());\n var task3 = Task.Run(() => yetAnotherService.DoSomethingElse());\n\n // Wait for all tasks to complete\n await Task.WhenAll(task1, task2, task3);\n}\n\nOne option is to use the Task.ContinueWith method to schedule a continuation for the SetRanking task. This continuation will be executed when the SetRanking task completes, and it can do any additional processing that you need. Here is an example:\npublic async Task AddResults(List<Team> teams) {\n await context.AddRange(teams);\n\n // Execute the SetRanking method in a separate thread\n var setRankingTask = Task.Run(() => inMemoryService.SetRanking());\n\n // Schedule a continuation that will be executed when the SetRanking task completes\n setRankingTask.ContinueWith(t => {\n // Perform any additional processing here\n });\n\n // Move on from the AddResults method without waiting for the SetRanking task to complete\n}\n\nIn this example, the AddResults method will move on without waiting for the SetRanking task to complete. The continuation will be executed when the SetRanking task completes, allowing you to perform any additional processing that you need. However, keep in mind that the continuation will be executed in a separate thread, so you may need to handle any thread-safety issues in your code.\nI hope this helps\n"
] | [
-1
] | [
".net",
"async_await",
"asynchronous",
"c#"
] | stackoverflow_0074670857_.net_async_await_asynchronous_c#.txt |
Q:
MySqlConnection.Open() System.InvalidCastException: Object cannot be cast from DBNull to other types
I have simple connectionstring to MySql (MariaDB 5.5.5-10.11.0) written in c#:
MySqlConnection Database = new MySqlConnection("Server=127.0.0.1; Port=3306; Database=test; Uid=user; Pwd=MyPassword; Ssl Mode=Required; convert zero datetime=True;");
Everything works fine on two computers (Windows 10 and Windows 11). But when I try to launch this app on Windows Server 2022 I get this error:
System.InvalidCastException: Object cannot be cast from DBNull to other types.
at System.DBNull.System.IConvertible.ToInt32(IFormatProvider provider)
at System.Convert.ToInt32(Object value, IFormatProvider provider)
at MySql.Data.MySqlClient.Driver.LoadCharacterSets(MySqlConnection connection)
at MySql.Data.MySqlClient.Driver.Configure(MySqlConnection connection)
at MySql.Data.MySqlClient.MySqlConnection.Open()
at MariaDB.Program.StartAPI()
Error is thrown on Database.Open();
MariaDB is installed and running, SSL is working, the user's permissions are granted, and the port is correct. Any ideas, please?
Whole program:
using System;
using MySql.Data.MySqlClient;
namespace MariaDB
{
internal class Program
{
MySqlConnection Database = new MySqlConnection("Server=127.0.0.1; Port=3306; Database=test; Uid=user; Pwd=MyPassword; Ssl Mode=Required; convert zero datetime=True;");
static void Main(string[] args)
{
Program p = new Program();
p.OpenDB();
}
private void OpenDB()
{
Database.Open();
Console.WriteLine("Ok");
Console.ReadLine();
}
}
}
A:
This is caused by MariaDB 10.10.1 making the ID field nullable in information_schema.COLLATIONS and adding a number of collations that have NULL for an ID.
https://jira.mariadb.org/browse/MDEV-27009
One possible workaround is to use MariaDB 10.9 or older.
| MySqlConnection.Open() System.InvalidCastException: Object cannot be cast from DBNull to other types | I have simple connectionstring to MySql (MariaDB 5.5.5-10.11.0) written in c#:
MySqlConnection Database = new MySqlConnection("Server=127.0.0.1; Port=3306; Database=test; Uid=user; Pwd=MyPassword; Ssl Mode=Required; convert zero datetime=True;");
Everything works fine on two computers (Windows 10 and Windows 11). But when I try to launch this app on Windows Server 2022 I get this error:
System.InvalidCastException: Object cannot be cast from DBNull to other types.
at System.DBNull.System.IConvertible.ToInt32(IFormatProvider provider)
at System.Convert.ToInt32(Object value, IFormatProvider provider)
at MySql.Data.MySqlClient.Driver.LoadCharacterSets(MySqlConnection connection)
at MySql.Data.MySqlClient.Driver.Configure(MySqlConnection connection)
at MySql.Data.MySqlClient.MySqlConnection.Open()
at MariaDB.Program.StartAPI()
Error is thrown on Database.Open();
MariaDB is installed and running, SSL is working, the user's permissions are granted, and the port is correct. Any ideas, please?
Whole program:
using System;
using MySql.Data.MySqlClient;
namespace MariaDB
{
internal class Program
{
MySqlConnection Database = new MySqlConnection("Server=127.0.0.1; Port=3306; Database=test; Uid=user; Pwd=MyPassword; Ssl Mode=Required; convert zero datetime=True;");
static void Main(string[] args)
{
Program p = new Program();
p.OpenDB();
}
private void OpenDB()
{
Database.Open();
Console.WriteLine("Ok");
Console.ReadLine();
}
}
}
| [
"This is caused by MariaDB 10.10.1 making ID field Nullable in Information_Schema.Collations and adding a bunch of Collations that have null for an ID.\nhttps://jira.mariadb.org/browse/MDEV-27009\nOne possible workaround is to use MariaDB 10.9 or older.\n"
] | [
0
] | [] | [] | [
"c#",
"mariadb_connector"
] | stackoverflow_0074060289_c#_mariadb_connector.txt |
Q:
Dart: How to use the `AES-256-CBC` method in encrypt package?
My PHP server uses the encrypt as follows.
openssl_encrypt('data', 'AES-256-CBC', '1234567890123456', 0, '1234567890123456')
the result is adVh7c/vcyascTS0Z669IA==.
My dart server uses encrypt package as follows.
import 'package:encrypt/encrypt.dart' as encrypt;
Encrypter(AES(encryptKey, mode: AESMode.cbc)).encrypt('data', iv: '1234567890123456').base64
final encrypt.Key encryptKey = encrypt.Key.fromUtf8('1234567890123456');
final encrypt.IV encryptIvKey = encrypt.IV.fromUtf8('1234567890123456');
final encrypt.Encrypter encrypter = encrypt.Encrypter(encrypt.AES(encryptKey, mode: encrypt.AESMode.cbc));
print(encrypter.encrypt('data', iv: encryptIvKey).base64);
The result is KQjJ76efmVlgGKDsj6dCog==.
These result values are different.
I saw the cipher method of PHP. If I change the cipher method in the PHP server from
AES-256-CBC
to
aes-128-cbc // or aes-128-cbc-hmac-sha1, aes-128-cbc-hmac-sha256
The result will be KQjJ76efmVlgGKDsj6dCog==. (same as the result from the dart server)
But editing files in the PHP server is the last choice.
What I can do in the dart server to make the result the same as the result from the PHP server (AES-256-CBC method)?
How to use the AES-256-CBC method in encrypt package?
If I must edit files on the PHP server, what method should I use?
The aes-128-cbc, aes-128-cbc-hmac-sha1 and aes-128-cbc-hmac-sha256 methods give the same result. Or is there some better method available in the encrypt package, as shown in this image? Please suggest one.
A:
This is a summary of the comment on my post by @Topaco.
The aes-256-cbc cipher method requires a 32-byte key.
Use the key with a string length of 32 or use the padRight(32, '\x00') function.
example:
final encrypt.Key encryptKey = encrypt.Key.fromUtf8('1234567890123456'.padRight(32, '\x00'));
Regarding aes-128-cbc, aes-128-cbc-hmac-sha1 and aes-128-cbc-hmac-sha256: Apply aes-128-cbc(ref)
| Dart: How to use the `AES-256-CBC` method in encrypt package? | My PHP server uses the encrypt as follows.
openssl_encrypt('data', 'AES-256-CBC', '1234567890123456', 0, '1234567890123456')
the result is adVh7c/vcyascTS0Z669IA==.
My dart server uses encrypt package as follows.
import 'package:encrypt/encrypt.dart' as encrypt;
Encrypter(AES(encryptKey, mode: AESMode.cbc)).encrypt('data', iv: '1234567890123456').base64
final encrypt.Key encryptKey = encrypt.Key.fromUtf8('1234567890123456');
final encrypt.IV encryptIvKey = encrypt.IV.fromUtf8('1234567890123456');
final encrypt.Encrypter encrypter = encrypt.Encrypter(encrypt.AES(encryptKey, mode: encrypt.AESMode.cbc));
print(encrypter.encrypt('data', iv: encryptIvKey).base64);
The result is KQjJ76efmVlgGKDsj6dCog==.
These result values are different.
I saw the cipher method of PHP. If I change the cipher method in the PHP server from
AES-256-CBC
to
aes-128-cbc // or aes-128-cbc-hmac-sha1, aes-128-cbc-hmac-sha256
The result will be KQjJ76efmVlgGKDsj6dCog==. (same as the result from the dart server)
But editing files in the PHP server is the last choice.
What I can do in the dart server to make the result the same as the result from the PHP server (AES-256-CBC method)?
How to use the AES-256-CBC method in encrypt package?
If I must edit files on the PHP server, what method should I use?
The aes-128-cbc, aes-128-cbc-hmac-sha1 and aes-128-cbc-hmac-sha256 methods give the same result. Or is there some better method available in the encrypt package, as shown in this image? Please suggest one.
| [
"The summary from the comment in my post by @Topaco.\nThe aes-256-cbc cipher method requires a 32 bytes key.\nUse the key with a string length of 32 or use the padRight(32, '\\x00') function.\nexample:\nfinal encrypt.Key encryptKey = encrypt.Key.fromUtf8('1234567890123456'.padRight(32, '\\x00'));\n\nRegarding aes-128-cbc, aes-128-cbc-hmac-sha1 and aes-128-cbc-hmac-sha256: Apply aes-128-cbc(ref)\n"
] | [
0
] | [] | [] | [
"aes",
"dart",
"dart_server",
"encryption",
"php"
] | stackoverflow_0074506761_aes_dart_dart_server_encryption_php.txt |
Q:
Properly formed notification JSON
I using the following formatted json as my notification payload:
{
"to":"[TOKEN]",
"priority":"high",
"content-available":true,
"notification":{
"ACTION":"false",
"FIELD":"item1",
"TITLE":"Title Here",
"BODY":"This is the body"
}
}
The push works and is caught by the target. The payload is received as below (which is fine):
{
"data": {
"gcm.notification.TITLE": "Title Here",
"gcm.notification.ACTION": "false",
"gcm.notification.FIELD": "item1",
"gcm.notification.BODY": "This is the body"
},
"from": "[POINTER]",
"priority": "high",
"fcmMessageId": "07964eab-c1a5-46ac-a32d-c7c1a24fe28b"
}
BUT the notification display on my Android device is below:
I'm sure it's something easy, but I can't seem to find a good example of the proper format.
What am I missing?
A:
Did you complete the code from a codelab? Because "Background Message ..." looks like placeholders.
Whether or not it's from a codelab, please check the code in the service worker that listens to notifications in the background. Are you displaying notification data (title and body) that was obtained from the JSON payload or are you displaying some default placeholder text?
Please share the code of the service worker and other related code. If updating the code works as I proposed, please edit this answer and add the working code. If, however, your code is correctly displaying notification data from Firebase Cloud Messaging, please still say so, share the code, and let's solve it.
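As an illustration of what such a background handler can look like (this is a guess at the missing service worker, not the asker's actual code), here is a minimal sketch using the modular Firebase JS SDK; the firebaseConfig placeholder and the assumption that the custom keys arrive under payload.data exactly as shown in the received payload above are both assumptions:
// firebase-messaging-sw.ts (compiled into the site's service worker file)
import { initializeApp } from "firebase/app";
import { getMessaging, onBackgroundMessage } from "firebase/messaging/sw";

declare const self: ServiceWorkerGlobalScope;

const app = initializeApp({ /* your firebaseConfig here */ });
const messaging = getMessaging(app);

onBackgroundMessage(messaging, (payload) => {
  // Read the custom keys from the data payload instead of hard-coded placeholder text
  const data = payload.data ?? {};
  const title = data["gcm.notification.TITLE"] ?? "Notification";
  const body = data["gcm.notification.BODY"] ?? "";
  self.registration.showNotification(title, { body });
});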
| Properly formed notification JSON | I am using the following formatted JSON as my notification payload:
{
"to":"[TOKEN]",
"priority":"high",
"content-available":true,
"notification":{
"ACTION":"false",
"FIELD":"item1",
"TITLE":"Title Here",
"BODY":"This is the body"
}
}
The push works and is caught by the target. The payload is received as below (which is fine):
{
"data": {
"gcm.notification.TITLE": "Title Here",
"gcm.notification.ACTION": "false",
"gcm.notification.FIELD": "item1",
"gcm.notification.BODY": "This is the body"
},
"from": "[POINTER]",
"priority": "high",
"fcmMessageId": "07964eab-c1a5-46ac-a32d-c7c1a24fe28b"
}
BUT the notification display on my Android device is below:
I'm sure it's something easy, but I can't seem to find a good example of the proper format.
What am I missing?
| [
"Did you complete the code from a codelab? Because \"Background Message ...\" looks like placeholders.\nWhether or not it's from a codelab, please check the code in the service worker that listens to notifications in the background. Are you displaying notification data (title and body) that was obtained from the JSON payload or are you displaying some default placeholder text?\nPlease share the code of the service worker and other related code. If updating the code worked as I proposed, please edit this answer and add the working code. If your code however is correctly displaying notification data from Firebase Cloud Messaging, please indicate still, share the code and let's solve it.\n"
] | [
0
] | [] | [] | [
"firebase_cloud_messaging"
] | stackoverflow_0074671031_firebase_cloud_messaging.txt |
Q:
How to apply contour to z matrix which has the same dimension as x- and y matrix
My dataset contains x, y coordinates and an energy surface z. I need to turn those into this image. The problem is that z needs to have the dimensions of (x, y), but all three of them are 1D numpy arrays with length = 10201 (also some values in z are inf). I tried to turn z into a meshgrid with
Z, Z = np.meshgrid(z,z)
and then try
ax.contour(x, y, Z)
but the result is this. How should it do it?
I tried to turn z into a meshgrid, tried to remove all the rows which contain inf value in z, tried to turn all three of them into meshgrids
A:
To create a contour plot from your 1D arrays of x, y, and z coordinates, you can use NumPy's meshgrid function to create 2D grids from your 1D arrays, and then use the contour function from Matplotlib to create the contour plot.
First, you need to create 2D grids from your 1D arrays of x and y coordinates using NumPy's meshgrid function. You can do this by calling np.meshgrid(x, y), where x and y are your 1D arrays of x and y coordinates, respectively. This will return two 2D grids, one for the x coordinates and one for the y coordinates.
Next, you can use the contour function from Matplotlib to create the contour plot. You can do this by calling ax.contour(X, Y, Z), where ax is the axes object that you want to draw the contour plot on, X and Y are the 2D grids of x and y coordinates that you created with meshgrid, and Z is your array of z values reshaped to the same 2D shape as X and Y (your 10201 values correspond to a 101 x 101 grid). This will create a contour plot with x and y coordinates on the x- and y-axes, respectively, and the z values as the contour levels.
One thing to keep in mind is that inf values in your z array will break the contouring. You should not simply delete them, because that would destroy the grid shape; instead, replace them with np.nan (you can locate them with NumPy's isinf function), since contour just leaves NaN regions blank.
Here is an example of how you can use these steps to create a contour plot from your 1D arrays of x, y, and z coordinates:
import numpy as np
import matplotlib.pyplot as plt
# 1D arrays of x, y, and z coordinates
x = ...
y = ...
z = ...
# Create 2D grids of x and y coordinates
X, Y = np.meshgrid(x, y)
# Replace inf values with NaN and reshape z to the grid shape
# (assumes the z values are ordered row by row to match the meshgrid)
Z = np.where(np.isinf(z), np.nan, z).reshape(X.shape)

# Create figure and axes object
fig, ax = plt.subplots()

# Create contour plot
ax.contour(X, Y, Z)
# Add x and y labels
ax.set_xlabel('x')
ax.set_ylabel('y')
# Show the plot
plt.show()
I hope this helps!
| How to apply contour to z matrix which has the same dimension as x- and y matrix | My dataset contains x, y coordinates and an energy surface z. I need to turn those into this image. The problem is that z needs to have the dimensions of (x, y), but all three of them are 1D numpy arrays with length = 10201 (also some values in z are inf). I tried to turn z into a meshgrid with
Z, Z = np.meshgrid(z,z)
and then try
ax.contour(x, y, Z)
but the result is this. How should it do it?
I tried to turn z into a meshgrid, tried to remove all the rows which contain inf value in z, tried to turn all three of them into meshgrids
| [
"To create a contour plot from your 1D arrays of x, y, and z coordinates, you can use NumPy's meshgrid function to create 2D grids from your 1D arrays, and then use the contour function from Matplotlib to create the contour plot.\nFirst, you need to create 2D grids from your 1D arrays of x and y coordinates using NumPy's meshgrid function. You can do this by calling np.meshgrid(x, y), where x and y are your 1D arrays of x and y coordinates, respectively. This will return two 2D grids, one for the x coordinates and one for the y coordinates.\nNext, you can use the contour function from Matplotlib to create the contour plot. You can do this by calling ax.contour(x, y, z), where ax is the axes object that you want to draw the contour plot on, x and y are the 2D grids of x and y coordinates that you created using meshgrid, and z is your 1D array of z coordinates. This will create a contour plot with x and y coordinates on the x- and y-axes, respectively, and z values as the contour levels.\nOne thing to keep in mind is that if you have any inf values in your z array, they will cause the contour function to throw an error. In this case, you will need to remove the inf values from your z array before creating the contour plot. You can do this by using NumPy's isinf function to find the indices of the inf values in your z array, and then use these indices to select only the non-inf values from your z array.\nHere is an example of how you can use these steps to create a contour plot from your 1D arrays of x, y, and z coordinates:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# 1D arrays of x, y, and z coordinates\nx = ...\ny = ...\nz = ...\n\n# Create 2D grids of x and y coordinates\nX, Y = np.meshgrid(x, y)\n\n# Remove inf values from z array\nz_noninf = z[~np.isinf(z)]\n\n# Create figure and axes object\nfig, ax = plt.subplots()\n\n# Create contour plot\nax.contour(X, Y, z_noninf)\n\n# Add x and y labels\nax.set_xlabel('x')\nax.set_ylabel('y')\n\n# Show the plot\nplt.show()\n\nI hope this helps!\n"
] | [
0
] | [] | [] | [
"contour",
"matplotlib",
"python"
] | stackoverflow_0074671161_contour_matplotlib_python.txt |
Q:
Angular on click open image in full screen
I created a gallery component.
I want to open the image in full screen when it is clicked.
I think I just need to create 3 functions for the 3 images in gallery.component.ts.
Here is my code in gallery.component.html:
<div class="col-lg-4 col-md-4 col-xs-4 thumb mb-4">
<a class="thumbnail" href="#">
<img class="img-responsive rounded" src="../../assets/images/small/01.Gg1ZD.jpg" alt="">
</a>
</div>
<div class="col-lg-4 col-md-4 col-xs-4 thumb mb-4">
<a class="thumbnail" href="#">
<img class="img-responsive rounded" src="../../assets/images/small/02.wWNQN.jpg" alt="">
</a>
</div>
<div class="col-lg-4 col-md-4 col-xs-4 thumb mb-4">
<a class="thumbnail" href="#">
<img class="img-responsive rounded" src="../../assets/images/small/03.ZFJbM.jpg" alt="">
</a>
</div>
Is it possible to do it without a library?
A:
To open images in full screen, you can use a library like ng-image-fullscreen-view.
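If you want to avoid a library altogether (which the question asks about), one option is the browser's native Fullscreen API. The sketch below is only an assumed example of what gallery.component.ts could look like, with a single method bound to each image via (click)="openFullscreen($event)":
import { Component } from "@angular/core";

@Component({
  selector: "app-gallery",
  templateUrl: "./gallery.component.html",
})
export class GalleryComponent {
  // One method can serve all three images: the clicked element is passed in
  openFullscreen(event: MouseEvent): void {
    event.preventDefault(); // keep the surrounding <a href="#"> from navigating
    const img = event.target as HTMLElement;
    if (img.requestFullscreen) {
      img.requestFullscreen().catch((err) => console.error("Fullscreen failed:", err));
    }
  }

  // Pressing Esc also exits; this is just an explicit way out
  closeFullscreen(): void {
    if (document.fullscreenElement) {
      document.exitFullscreen();
    }
  }
}

In the template, each thumbnail would then use something like <img class="img-responsive rounded" src="..." (click)="openFullscreen($event)">.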
| Angular on click open image in full screen | I created a gallery component.
I want to open the image in full screen when it is clicked.
I think I just need to create 3 functions for the 3 images in gallery.component.ts.
Here is my code in gallery.component.html:
<div class="col-lg-4 col-md-4 col-xs-4 thumb mb-4">
<a class="thumbnail" href="#">
<img class="img-responsive rounded" src="../../assets/images/small/01.Gg1ZD.jpg" alt="">
</a>
</div>
<div class="col-lg-4 col-md-4 col-xs-4 thumb mb-4">
<a class="thumbnail" href="#">
<img class="img-responsive rounded" src="../../assets/images/small/02.wWNQN.jpg" alt="">
</a>
</div>
<div class="col-lg-4 col-md-4 col-xs-4 thumb mb-4">
<a class="thumbnail" href="#">
<img class="img-responsive rounded" src="../../assets/images/small/03.ZFJbM.jpg" alt="">
</a>
</div>
Is it possible to do it without a library?
| [
"To open images in full-screen, you can use library like ng-image-fullscreen-view\n"
] | [
0
] | [] | [] | [
"angular",
"function",
"html",
"image",
"typescript"
] | stackoverflow_0074668963_angular_function_html_image_typescript.txt |
Q:
I use gets to get the age of a person (20), then puts age + 40, and prints 4060
This is a really basic question, I know, but I cannot find out why I am getting this error. Below is the code:
puts "How old are you?"
age = gets.chomp.to_i
puts "In 10 years you will be:"
puts age + 10
puts "In 20 years you will be:"
puts age + 20
puts "In 30 years you will be:"
puts age + 30
puts "In 40 years you will be:"
puts age + 40
Here is what the code looks like when I run it in irb:
ex1.rb(main):005:0> puts "How old are you?"
How old are you?
=> nil
ex1.rb(main):006:0> age = gets.chomp.to_i
20
=> 20
ex1.rb(main):007:0> puts "In 10 years you will be:"
In 10 years you will be:
=> nil
ex1.rb(main):008:0> puts age + 10
30
=> nil
ex1.rb(main):009:0> puts "In 20 years you will be:"
In 20 years you will be:
=> nil
ex1.rb(main):010:0> puts age + 20
40
=> nil
ex1.rb(main):011:0> puts "In 30 years you will be:"
In 30 years you will be:
=> nil
ex1.rb(main):012:0> puts age + 30
50
=> nil
ex1.rb(main):013:0> puts "In 40 years you will be:"
In 40 years you will be:
=> nil
ex1.rb(main):014:0> puts age + 4060
=> nil
I cannot figure out why I am getting that last line of code. As you can see, the rest of the program runs correctly, and only adds the two integers together, however, that last line of code seems to add the two integers, then add that product to the number forty as a string. I attempted to google this answer, but haven't found much, could there be something else wrong here?
I have tried passing through various integers, and it always shows 40 with the number plus forty alongside it. For example, if I pass 30, it will show 4070; if I pass 50 it will show 4090. If I pass 30, I would expect to get just 70, or if I pass 50, I would expect only 90.
A:
NB: There is probably a bug in your code you haven't shown us.
Your best bet is to be explicit with your conversions, and raise an exception if a value can't be successfully converted. For example:
def future_age current_age, years_from_now
Integer(current_age) + Integer(years_from_now)
end
age = gets.chomp
[10, 20, 30, 40].each do |years|
puts "In #{years} years you will be #{future_age age, years}."
end
The benefit of this approach is that an exception will be raised if either argument to #future_age can't be coerced to an Integer. That will at least point you to the source of your bug, or give you the correct answer. Either way, it will resolve your issue.
| I use gets to get the age of a person (20), then puts age + 40, and prints 4060 | This is a really basic question, I know, but I cannot find out why I am getting this error. Below is the code:
puts "How old are you?"
age = gets.chomp.to_i
puts "In 10 years you will be:"
puts age + 10
puts "In 20 years you will be:"
puts age + 20
puts "In 30 years you will be:"
puts age + 30
puts "In 40 years you will be:"
puts age + 40
Here is what the code looks like when I run it in irb:
ex1.rb(main):005:0> puts "How old are you?"
How old are you?
=> nil
ex1.rb(main):006:0> age = gets.chomp.to_i
20
=> 20
ex1.rb(main):007:0> puts "In 10 years you will be:"
In 10 years you will be:
=> nil
ex1.rb(main):008:0> puts age + 10
30
=> nil
ex1.rb(main):009:0> puts "In 20 years you will be:"
In 20 years you will be:
=> nil
ex1.rb(main):010:0> puts age + 20
40
=> nil
ex1.rb(main):011:0> puts "In 30 years you will be:"
In 30 years you will be:
=> nil
ex1.rb(main):012:0> puts age + 30
50
=> nil
ex1.rb(main):013:0> puts "In 40 years you will be:"
In 40 years you will be:
=> nil
ex1.rb(main):014:0> puts age + 4060
=> nil
I cannot figure out why I am getting that last line of code. As you can see, the rest of the program runs correctly, and only adds the two integers together, however, that last line of code seems to add the two integers, then add that product to the number forty as a string. I attempted to google this answer, but haven't found much, could there be something else wrong here?
I have tried passing through various integers, and it always shows 40 with the number plus forty alongside it. For example, if I pass 30, it will show 4070; if I pass 50 it will show 4090. If I pass 30, I would expect to get just 70, or if I pass 50, I would expect only 90.
| [
"NB: There is probably a bug in your code you haven't shown us.\nYour best bet is to be explicit with your conversions, and raise an exception if a value can't be successfully converted. For example:\ndef future_age current_age, years_from_now\n Integer(current_age) + Integer(years_from_now)\nend\n\nage = gets.chomp\n[10, 20, 30, 40].each do |years|\n puts \"In #{years} years you will be #{future_age age, years}.\"\nend\n\nThe benefit of this approach is that an exception will be raised if either argument to #future_age can't be coerced to an Integer. That will at least point you to the source of your bug, or give you the correct answer. Either way, it will resolve your issue.\n"
] | [
0
] | [] | [] | [
"ruby"
] | stackoverflow_0074670405_ruby.txt |
Q:
How to access Firebase double nested object properties?
I am trying to get all users that have a certain listId in their contactLists object. My firebase realtime database structure is as follows:
user:
john:
contactLists:
-NIOsvb: true,
I have other users in the DB and I want to get all that have -NIOsvb in their contactLists object.
This is the approach I tried (listId is passed as a parameter):
const snapshot = await get(query(ref(db, "users"), orderByChild("contactLists"), equalTo(listId)))
I expected to get all the user objects that have this id in their contactLists. However, the value of snapshot is null. Any suggestions would be appreciated, as I don't have a lot of experience with Firebase functions.
A:
It looks like there are a few issues with the query you are using to get the users with a specific listId in their contactLists. Here are a few suggestions to help you fix the query:
The orderByChild method should be used with a child path, not with a whole object. In your case, you can use orderByChild("contactLists/-NIOsvb") to order the users by the -NIOsvb property in their contactLists objects (the Realtime Database uses / to address nested children).
The equalTo method should be used with a value, not with a whole object. In your case, you can use equalTo(true) to match users that have a -NIOsvb property with the value true in their contactLists objects.
When you use the orderByChild and equalTo methods with the modular (v9) SDK, you pass them as separate query constraints rather than chaining them. In your case, you can combine orderByChild("contactLists/-NIOsvb") and equalTo(true) to query for users that have a -NIOsvb property with the value true in their contactLists objects.
Here is an example of how you can use these methods to fix your query:
const snapshot = await get(
  query(
    ref(db, "users"),
    orderByChild("contactLists/-NIOsvb"),
    equalTo(true)
  )
);
This query will select all the children under the users node whose contactLists/-NIOsvb child has the value true. The query will return a DataSnapshot object, which you can iterate over (for example with snapshot.forEach) to access the matching users.
A:
To get all users that have a certain listId in their contactLists object, you can use the orderByChild() and equalTo() methods of the Firebase Realtime Database query object. Your code should look something like this:
const snapshot = await get(query(ref(db, "users"), orderByChild("contactLists"), equalTo(listId)))
On its own, however, this code will not return the users you want.
Note that the orderByChild() and equalTo() methods expect the name of the property to query as a string, and the value to match as a value. In your code, you are using orderByChild("contactLists"), which is trying to query a property called contactLists on each user object. However, this property is actually an object, not a property. To query the -NIOsvb property of the contactLists object, you should use the following code instead:
const snapshot = await get(query(ref(db, "users"), orderByChild("contactLists/-NIOsvb"), equalTo(true)))
This will query the -NIOsvb property of the contactLists object, and will only return users that have a value of true for this property.
It's also worth noting that equalTo() is meant to be combined with an ordering constraint such as orderByChild() rather than used on its own, and a bare path string cannot be passed to query() as a constraint. So the orderByChild("contactLists/-NIOsvb") plus equalTo(true) form shown above is the simplest correct way to express this query.
| How to access Firebase double nested object properties? | I am trying to get all users that have a certain listId in their contactLists object. My firebase realtime database structure is as follows:
user:
john:
contactLists:
-NIOsvb: true,
I have other users in the DB and I want to get all that have -NIOsvb in their contactLists object.
This is the approach I tried (listId is passed as a parameter):
const snapshot = await get(query(ref(db, "users"), orderByChild("contactLists"), equalTo(listId)))
I expected to get all the user objects that have this id in their contactLists. However, the value of snapshot is null. Any suggestions would be appreciated, as I don't have a lot of experience with Firebase functions.
| [
"It looks like there are a few issues with the query you are using to get the users with a specific listId in their contactLists. Here are a few suggestions to help you fix the query:\n\nThe orderByChild method should be used with a property name, not with a whole object. In your case, you can use orderByChild(\"contactLists.-NIOsvb\") to order the users by the -NIOsvb property in their contactLists objects.\nThe equalTo method should be used with a value, not with a whole object. In your case, you can use equalTo(true) to match users that have a -NIOsvb property with the value true in their contactLists objects.\nWhen you use the orderByChild and equalTo methods, you need to specify the property and value you want to query for in the correct order. In your case, you can use orderByChild(\"contactLists.-NIOsvb\").equalTo(true) to query for users that have a -NIOsvb property with the value true in their contactLists objects.\n\n\nHere is an example of how you can use these methods to fix your query:\n const snapshot = await get(\n query(\n ref(db, \"users\"),\n orderByChild(\"contactLists.-NIOsvb\").equalTo(true)\n )\n);\n\nThis query will select all the documents from the users collection where the contactLists.-NIOsvb field has the value true. The query will return a QuerySnapshot object, which you can iterate over to access the matching documents.\n",
"To get all users that have a certain listId in their contactLists object, you can use the orderByChild() and equalTo() methods of the Firebase Realtime Database query object. Your code should look something like this:\nconst snapshot = await get(query(ref(db, \"users\"), orderByChild(\"contactLists\"), equalTo(listId)))\n\nThis code will retrieve a snapshot of the data at the users node in the database, and will only include users that have the listId in their contactLists object.\nNote that the orderByChild() and equalTo() methods expect the name of the property to query as a string, and the value to match as a value. In your code, you are using orderByChild(\"contactLists\"), which is trying to query a property called contactLists on each user object. However, this property is actually an object, not a property. To query the -NIOsvb property of the contactLists object, you should use the following code instead:\nconst snapshot = await get(query(ref(db, \"users\"), orderByChild(\"contactLists/-NIOsvb\"), equalTo(true)))\n\nThis will query the -NIOsvb property of the contactLists object, and will only return users that have a value of true for this property.\nIt's also worth noting that, since you are querying the database for a specific value, you don't need to use the orderByChild() method. You can simply use the equalTo() method on its own, like this:\nconst snapshot = await get(query(ref(db, \"users\"), equalTo(true), \"contactLists/-NIOsvb\"))\n\nThis code will have the same effect as the previous example, but is simpler and more straightforward.\n"
] | [
0,
0
] | [] | [] | [
"firebase",
"firebase_realtime_database",
"javascript",
"reactjs"
] | stackoverflow_0074671123_firebase_firebase_realtime_database_javascript_reactjs.txt |
Q:
Exporting AEM experience fragments to Adobe Target automatically every time a related Content Fragment is updated
I have this unique requirement where each time a particular Content Fragment is updated in AEM, all the Experience Fragments referencing that particular Content Fragment need to be automatically exported to Adobe Target.
I am thinking about using a SQL2 query to retrieve the XFs referencing a particular CF and then incorporating this into a workflow process. I am also wondering if I can leverage the AEM OOTB workflow process called "Export to Target" for this.
I am not really sure how to call this "Export to Target" process on each Experience Fragment that we need to export to Target, or whether this is possible at all.
Wondering if anyone has ever come across this requirement and succeeded.
Highly appreciate any tips or suggestions in this regard. Many Thanks in advance.
A:
Whenever a Content Fragment is created or updated an OSGi event is triggered. All events are logged under http://localhost:4502/system/console/events. You could write an EventListener or EventHandler, get the path from the event, get the Resource and adapt it to com.adobe.cq.dam.cfm.ContentFragment. The topic for these events is "com/day/cq/dam" or in this constant.
From the adapted class or Resource you can get information about the model and check whether it is the model you want to process.
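A minimal sketch of such a handler (the class name and the /content/dam path filter are illustrative assumptions; DamEvent comes from com.day.cq.dam.api and the OSGi declarative service annotations are assumed):
import org.osgi.service.component.annotations.Component;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;
import com.day.cq.dam.api.DamEvent;

@Component(service = EventHandler.class,
           property = {EventConstants.EVENT_TOPIC + "=" + DamEvent.EVENT_TOPIC})
public class ContentFragmentChangeListener implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        DamEvent damEvent = DamEvent.fromEvent(event);
        // react only to changes below the folder holding the Content Fragments
        if (damEvent != null
                && damEvent.getAssetPath() != null
                && damEvent.getAssetPath().startsWith("/content/dam/my-project")) {
            // adapt the path to ContentFragment, check the model,
            // run the SQL2 reference query and start the Target export workflow
        }
    }
}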
To find all referencing Experience Fragments I would also create an Oak index and use a SQL2 query.
The query would be something like this:
select [jcr:path], [jcr:score], * from [nt:base] as a where contains(*, '"/content/dam/myReferencedModel"')
If you have all referencing XF's you can kick off any workflow via WorkflowService:
@Reference
private WorkflowService workflowService;
WorkflowSession wfSession = workflowService.getWorkflowSession(session);
WorkflowModel wfModel = wfSession.getModel("/var/workflow/models/mymodel");
WorkflowData wfData = wfSession.newWorkflowData("JCR_PATH", "/payload");
wfSession.startWorkflow(wfModel, wfData);
| Exporting AEM experience fragments to Adobe Target automatically every time a related Content Fragment is updated | I have this unique requirement where each time a particular Content Fragment is updated in AEM, all the Experience Fragments referencing that particular Content Fragment need to be automatically exported to Adobe Target.
Thinking about using SQL2 query to retrieve XFs referencing a particular CF and then incorporating this into a workflow process. Also, wondering if I can leverage aem OOTB workflow process called "Export to Target" in this.
Not really sure of how to call this "Export to Target" process on each Experience Fragment that we need to export to Target or is this possible at all?
Wondering if anyone has ever come across this requirement and succeeded.
Highly appreciate any tips or suggestions in this regard. Many Thanks in advance.
| [
"Whenever a Content Fragment is created or updated an OSGi event is triggered. All events are logged under http://localhost:4502/system/console/events. You could write a EventListener or EventHandler, get the path of the event, get the Resource and adapt it to com.adobe.cq.dam.cfm.ContentFragment. The topic for these events is \"com/day/cq/dam\" or in this constant.\nFrom the adapted Class or Resource you can get informations about the model and if it's the model you want to process.\nTo find all references I would also create an oak index and use SQL2 query to find all references.\nThe query would be something like this:\nselect [jcr:path], [jcr:score], * from [nt:base] as a where contains(*, '\"/content/dam/myReferencedModel\"') \n\nIf you have all referencing XF's you can kick off any workflow via WorkflowService:\n @Reference\n private WorkflowService workflowService;\n\n WorkflowSession wfSession = workflowService.getWorkflowSession(session);\n WorkflowModel wfModel = wfSession.getModel(\"/var/workflow/models/mymodel\");\n WorkflowData wfData = wfSession.newWorkflowData(\"JCR_PATH\", \"/payload\");\n wfSession.startWorkflow(wfModel, wfData);\n\n\n"
] | [
0
] | [] | [] | [
"adobe",
"aem",
"target"
] | stackoverflow_0074650524_adobe_aem_target.txt |
Q:
I copy code from jsfidddle and it doesn't work
I copied code (HTML, CSS, JavaScript) from jsfiddle into Notepad and it doesn't work. In jsfiddle it works fine. What is my mistake?
I guess the JavaScript code needs the <script></script> tags, which I put in. What else might I be getting wrong?
A:
jsFiddle does not include the html, head or body tags, which are important for parsing an HTML document.
All the HTML from jsFiddle should be pasted inside the <body></body> tag, and the CSS and JS should be placed in the same HTML page inside their respective style and script tags. It should then work fine.
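A minimal local page that pulls the fiddle's three panels together could look like this (the CDN script URLs are only examples; copy the exact ones listed under the fiddle's resources):
<!DOCTYPE html>
<html>
  <head>
    <style>
      /* paste the fiddle's CSS panel here */
    </style>
  </head>
  <body>
    <!-- paste the fiddle's HTML panel here -->

    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/mark.js/8.11.1/jquery.mark.min.js"></script>
    <script>
      // paste the fiddle's JavaScript panel here
    </script>
  </body>
</html>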
EDIT: After testing the jsFiddle locally I found it to be working as expected. Please don't forget to include the jQuery mark.js script before using the scripts from the jsFiddle
mark.js
Cheers!
| I copy code from jsfidddle and it doesn't work | I copy code (html CSS JavaScript) from jsfiddle in notepad and it doesn't work. In jsfiddle it works fine. What is my mistake?
I guess the JavaScript code needs the tags <script></script> which put them. In what else I may go wrong?
| [
"jsFiddle does not contains the html, head or body tags which is important for parsing an HTML document.\nAll html codes in jsFiddle should be pasted inside <body></body> tag along with the css and js placed inside the html page in their respective style or script tags. It should work fine.\nEDIT: After testing the jsFiddle locally found it to be working as expected. Please don't forget to include jQuery mark.js script before using the scripts in jsFiddle\nmark.js\nCheers! \n"
] | [
0
] | [] | [] | [
"css",
"html",
"javascript"
] | stackoverflow_0074671104_css_html_javascript.txt |
Q:
Why is focus-visible applied on page load
If I understand correctly, focus-visible is only applied to an element if it is focused because of a keyboard interaction. However, in the following example, if I programmatically focus the element on page load, the focus ring shows up as well.
https://codesandbox.io/s/focus-on-page-load-jy595z?file=/src/index.js
Is this behavior expected? If so, what's the best way to disable it while keeping the page accessible (stop showing the focus ring on page load, but still highlight the focused element during keyboard navigation)?
A:
I don't know why the heuristics work this way, but I figured one can keep track of the first interaction, and style with that in mind.
document.body.classList.add('zero-interactions');
const clearZeroInteractions = () => {
document.body.classList.remove('zero-interactions');
document.removeEventListener('keydown', clearZeroInteractions);
document.removeEventListener('mousedown', clearZeroInteractions);
}
document.addEventListener('keydown', clearZeroInteractions);
document.addEventListener('mousedown', clearZeroInteractions);
button:focus-visible {
outline: none;
}
body:not(.zero-interactions) button:focus-visible {
outline: red solid 3px;
}
A:
The behavior is expected.
You can keep using your desired functionality and customize it to look as intended; that is, hide the outline on page load and show it for subsequent focuses.
Solution:
Add the following class:
.on-focus:focus { outline: none; }
Apply class on-focus to the element you want to initially focus:
<span tabindex="0" class="on-focus"> focus on page load </span>
Using JavaScript, after the page is loaded:
const focus_element = document.querySelector("span");
focus_element.focus();
Now, on page load the desired element will be focused and its outline is hidden. If you start pressing Tab you will notice it begins from the span element.
What's left to do now is remove the class on-focus from the span element as soon as it loses focus.
You can do this by using an eventListener that runs only once then removes itself:
focus_element.addEventListener(
"blur",
(event) => event.target.classList.remove("on-focus"),
{ once: true }
);
A:
Yes, this behavior is expected. Browsers apply :focus-visible heuristically, and when focus is moved by script before the user has interacted with the page (as on page load), most browsers treat it like keyboard focus, so :focus-visible matches. If you want to suppress the ring regardless of how the element was focused, you can fall back to the plain :focus pseudo-class instead.
For example, the bluntest option is to remove the outline for every focused element:
:focus {
outline: none;
}
This removes the focus ring for every focused element, including those focused via keyboard interaction. Keep in mind that removing the focus ring altogether can make your page less accessible, since it becomes harder for keyboard users to see which element is currently focused, so consider using a subtle focus style instead of removing the ring completely.
If you want to only remove the focus ring on page load and not for other programmatic focus changes, you can use JavaScript to add and remove a class that applies the outline: none style when the page is first loaded. For example:
document.addEventListener('DOMContentLoaded', () => {
// Add the 'no-focus-ring' class to the document body
document.body.classList.add('no-focus-ring');
  // Remove the 'no-focus-ring' class on the first focus change after load
  // ('focusin' is used because plain 'focus' events do not bubble up to body)
  document.body.addEventListener('focusin', () => {
    document.body.classList.remove('no-focus-ring');
  });
});
Then, in your CSS, you can use the .no-focus-ring class to remove the focus ring when the page is first loaded:
.no-focus-ring :focus {
outline: none;
}
This approach will remove the focus ring on page load, but will still show it when an element is focused via keyboard interaction or programmatic focus changes after the page has loaded.
| Why is focus-visible applied on page load | If I understand correctly, focus-visible is only applied to an element if it is focused because of a keyboard interaction. However, in the following example, if I programmatically focus the element on page load, the focus ring shows up as well.
https://codesandbox.io/s/focus-on-page-load-jy595z?file=/src/index.js
Is this behavior expected? If so, what's the best way to disable it while keeping the page accessible (stop showing the focus ring on page load but still able to highlight the focused element with keyboard navigation)
| [
"I don't know why the heuristics work this way, but I figured one can keep track of the first interaction, and style with that in mind.\ndocument.body.classList.add('zero-interactions');\nconst clearZeroInteractions = () => {\n document.body.classList.remove('zero-interactions');\n document.removeEventListener('keydown', clearZeroInteractions);\n document.removeEventListener('mousedown', clearZeroInteractions);\n}\ndocument.addEventListener('keydown', clearZeroInteractions);\ndocument.addEventListener('mousedown', clearZeroInteractions);\n\nbutton:focus-visible {\n outline: none;\n}\n\nbody:not(.zero-interactions) button:focus-visible {\n outline: red solid 3px;\n}\n\n",
"The behavior is expected.\nYou can keep using your desired functionality and you can customize it to look as intended. that is, hide the outline on page load and show it for subsequent focuses.\nSolution:\n\nAdd the following class:\n\n.on-focus:focus { outline: none; }\n\n\nApply class on-focus to the element you want to initially focus:\n\n< span tabindex=\"0\" class=\"on-focus\"> focus on page load </ span>\n\n\nUsing JavaScript, after the page is loaded:\n\nconst focus_element = document.querySelector(\"span\");\nfocus_element.focus();\n\n\n\nNow, on page load the desired element will focus and his outline is hidden. If you start pressing tab you will notice it begins from the span element.\nWhat's left to do now is remove the class on-focus from the span element as soon as it looses focus.\nYou can do this by using an eventListener that runs only once then removes itself:\nfocus_element.addEventListener(\n \"blur\",\n (event) => event.target.classList.remove(\"on-focus\"),\n { once: true }\n);\n\n",
"Yes, this behavior is expected. When an element is focused programmatically, the :focus-visible pseudo-class is not applied to it. This means that if you want to style the focus ring differently for keyboard- versus programmatically-focused elements, you can do so by using the :focus pseudo-class instead.\nFor example, you could use the following CSS to remove the focus ring when an element is focused programmatically, but still show it when it is focused via keyboard interaction:\n:focus {\n outline: none;\n}\n\nThis will remove the focus ring for all elements that are focused programmatically, but will still show it for elements that are focused via keyboard interaction. Keep in mind that removing the focus ring altogether can make your page less accessible, as it can make it harder for keyboard users to see which element is currently focused. You may want to consider using a subtle focus style instead of completely removing the focus ring.\nIf you want to only remove the focus ring on page load and not for other programmatic focus changes, you can use JavaScript to add and remove a class that applies the outline: none style when the page is first loaded. For example:\ndocument.addEventListener('DOMContentLoaded', () => {\n // Add the 'no-focus-ring' class to the document body\n document.body.classList.add('no-focus-ring');\n\n // Remove the 'no-focus-ring' class when any element is focused\n document.body.addEventListener('focus', () => {\n document.body.classList.remove('no-focus-ring');\n });\n});\n\nThen, in your CSS, you can use the .no-focus-ring class to remove the focus ring when the page is first loaded:\n.no-focus-ring :focus {\n outline: none;\n}\n\nThis approach will remove the focus ring on page load, but will still show it when an element is focused via keyboard interaction or programmatic focus changes after the page has loaded.\n"
] | [
0,
0,
0
] | [] | [] | [
"focus",
"javascript"
] | stackoverflow_0072245017_focus_javascript.txt |
Q:
Creating folder if it does not exist, move files there
I am a Python newbie and have read countless answers here and in other sources on how to create folders if they do not exist and move files there. However, I still cannot get it to work.
So what I want to do is the following:
Keep my downloads folder clean. I want to run the script, it is supposed to move all files to matching extension name folders. If the folder already exists, it does not have to create it.
Problems: I want to be able to run the script as often as I want while keeping the newly created folders there. However, then the whole os.listdir part does not work because folders have no file extensions. I tried to solve this by leaving out folders, but that does not work either.
I would appreciate any help!
from os import scandir
import os
import shutil
basepath = r"C:\Users\me\Downloads\test"
for entry in scandir(basepath):
if entry.is_dir():
continue
files = os.listdir(r"C:\Users\me\Downloads\test")
ext = [f.rsplit(".")[1] for f in files]
ext_final = set(ext)
try:
[os.makedirs(e) for e in ext_final]
except:
print("Folder already exists!")
for file in files:
for e in ext_final:
if file.rsplit(".")[1]==e:
shutil.move(file,e)
A:
os.makedirs has a switch to create a folder if it does not exist.
use it like this:
os.makedirs(foldern_name, exist_ok=True)
so just replace that try...except part of code which is this:
try:
[os.makedirs(e) for e in ext_final]
except:
print("Folder already exists!")
with this:
for e in ext_final:
    os.makedirs(e, exist_ok=True)
A:
I tried my own approach ... It is kind of ugly but it gets the job done.
import os
import shutil
def sort_folder(fpath: str) -> None:
dirs = []
files = []
filetypes = []
for item in os.listdir(fpath):
if os.path.isfile(f"{fpath}\{item}"):
files.append(item)
else:
dirs.append(item)
for file in files:
filetype = os.path.splitext(file)[1]
filetypes.append(filetype)
for filetype in set(filetypes):
if not os.path.isdir(f"{fpath}\{filetype}"):
os.mkdir(f"{fpath}\{filetype}")
for (file, filetype) in zip(files, filetypes):
shutil.move(f"{fpath}\{file}", f"{fpath}\{filetype}")
if __name__ == '__main__':
# running the script
    sort_folder(r"C:\Users\me\Downloads\test")
A:
There are a few issues with your code that are preventing it from working as expected. Here are a few suggestions to help you fix the problems:
You are using the scandir function to iterate over the files and directories in the basepath directory, but you are not using the entries returned by this function. Instead, you are using the os.listdir function to get a list of all the files in the directory. This means that your code is not processing the entries returned by the scandir function, and it is not checking if the entries are files or directories.
You are creating a list of file extensions using the files list, but this list contains both files and directories, so the resulting list of extensions will not be accurate. Instead, you should use the entry.name property of the entries returned by the scandir function to get the names of the files and directories, and then you can split the names to get the file extensions.
You are using a try/except block to catch any errors that might occur when creating the directories for the file extensions. However, the bare except swallows every exception and just prints a message, so you cannot tell a "directory already exists" error apart from a real failure. It is cleaner to pass exist_ok=True to os.makedirs and drop the try/except entirely.
Here is a modified version of your code that fixes these problems and should work as expected:
import os
import shutil

basepath = r"C:\Users\me\Downloads\test"

for entry in os.scandir(basepath):
    if entry.is_file():
        # take the text after the last dot as the extension
        file_ext = entry.name.rsplit(".", 1)[1]
        target_dir = os.path.join(basepath, file_ext)
        if not os.path.exists(target_dir):
            os.makedirs(target_dir)
        shutil.move(entry.path, target_dir)
This code uses the scandir function to iterate over the entries in the basepath directory, and it only processes the entries that are files (not directories). For each file, it gets the file extension from the file name, and it creates a directory with the same name as the file extension if it does not already exist. Then, it moves the file to the directory using the shutil.move function.
| Creating folder if it does not exist, move files there | I am python newbie and have read countless answers here and in other sources, on how to create folders if they do not exist and move files there. However, still I cannot bring it to work.
So what I want to do is the following:
Keep my downloads folder clean. I want to run the script, it is supposed to move all files to matching extension name folders. If the folder already exists, it does not have to create it.
Problems: I want to be able to run the script as often as I want while keeping the newly created folders there. However, then the whole os.listdir part does not work because folders have no file extensions. I tried to solve this by leaving out folders but it does not work as well.
I would appreciate any help!
from os import scandir
import os
import shutil
basepath = r"C:\Users\me\Downloads\test"
for entry in scandir(basepath):
if entry.is_dir():
continue
files = os.listdir(r"C:\Users\me\Downloads\test")
ext = [f.rsplit(".")[1] for f in files]
ext_final = set(ext)
try:
[os.makedirs(e) for e in ext_final]
except:
print("Folder already exists!")
for file in files:
for e in ext_final:
if file.rsplit(".")[1]==e:
shutil.move(file,e)
| [
"os.makedirs has a switch to create a folder if it does not exist.\nuse it like this:\nos.makedirs(foldern_name, exist_ok=True)\n\nso just replace that try...except part of code which is this:\n\ntry:\n\n [os.makedirs(e) for e in ext_final]\n\nexcept:\n\n print(\"Folder already exists!\")\n\nwith this:\nfor e in ext_final:\n os.makedirs(e, exist_os=True)\n\n",
"I tried my own approach ... It is kind of ugly but it gets the job done.\n\nimport os\nimport shutil\n\n\n\ndef sort_folder(fpath: str) -> None:\n\n dirs = []\n files = []\n filetypes = []\n\n for item in os.listdir(fpath):\n if os.path.isfile(f\"{fpath}\\{item}\"):\n files.append(item)\n else:\n dirs.append(item)\n\n for file in files:\n filetype = os.path.splitext(file)[1]\n filetypes.append(filetype)\n\n for filetype in set(filetypes):\n if not os.path.isdir(f\"{fpath}\\{filetype}\"):\n os.mkdir(f\"{fpath}\\{filetype}\")\n\n for (file, filetype) in zip(files, filetypes):\n shutil.move(f\"{fpath}\\{file}\", f\"{fpath}\\{filetype}\")\n\n\n\nif __name__ == '__main__':\n# running the script\n\n sort_folder(fpath)\n\n",
"There are a few issues with your code that are preventing it from working as expected. Here are a few suggestions to help you fix the problems:\n\nYou are using the scandir function to iterate over the files and directories in the basepath directory, but you are not using the entries returned by this function. Instead, you are using the os.listdir function to get a list of all the files in the directory. This means that your code is not processing the entries returned by the scandir function, and it is not checking if the entries are files or directories.\n\nYou are creating a list of file extensions using the files list, but this list contains both files and directories, so the resulting list of extensions will not be accurate. Instead, you should use the entry.name property of the entries returned by the scandir function to get the names of the files and directories, and then you can split the names to get the file extensions.\n\nYou are using the try and except statements to catch any errors that might occur when creating the directories for the file extensions. However, the try and except statements are not being used correctly. The try statement should be used to run the code that might throw an error, and the except statement should be used to handle the error if it occurs. In your code, the try and except statements are not being used to handle any errors, so they are not doing anything.\nHere is a modified version of your code that fixes these problems and should work as expected:\nimport os\nimport shutil\nbasepath = r\"C:\\Users\\me\\Downloads\\test\"\nfor entry in scandir(basepath):\nif entry.is_file():\nfile_ext = entry.name.rsplit(\".\")[1]\nif not os.path.exists(file_ext):\nos.makedirs(file_ext)\nshutil.move(entry.path, file_ext)\n\n\nThis code uses the scandir function to iterate over the entries in the basepath directory, and it only processes the entries that are files (not directories). For each file, it gets the file extension from the file name, and it creates a directory with the same name as the file extension if it does not already exist. Then, it moves the file to the directory using the shutil.move function.\n"
] | [
0,
0,
0
] | [] | [] | [
"operating_system",
"python",
"shutil"
] | stackoverflow_0074670080_operating_system_python_shutil.txt |
Q:
Visual Studio 2022 editorconfig not applied on cleanup
I'm having a bit of trouble understanding how .editorconfig should work.
I created the .editorconfig file at the solution level
enforced the file scoped namespaces in it
I correctly see the warning in my .cs file for the above rule
I would have expected that to be applied automatically when running Visual Studio's code cleanup, but nothing happens
Am I understanding something wrong? Shouldn't VS code cleanup refactor files based on .editorconfig rules?
Moreover, if I try to open the .editorconfig file I get an empty UI in VS.
What am i missing?
A:
Visual Studio's Code Cleanup feature runs a set of predefined tasks, as configured in the Code Cleanup profile. Most of these tasks correspond to specific IDE settings, some of which may be configured by .editorconfig.
Among those is Format Document, which takes a lot of the .editorconfig settings into account when applying formatting in one big operation. However, Format Document does not make refactoring changes to the existing code. It wouldn't change the overall structure of the document.
There is a Code Fix (or lightbulb, or suggested action, or... it goes by lots of names) that will appear on the namespace block. That will provide a gesture to make the edit in that file, or across the entire project or solution (each file will be modified as applicable, as .editorconfig applies to directory hierarchies, and may not be present for all projects in the solution).
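For reference, the .editorconfig entries that produce that file-scoped-namespace warning might look like this (rule and diagnostic names per the .NET code-style options; the severity values are only an example):
root = true

[*.cs]
csharp_style_namespace_declarations = file_scoped:warning
# or pin the analyzer severity explicitly
dotnet_diagnostic.IDE0161.severity = warning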
moreover, if I try to open the .editorconfig file I get an empty UI in VS..
This sounds like a bug and should be reported using the VS Feedback tool.
A:
I had a .editorconfig in a top-level directory from VS2019 that stopped working and wouldn't load correctly when I brought the solution to VS2022. In my case, it turned out that I needed to add
root = true
to the top of the .editorconfig file.
| Visual Studio 2022 editorconfig not applied on cleanup | I'm having a bit of trouble understanding how .editorconfig should work.
I created the .editorconfig file at the solution level
enforced the file scoped namespaces in it
I correctly see the warning in my .cs file for the above rule
I would have expected that to be applied automatically when running Visual Studio's code cleanup, but nothing happens
Am i understanding something wrong? shouldn't vs code cleanup refactor files based on .editorconfig rules?
moreover, if i try to open the .editorconfig file i get an empty UI in VS.
What am i missing?
| [
"Visual Studio's Code Cleanup feature runs a set of predefined tasks, as configured in the Code Cleanup profile. Most of these tasks correspond to specific IDE settings, some of which may be configured by .editorconfig.\nAmong those is Format Document, which uses takes a lot of the .editorconfig settings into account when applying formatting all in one big operation. However, Format Document does not make refactoring changes to the existing code. It wouldn't change the overall structure of the document.\nThere is a Code Fix (or lightbulb, or suggested action, or... it goes by lots of names) that will appear on the namespace block. That will provide a gesture to make the edit in that file, or across the entire project or solution (each file will be modified as applicable, as .editorconfig applies to directory hierarchies, and may not be present for all projects in the solution).\n\nmoreover, if I try to open the .editorconfig file I get an empty UI in VS..\n\nThis sounds like a bug and should be reported using the VS Feedback tool.\n",
"I had a .editorconfig in a top-level directory from VS2019 that stopped working and wouldn't load correctly when I brought the solution to VS2022. In my case, it turned out that I needed to add\nroot = true\n\nto the top of the .editorconfig file.\n"
] | [
1,
0
] | [] | [] | [
"editorconfig",
"visual_studio",
"visual_studio_2022"
] | stackoverflow_0072003973_editorconfig_visual_studio_visual_studio_2022.txt |
Q:
pgbouncer windows 10 - client_login_timeout(server_down) error
I am trying to set up pgbouncer on my local machine. I have the standard configuration file (didn't change anything after installation) with these entries:
postgres = host=127.0.0.1 port=5432
listen_addr = *
listen_port = 6432
auth_type = trust //also tested with md5 and plain
My postgresql (ver 9.4) is running on port 5432. When I execute
psql -U postgres -p 5432 -d postgres
I can successfully connect. Now I am trying to connect to pgbouncer:
psql -U postgres -p 6432 -d postgres
After providing the password, pgbouncer cannot connect (it hangs for 60 seconds) and then times out with the error:
psql: ERROR: client_login_timeout (server down)
Pgbouncer logs:
2017-05-05 00:17:27.084 14696 LOG File descriptor limit: -1 (H:-1), max_client_conn: 100, max fds possible: 130
2017-05-05 00:17:27.104 14696 LOG listening on ::/6432
2017-05-05 00:17:27.105 14696 LOG listening on 0.0.0.0:6432
2017-05-05 00:17:27.106 14696 LOG process up: pgbouncer 1.7.2, libevent 2.0.21-stable (win32), adns: evdns2, tls: OpenSSL 1.0.2k 26 Jan 2017
2017-05-05 00:18:27.104 14696 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
2017-05-05 00:18:51.852 14696 LOG C-009B8FE0: postgres/postgres@[::1]:55878 login attempt: db=postgres user=postgres tls=no
2017-05-05 00:18:51.854 14696 WARNING
2017-05-05 00:18:51.854 14696 LOG S-009EF248: postgres/[email protected]:5432 closing because: connect failed (age=0)
2017-05-05 00:19:06.929 14696 WARNING
2017-05-05 00:19:06.929 14696 LOG S-009EF248: postgres/[email protected]:5432 closing because: connect failed (age=0)
2017-05-05 00:19:21.949 14696 WARNING
2017-05-05 00:19:21.950 14696 LOG S-009EF248: postgres/[email protected]:5432 closing because: connect failed (age=0)
2017-05-05 00:19:27.105 14696 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
2017-05-05 00:19:36.969 14696 WARNING
2017-05-05 00:19:36.970 14696 LOG S-009EF248: postgres/[email protected]:5432 closing because: connect failed (age=0)
2017-05-05 00:19:51.990 14696 LOG C-009B8FE0: postgres/postgres@[::1]:55878 closing because: client_login_timeout (server down) (age=60)
2017-05-05 00:19:51.991 14696 WARNING C-009B8FE0: postgres/postgres@[::1]:55878 Pooler Error: client_login_timeout (server down)
2017-05-05 00:20:27.105 14696 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
hba.conf
host all all 127.0.0.1/32 md5
host all all ::1/128 md5
postgresql.conf
listen_addresses = '*'
What am I doing wrong?
EDIT 1:
Struggling to make this work I tried:
Connecting to other db versions: 9.5 and 9.6
Since I saw in the logs that various ports are used, I opened the whole pgbouncer app in the firewall; before that I had opened only port 6432
I thought maybe pgbouncer has problems connecting to localhost so I tried to connect to remote server
Even disabled antivirus
None of above worked. Always the same log shows up (port 5434 is 9.5 version):
2017-05-05 22:26:01.899 8008 LOG C-010C8FF0: postgres/postgres@[::1]:61687 login attempt: db=postgres user=postgres tls=no
2017-05-05 22:26:01.899 8008 LOG C-010C8FF0: postgres/postgres@[::1]:61687 closing because: client unexpected eof (age=0)
2017-05-05 22:26:04.753 8008 LOG C-010C8FF0: postgres/postgres@[::1]:61690 login attempt: db=postgres user=postgres tls=no
2017-05-05 22:26:04.753 8008 WARNING
2017-05-05 22:26:04.753 8008 LOG S-010FF258: postgres/[email protected]:5434 closing because: connect failed (age=0)
2017-05-05 22:26:19.803 8008 WARNING
2017-05-05 22:26:19.803 8008 LOG S-010FF258: postgres/[email protected]:5434 closing because: connect failed (age=0)
2017-05-05 22:26:35.086 8008 WARNING
2017-05-05 22:26:35.086 8008 LOG S-010FF258: postgres/[email protected]:5434 closing because: connect failed (age=0)
2017-05-05 22:26:41.581 8008 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
2017-05-05 22:26:50.359 8008 WARNING
2017-05-05 22:26:50.359 8008 LOG S-010FF258: postgres/[email protected]:5434 closing because: connect failed (age=0)
2017-05-05 22:27:04.961 8008 LOG C-010C8FF0: postgres/postgres@[::1]:61690 closing because: client_login_timeout (server down) (age=60)
2017-05-05 22:27:04.961 8008 WARNING C-010C8FF0: postgres/postgres@[::1]:61690 Pooler Error: client_login_timeout (server down)
Can someone explain why it tries to connect on port 61687? On this port it gets unexpected eof.
Here is the whole pgbouncer.ini (lines that are not commented out):
[databases]
postgres = host=127.0.0.1 port=5434 dbname=postgres
[pgbouncer]
logfile = C:\Program Files\PostgreSQL\PgBouncer\log\pgbouncer.log
pidfile = C:\Program Files\PostgreSQL\PgBouncer\log\pgbouncer.pid
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = C:\Program Files\PostgreSQL\PgBouncer\etc\userlist.txt
admin_users = postgres
stats_users = postgres
pool_mode = session
max_client_conn = 100
default_pool_size = 20
A:
For me it was an issue with the port number in the [databases] section.
The default (if not set) is 5432.
On my machine (Ubuntu 22 and PostgreSQL 15) the port is 5433.
I set it explicitly in the [databases] section, and now it works perfectly.
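For reference, a minimal [databases] entry of the kind described, with the port made explicit (substitute whatever port your postgres instance actually listens on), looks like this:
[databases]
postgres = host=127.0.0.1 port=5433 dbname=postgres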
| pgbouncer windows 10 - client_login_timeout(server_down) error | I am trying to setup pgbouncer on my local machine. I have standard (didnt change anything after installation) configuration file with entries:
postgres = host=127.0.0.1 port=5432
listen_addr = *
listen_port = 6432
auth_type = trust //also tested with md5 and plain
My postgresql (ver 9.4) is running on port 5432. When I execute
psql -U postgres -p 5432 -d postgres
i can successfully connect. Now i am trying to connect to pgbouncer
psql -U postgres -p 6432 -d postgres
after providing password pgbouncer cannot connect (it hangs for 60 sec) and then timeouts with error
psql: ERROR: client_login_timeout (server down)
Pgbouncer logs:
2017-05-05 00:17:27.084 14696 LOG File descriptor limit: -1 (H:-1), max_client_conn: 100, max fds possible: 130
2017-05-05 00:17:27.104 14696 LOG listening on ::/6432
2017-05-05 00:17:27.105 14696 LOG listening on 0.0.0.0:6432
2017-05-05 00:17:27.106 14696 LOG process up: pgbouncer 1.7.2, libevent 2.0.21-stable (win32), adns: evdns2, tls: OpenSSL 1.0.2k 26 Jan 2017
2017-05-05 00:18:27.104 14696 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
2017-05-05 00:18:51.852 14696 LOG C-009B8FE0: postgres/postgres@[::1]:55878 login attempt: db=postgres user=postgres tls=no
2017-05-05 00:18:51.854 14696 WARNING
2017-05-05 00:18:51.854 14696 LOG S-009EF248: postgres/[email protected]:5432 closing because: connect failed (age=0)
2017-05-05 00:19:06.929 14696 WARNING
2017-05-05 00:19:06.929 14696 LOG S-009EF248: postgres/[email protected]:5432 closing because: connect failed (age=0)
2017-05-05 00:19:21.949 14696 WARNING
2017-05-05 00:19:21.950 14696 LOG S-009EF248: postgres/[email protected]:5432 closing because: connect failed (age=0)
2017-05-05 00:19:27.105 14696 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
2017-05-05 00:19:36.969 14696 WARNING
2017-05-05 00:19:36.970 14696 LOG S-009EF248: postgres/[email protected]:5432 closing because: connect failed (age=0)
2017-05-05 00:19:51.990 14696 LOG C-009B8FE0: postgres/postgres@[::1]:55878 closing because: client_login_timeout (server down) (age=60)
2017-05-05 00:19:51.991 14696 WARNING C-009B8FE0: postgres/postgres@[::1]:55878 Pooler Error: client_login_timeout (server down)
2017-05-05 00:20:27.105 14696 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
hba.conf
host all all 127.0.0.1/32 md5
host all all ::1/128 md5
postgresql.conf
listen_addresses = '*'
What I am doing wrong?
EDIT 1:
Struggling to make this work I tried:
Connecting to other db versions: 9.5 and 9.6
Since I saw in logs that various ports are used I opened whole pgbouncer app in firewall, before that I opened only 6432 port
I thought maybe pgbouncer has problems connecting to localhost so I tried to connect to remote server
Even disabled antivirus
None of above worked. Always the same log shows up (port 5434 is 9.5 version):
2017-05-05 22:26:01.899 8008 LOG C-010C8FF0: postgres/postgres@[::1]:61687 login attempt: db=postgres user=postgres tls=no
2017-05-05 22:26:01.899 8008 LOG C-010C8FF0: postgres/postgres@[::1]:61687 closing because: client unexpected eof (age=0)
2017-05-05 22:26:04.753 8008 LOG C-010C8FF0: postgres/postgres@[::1]:61690 login attempt: db=postgres user=postgres tls=no
2017-05-05 22:26:04.753 8008 WARNING
2017-05-05 22:26:04.753 8008 LOG S-010FF258: postgres/[email protected]:5434 closing because: connect failed (age=0)
2017-05-05 22:26:19.803 8008 WARNING
2017-05-05 22:26:19.803 8008 LOG S-010FF258: postgres/[email protected]:5434 closing because: connect failed (age=0)
2017-05-05 22:26:35.086 8008 WARNING
2017-05-05 22:26:35.086 8008 LOG S-010FF258: postgres/[email protected]:5434 closing because: connect failed (age=0)
2017-05-05 22:26:41.581 8008 LOG Stats: 0 req/s, in 0 b/s, out 0 b/s,query 0 us
2017-05-05 22:26:50.359 8008 WARNING
2017-05-05 22:26:50.359 8008 LOG S-010FF258: postgres/[email protected]:5434 closing because: connect failed (age=0)
2017-05-05 22:27:04.961 8008 LOG C-010C8FF0: postgres/postgres@[::1]:61690 closing because: client_login_timeout (server down) (age=60)
2017-05-05 22:27:04.961 8008 WARNING C-010C8FF0: postgres/postgres@[::1]:61690 Pooler Error: client_login_timeout (server down)
Can someone explain why it tries to connect on port 61687? On this port it gets unexpected eof.
Here is the whole pgbouncer.ini (lines that are not commented out):
[databases]
postgres = host=127.0.0.1 port=5434 dbname=postgres
[pgbouncer]
logfile = C:\Program Files\PostgreSQL\PgBouncer\log\pgbouncer.log
pidfile = C:\Program Files\PostgreSQL\PgBouncer\log\pgbouncer.pid
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = C:\Program Files\PostgreSQL\PgBouncer\etc\userlist.txt
admin_users = postgres
stats_users = postgres
pool_mode = session
max_client_conn = 100
default_pool_size = 20
| [
"For me it was an issue of Port number in the [Database] section.\nThe default (if not set) is 5432.\nOn my machine, Ubuntu 22 and PostgreSQL 15 the port is 5433.\nI set it explicitly in the [Database] section, now it works perfectly.\n"
] | [
0
] | [] | [] | [
"pgbouncer",
"postgresql"
] | stackoverflow_0043793775_pgbouncer_postgresql.txt |
Q:
Run Webpack Dev Server programatically and wait for bundle to finish
I am trying to do something which I feel is very simple, yet the internet cannot give me a simple answer.
I want to run Webpack Dev Server programmatically. I want to wait for the bundle to finish BEFORE I call listen.
Here is some code.
const webpack = require('webpack');
const WebpackDevServer = require('webpack-dev-server');
const config = require('./webpack.config');
const compiler = webpack(config);
const server = new WebpackDevServer(options, compiler);
// Wait here. I can do a promise, I can do just about anything. I JUST want to wait here.
return {
start: () => {
return new Promise((resolve, reject) => {
server.listen(port, 'localhost', async (err) => {
if (err) {
reject(err)
return;
}
resolve();
});
});
}
};
There has to be a way to have a simple callback from the compiler that says if it was successful or not. Maybe something from the dev server indicating it is ready and there were no bundling issues.
I have seen a LOT about jacking into plugins. TBH I did not understand it, and it all looked janky at best.
If that is my only option, can someone please explain the various parts of what is going on when you "tap" into something? I'm sorry to the Webpack team, but the documentation could be better.
Also, I could NOT find any information on running the compiler or the dev server programmatically. What I have above is what I have cobbled together from various online posts.
A:
I'm not sure if this is the correct way to do do this but here is what I ended up doing.
import webpack from 'webpack';
import WebpackDevServer from 'webpack-dev-server';
import Promise from 'bluebird';
const compiler = webpack(config);
const server = new WebpackDevServer(options, compiler);
await Promise.fromCallback((cb) => {
server.listen(port, 'localhost', cb);
});
await Promise.fromCallback((cb) => {
server.middleware.waitUntilValid((stats) => {
if (stats.hasErrors()) {
cb(stats.compilation.getErrors());
} else {
cb();
}
});
});
I got here by finding that the middleware lets you wait until it is "valid". Valid is a terrible word here; in this context it means "done". If you want to know whether the bundling was successful, you need to check whether the stats parameter's hasErrors() returns true.
The dev server does not initialize the middleware until the server is started, so you have to start it first and THEN access the middleware and its waitUntilValid callback.
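An alternative is to tap the compiler's done hook directly, which is roughly what the "tap" terminology refers to. A sketch (the hook names are from webpack's plugin API; the promise wrapper itself is just an illustration):
function waitForFirstBuild(compiler) {
  return new Promise((resolve, reject) => {
    // 'done' fires after each compilation finishes; the promise settles on the first one
    compiler.hooks.done.tap('WaitForFirstBuild', (stats) => {
      if (stats.hasErrors()) {
        reject(stats.compilation.errors);
      } else {
        resolve(stats);
      }
    });
  });
}

// usage: const firstBuild = waitForFirstBuild(compiler);
//        then start the dev server (which kicks off the compilation) and await firstBuild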
| Run Webpack Dev Server programatically and wait for bundle to finish | I am trying to do something which I feel is very simple, yet the internet cannot give me a simple answer.
I want to run Webpack Dev Server programmatically. I want to wait for the bundle to finish BEFORE I call listen.
Here is some code.
const webpack = require('webpack');
const WebpackDevServer = require('webpack-dev-server');
const config = require('./webpack.config');
const compiler = webpack(config);
const server = new WebpackDevServer(options, compiler);
// Wait here. I can do a promise, I can do just about anything. I JUST want to wait here.
return {
start: () => {
return new Promise((resolve, reject) => {
server.listen(port, 'localhost', async (err) => {
if (err) {
reject(err)
return;
}
resolve();
});
});
}
};
There has to be a way to have a simple callback from the compiler that says if it was successful or not. Maybe something from the dev server indicating it is ready and there were no bundling issues.
I have seen a LOT about jacking into plugins. TBH I did not understand and it all looked jenky at best.
If that is my only option can/will someone please explains to be the various parts of the what is going on when you "tap" into something. I'm sorry to the Webpack team, but the documentation could be better.
Also I could NOT find any information on running the compiler or the dev server programmatically. What I have above is what I have cobbled together from what I have found only from various online posts.
| [
"I'm not sure if this is the correct way to do do this but here is what I ended up doing.\nimport webpack from 'webpack';\nimport WebpackDevServer from 'webpack-dev-server';\nimport Promise from 'bluebird';\n\nconst compiler = webpack(config);\n\nconst server = new WebpackDevServer(options, compiler);\n\nawait Promise.fromCallback((cb) => {\n server.listen(port, 'localhost', cb);\n});\n\nawait Promise.fromCallback((cb) => {\n server.middleware.waitUntilValid((stats) => {\n if (stats.hasErrors()) {\n cb(stats.compilation.getErrors());\n } else {\n cb();\n }\n });\n});\n\nI got here by finding that the middleware allows to wait until valid. Valid is a terrible word here. In this context Valid means \"done\". If you want to know if the bundling was successful you need to check if the stats params \"hasErrors()\".\nThe dev server does not initialize the middleware until the server is started. So you have to start it first THEN access the middleware and it's waitUntilValid callback.\n"
] | [
0
] | [] | [] | [
"webpack",
"webpack_dev_server"
] | stackoverflow_0074648018_webpack_webpack_dev_server.txt |
Q:
How to read from a written file?
I need to use the changed text document (from a function changed_document) in a function called lines, but I cannot simply use the changed list or string; it gives me the error "AttributeError: 'str' object has no attribute 'readlines'". So I've tried to write the changed text into a new text file and then read that to use in the lines function, but it doesn't work: I cannot read the newly written text file, it just prints empty lines.
def reading():
doc = open("C:/Users/s.txt", "r", encoding= 'utf-8')
docu = doc
return docu
def longest_word_place(document):
words = document.read().split()
i = 0
max = 0
max_place = 0
for i in range(len(words)):
if len(words[i]) > max:
max = len(words[i])
max_place = i
return max_place
def changed_document (document):
list = []
for line in document:
for symbol in line:
if symbol.isnumeric():
symbol = ' '
if symbol in "#,.;«\³][:¡|>^' '<?+=_-)(*&^%$£!`":
symbol = ' '
list.append(symbol)
document_changed =''.join([str(item) for item in list])
return document_changed
def lines(document):
lines = document.readlines()
max_word = ''
max_line = 0
for line_index, every_line in enumerate(lines, 1):
line_words = every_line.strip().split()
for each_word in line_words:
if len(each_word) > len(max_word):
max_word = each_word
max_line = line_index
print(f"{max_word = }, {max_line = }")
document = reading()
ch_dok = changed_document(document)
text_file = open("C:/Users/changed_doc.txt", "w+", encoding= 'utf-8')
text_file.write(ch_dok)
text_file.close
doc1 = open("C:/Users/changed_doc.txt", "r", encoding= 'utf-8')
for line1 in doc1:
print(line1)
A:
In "text_file.close" you missed the parenthesis, so the file is not closed (just the function itself is returned, not called).
Perhaps this is the issue..?
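A with block sidesteps this class of bug, because the file is flushed and closed automatically when the block exits. A minimal sketch using the same paths as above:
with open("C:/Users/changed_doc.txt", "w+", encoding="utf-8") as text_file:
    text_file.write(ch_dok)   # flushed and closed when the block ends

with open("C:/Users/changed_doc.txt", "r", encoding="utf-8") as doc1:
    for line1 in doc1:
        print(line1)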
| How to read from a written file? | I need to use changed text document (from a function changed_document) in a function called lines. But I cannot just simply use the changed list or string. It shows me an error that "AttributeError: 'str' object has no attribute 'readlines'". So I've tried to write the changed text in to a new text file and then read it to use in the line function. But it doesn't work. I cannot read that newly written text file. It prints just empty lines.
def reading():
doc = open("C:/Users/s.txt", "r", encoding= 'utf-8')
docu = doc
return docu
def longest_word_place(document):
words = document.read().split()
i = 0
max = 0
max_place = 0
for i in range(len(words)):
if len(words[i]) > max:
max = len(words[i])
max_place = i
return max_place
def changed_document (document):
list = []
for line in document:
for symbol in line:
if symbol.isnumeric():
symbol = ' '
if symbol in "#,.;«\³][:¡|>^' '<?+=_-)(*&^%$£!`":
symbol = ' '
list.append(symbol)
document_changed =''.join([str(item) for item in list])
return document_changed
def lines(document):
lines = document.readlines()
max_word = ''
max_line = 0
for line_index, every_line in enumerate(lines, 1):
line_words = every_line.strip().split()
for each_word in line_words:
if len(each_word) > len(max_word):
max_word = each_word
max_line = line_index
print(f"{max_word = }, {max_line = }")
document = reading()
ch_dok = changed_document(document)
text_file = open("C:/Users/changed_doc.txt", "w+", encoding= 'utf-8')
text_file.write(ch_dok)
text_file.close
doc1 = open("C:/Users/changed_doc.txt", "r", encoding= 'utf-8')
for line1 in doc1:
print(line1)
| [
"In \"text_file.close\" you missed the parenthesis, so the file is not closed (just the function itself is returned, not called).\nPerhaps this is the issue..?\n"
] | [
0
] | [] | [] | [
"file_writing",
"function",
"list",
"python_3.x",
"text_files"
] | stackoverflow_0074669744_file_writing_function_list_python_3.x_text_files.txt |
Q:
ERROR 5292 --- [ restartedMain] o.s.boot.SpringApplication : Application run failed
I have been getting this error for a long time and I don't understand what to do. I tried to fix it by adding dependencies, but it's not working and keeps giving the same error again and again. Please help me. Here is my log.
18:36:52.182 [Thread-0] DEBUG
org.springframework.boot.devtools.restart.classloader.RestartClassLoader - Created
RestartClassLoader
org.springframework.boot.devtools.restart.classloader.RestartClassLoader@2e89701c
2022-03-14 18:36:52.931 INFO 5292 --- [ restartedMain] com.nominajava.NominaJavaApplication
: Starting NominaJavaApplication using Java 1.8.0_251 on DESKTOP-G2A4MMS with PID 5292
(C:\Users\AlejandroPC\Desktop\Infocent\Backend\Nomina V11 backend\semilla-nomina-java-
v11\nomina-java\target\classes started by AlejandroPC in C:\Users\AlejandroPC\Desktop\Infocent\Backend\Nomina V11 backend\semilla-nomina-java-v11)
2022-03-14 18:36:52.932 INFO 5292 --- [ restartedMain] com.nominajava.NominaJavaApplication : No active profile set, falling back to default profiles: default
2022-03-14 18:36:53.048 INFO 5292 --- [ restartedMain] o.s.b.devtools.restart.ChangeableUrls : The Class-Path manifest attribute in C:\Users\AlejandroPC\.m2\repository\com\oracle\database\jdbc\ojdbc8\21.3.0.0\ojdbc8-21.3.0.0.jar referenced one or more files that do not exist: file:/C:/Users/AlejandroPC/.m2/repository/com/oracle/database/jdbc/ojdbc8/21.3.0.0/oraclepki.jar
2022-03-14 18:36:53.048 INFO 5292 --- [ restartedMain] .e.DevToolsPropertyDefaultsPostProcessor : Devtools property defaults active! Set 'spring.devtools.add-properties' to 'false' to disable
2022-03-14 18:36:54.497 ERROR 5292 --- [ restartedMain] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Error processing condition on org.springframework.boot.autoconfigure.context.PropertyPlaceholderAutoConfiguration.propertySourcesPlaceholderConfigurer
at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:60) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.context.annotation.ConditionEvaluator.shouldSkip(ConditionEvaluator.java:108) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForBeanMethod(ConfigurationClassBeanDefinitionReader.java:193) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForConfigurationClass(ConfigurationClassBeanDefinitionReader.java:153) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitions(ConfigurationClassBeanDefinitionReader.java:129) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:343) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:247) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanDefinitionRegistryPostProcessors(PostProcessorRegistrationDelegate.java:311) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:112) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:746) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:564) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:732) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:414) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:302) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1303) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1292) [spring-boot-2.6.3.jar:2.6.3]
at com.nominajava.NominaJavaApplication.main(NominaJavaApplication.java:17) [classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_251]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_251]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_251]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_251]
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) [spring-boot-devtools-2.6.3.jar:2.6.3]
Caused by: java.lang.IllegalStateException: Failed to introspect Class [org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration] from ClassLoader [sun.misc.Launcher$AppClassLoader@18b4aac2]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:481) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.util.ReflectionUtils.doWithMethods(ReflectionUtils.java:358) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.util.ReflectionUtils.getUniqueDeclaredMethods(ReflectionUtils.java:414) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.lambda$getTypeForFactoryMethod$2(AbstractAutowireCapableBeanFactory.java:765) ~[spring-beans-5.3.15.jar:5.3.15]
at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660) ~[na:1.8.0_251]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryMethod(AbstractAutowireCapableBeanFactory.java:764) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.determineTargetType(AbstractAutowireCapableBeanFactory.java:703) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.predictBeanType(AbstractAutowireCapableBeanFactory.java:674) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractBeanFactory.isFactoryBean(AbstractBeanFactory.java:1670) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doGetBeanNamesForType(DefaultListableBeanFactory.java:570) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:542) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.collectBeanNamesForType(OnBeanCondition.java:238) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getBeanNamesForType(OnBeanCondition.java:231) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getBeanNamesForType(OnBeanCondition.java:221) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getMatchingBeans(OnBeanCondition.java:169) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getMatchOutcome(OnBeanCondition.java:144) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:47) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
... 21 common frames omitted
Caused by: java.lang.NoClassDefFoundError: javax/servlet/Filter
at java.lang.Class.getDeclaredMethods0(Native Method) ~[na:1.8.0_251]
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) ~[na:1.8.0_251]
at java.lang.Class.getDeclaredMethods(Class.java:1975) ~[na:1.8.0_251]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:463) ~[spring-core-5.3.15.jar:5.3.15]
... 37 common frames omitted
Caused by: java.lang.ClassNotFoundException: javax.servlet.Filter
at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[na:1.8.0_251]
at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[na:1.8.0_251]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) ~[na:1.8.0_251]
at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[na:1.8.0_251]
... 41 common frames omitted
2022-03-14 18:36:54.501 WARN 5292 --- [ restartedMain] o.s.boot.SpringApplication : Unable to close ApplicationContext
java.lang.IllegalStateException: Failed to introspect Class [org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration] from ClassLoader [sun.misc.Launcher$AppClassLoader@18b4aac2]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:481) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.util.ReflectionUtils.doWithMethods(ReflectionUtils.java:358) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.util.ReflectionUtils.getUniqueDeclaredMethods(ReflectionUtils.java:414) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.lambda$getTypeForFactoryMethod$2(AbstractAutowireCapableBeanFactory.java:765) ~[spring-beans-5.3.15.jar:5.3.15]
at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660) ~[na:1.8.0_251]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryMethod(AbstractAutowireCapableBeanFactory.java:764) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.determineTargetType(AbstractAutowireCapableBeanFactory.java:703) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.predictBeanType(AbstractAutowireCapableBeanFactory.java:674) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractBeanFactory.isFactoryBean(AbstractBeanFactory.java:1670) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doGetBeanNamesForType(DefaultListableBeanFactory.java:570) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:542) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeansOfType(DefaultListableBeanFactory.java:667) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeansOfType(DefaultListableBeanFactory.java:659) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.context.support.AbstractApplicationContext.getBeansOfType(AbstractApplicationContext.java:1300) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.boot.SpringApplication.getExitCodeFromMappedException(SpringApplication.java:864) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.getExitCodeFromException(SpringApplication.java:852) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.handleExitCode(SpringApplication.java:839) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.handleRunFailure(SpringApplication.java:780) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1303) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1292) [spring-boot-2.6.3.jar:2.6.3]
at com.nominajava.NominaJavaApplication.main(NominaJavaApplication.java:17) [classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_251]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_251]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_251]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_251]
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) [spring-boot-devtools-2.6.3.jar:2.6.3]
Caused by: java.lang.NoClassDefFoundError: javax/servlet/Filter
at java.lang.Class.getDeclaredMethods0(Native Method) ~[na:1.8.0_251]
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) ~[na:1.8.0_251]
at java.lang.Class.getDeclaredMethods(Class.java:1975) ~[na:1.8.0_251]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:463) ~[spring-core-5.3.15.jar:5.3.15]
... 26 common frames omitted
Caused by: java.lang.ClassNotFoundException: javax.servlet.Filter
at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[na:1.8.0_251]
at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[na:1.8.0_251]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) ~[na:1.8.0_251]
at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[na:1.8.0_251]
... 30 common frames omitted
Process finished with exit code 0
Here is my pom.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.6.3</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<groupId>com.nomina-java</groupId>
<artifactId>nomina-java</artifactId>
<version>1.0</version>
<packaging>war</packaging>
<name>nomina-java</name>
<description>SemillaBackend para iniciar el proyecto</description>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.modelmapper/modelmapper -->
<dependency>
<groupId>org.modelmapper</groupId>
<artifactId>modelmapper</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>org.javassist</groupId>
<artifactId>javassist</artifactId>
<version>3.28.0-GA</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-core-asl</artifactId>
<version>1.9.2</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-asl</artifactId>
<version>1.9.2</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jdbc</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>com.oracle.database.jdbc</groupId>
<artifactId>ojdbc8</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt</artifactId>
<version>0.9.1</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
Please help.
A:
maybe you are missing this dependency:
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>servlet-api</artifactId>
<version>2.5</version>
<scope>provided</scope>
</dependency>
| ERROR 5292 --- [  restartedMain] o.s.boot.SpringApplication               : Application run failed | I have been getting this error for a long time and I don't understand what to do. I tried to fix it by adding dependencies, but it's not working and gives the same error again and again. Please help me. Here is my log.
18:36:52.182 [Thread-0] DEBUG
org.springframework.boot.devtools.restart.classloader.RestartClassLoader - Created
RestartClassLoader
org.springframework.boot.devtools.restart.classloader.RestartClassLoader@2e89701c
2022-03-14 18:36:52.931 INFO 5292 --- [ restartedMain] com.nominajava.NominaJavaApplication
: Starting NominaJavaApplication using Java 1.8.0_251 on DESKTOP-G2A4MMS with PID 5292
(C:\Users\AlejandroPC\Desktop\Infocent\Backend\Nomina V11 backend\semilla-nomina-java-
v11\nomina-java\target\classes started by AlejandroPC in C:\Users\AlejandroPC\Desktop\Infocent\Backend\Nomina V11 backend\semilla-nomina-java-v11)
2022-03-14 18:36:52.932 INFO 5292 --- [ restartedMain] com.nominajava.NominaJavaApplication : No active profile set, falling back to default profiles: default
2022-03-14 18:36:53.048 INFO 5292 --- [ restartedMain] o.s.b.devtools.restart.ChangeableUrls : The Class-Path manifest attribute in C:\Users\AlejandroPC\.m2\repository\com\oracle\database\jdbc\ojdbc8\21.3.0.0\ojdbc8-21.3.0.0.jar referenced one or more files that do not exist: file:/C:/Users/AlejandroPC/.m2/repository/com/oracle/database/jdbc/ojdbc8/21.3.0.0/oraclepki.jar
2022-03-14 18:36:53.048 INFO 5292 --- [ restartedMain] .e.DevToolsPropertyDefaultsPostProcessor : Devtools property defaults active! Set 'spring.devtools.add-properties' to 'false' to disable
2022-03-14 18:36:54.497 ERROR 5292 --- [ restartedMain] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Error processing condition on org.springframework.boot.autoconfigure.context.PropertyPlaceholderAutoConfiguration.propertySourcesPlaceholderConfigurer
at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:60) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.context.annotation.ConditionEvaluator.shouldSkip(ConditionEvaluator.java:108) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForBeanMethod(ConfigurationClassBeanDefinitionReader.java:193) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForConfigurationClass(ConfigurationClassBeanDefinitionReader.java:153) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitions(ConfigurationClassBeanDefinitionReader.java:129) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:343) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:247) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanDefinitionRegistryPostProcessors(PostProcessorRegistrationDelegate.java:311) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:112) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:746) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:564) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:732) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:414) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:302) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1303) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1292) [spring-boot-2.6.3.jar:2.6.3]
at com.nominajava.NominaJavaApplication.main(NominaJavaApplication.java:17) [classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_251]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_251]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_251]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_251]
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) [spring-boot-devtools-2.6.3.jar:2.6.3]
Caused by: java.lang.IllegalStateException: Failed to introspect Class [org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration] from ClassLoader [sun.misc.Launcher$AppClassLoader@18b4aac2]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:481) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.util.ReflectionUtils.doWithMethods(ReflectionUtils.java:358) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.util.ReflectionUtils.getUniqueDeclaredMethods(ReflectionUtils.java:414) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.lambda$getTypeForFactoryMethod$2(AbstractAutowireCapableBeanFactory.java:765) ~[spring-beans-5.3.15.jar:5.3.15]
at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660) ~[na:1.8.0_251]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryMethod(AbstractAutowireCapableBeanFactory.java:764) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.determineTargetType(AbstractAutowireCapableBeanFactory.java:703) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.predictBeanType(AbstractAutowireCapableBeanFactory.java:674) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractBeanFactory.isFactoryBean(AbstractBeanFactory.java:1670) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doGetBeanNamesForType(DefaultListableBeanFactory.java:570) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:542) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.collectBeanNamesForType(OnBeanCondition.java:238) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getBeanNamesForType(OnBeanCondition.java:231) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getBeanNamesForType(OnBeanCondition.java:221) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getMatchingBeans(OnBeanCondition.java:169) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.boot.autoconfigure.condition.OnBeanCondition.getMatchOutcome(OnBeanCondition.java:144) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:47) ~[spring-boot-autoconfigure-2.6.3.jar:2.6.3]
... 21 common frames omitted
Caused by: java.lang.NoClassDefFoundError: javax/servlet/Filter
at java.lang.Class.getDeclaredMethods0(Native Method) ~[na:1.8.0_251]
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) ~[na:1.8.0_251]
at java.lang.Class.getDeclaredMethods(Class.java:1975) ~[na:1.8.0_251]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:463) ~[spring-core-5.3.15.jar:5.3.15]
... 37 common frames omitted
Caused by: java.lang.ClassNotFoundException: javax.servlet.Filter
at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[na:1.8.0_251]
at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[na:1.8.0_251]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) ~[na:1.8.0_251]
at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[na:1.8.0_251]
... 41 common frames omitted
2022-03-14 18:36:54.501 WARN 5292 --- [ restartedMain] o.s.boot.SpringApplication : Unable to close ApplicationContext
java.lang.IllegalStateException: Failed to introspect Class [org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration] from ClassLoader [sun.misc.Launcher$AppClassLoader@18b4aac2]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:481) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.util.ReflectionUtils.doWithMethods(ReflectionUtils.java:358) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.util.ReflectionUtils.getUniqueDeclaredMethods(ReflectionUtils.java:414) ~[spring-core-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.lambda$getTypeForFactoryMethod$2(AbstractAutowireCapableBeanFactory.java:765) ~[spring-beans-5.3.15.jar:5.3.15]
at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660) ~[na:1.8.0_251]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.getTypeForFactoryMethod(AbstractAutowireCapableBeanFactory.java:764) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.determineTargetType(AbstractAutowireCapableBeanFactory.java:703) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.predictBeanType(AbstractAutowireCapableBeanFactory.java:674) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.AbstractBeanFactory.isFactoryBean(AbstractBeanFactory.java:1670) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doGetBeanNamesForType(DefaultListableBeanFactory.java:570) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanNamesForType(DefaultListableBeanFactory.java:542) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeansOfType(DefaultListableBeanFactory.java:667) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeansOfType(DefaultListableBeanFactory.java:659) ~[spring-beans-5.3.15.jar:5.3.15]
at org.springframework.context.support.AbstractApplicationContext.getBeansOfType(AbstractApplicationContext.java:1300) ~[spring-context-5.3.15.jar:5.3.15]
at org.springframework.boot.SpringApplication.getExitCodeFromMappedException(SpringApplication.java:864) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.getExitCodeFromException(SpringApplication.java:852) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.handleExitCode(SpringApplication.java:839) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.handleRunFailure(SpringApplication.java:780) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1303) [spring-boot-2.6.3.jar:2.6.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1292) [spring-boot-2.6.3.jar:2.6.3]
at com.nominajava.NominaJavaApplication.main(NominaJavaApplication.java:17) [classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_251]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_251]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_251]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_251]
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) [spring-boot-devtools-2.6.3.jar:2.6.3]
Caused by: java.lang.NoClassDefFoundError: javax/servlet/Filter
at java.lang.Class.getDeclaredMethods0(Native Method) ~[na:1.8.0_251]
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) ~[na:1.8.0_251]
at java.lang.Class.getDeclaredMethods(Class.java:1975) ~[na:1.8.0_251]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:463) ~[spring-core-5.3.15.jar:5.3.15]
... 26 common frames omitted
Caused by: java.lang.ClassNotFoundException: javax.servlet.Filter
at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[na:1.8.0_251]
at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[na:1.8.0_251]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355) ~[na:1.8.0_251]
at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[na:1.8.0_251]
... 30 common frames omitted
Process finished with exit code 0
Here is my pom.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.6.3</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<groupId>com.nomina-java</groupId>
<artifactId>nomina-java</artifactId>
<version>1.0</version>
<packaging>war</packaging>
<name>nomina-java</name>
<description>SemillaBackend para iniciar el proyecto</description>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.modelmapper/modelmapper -->
<dependency>
<groupId>org.modelmapper</groupId>
<artifactId>modelmapper</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>org.javassist</groupId>
<artifactId>javassist</artifactId>
<version>3.28.0-GA</version>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-core-asl</artifactId>
<version>1.9.2</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-asl</artifactId>
<version>1.9.2</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jdbc</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-validation</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>com.oracle.database.jdbc</groupId>
<artifactId>ojdbc8</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>io.jsonwebtoken</groupId>
<artifactId>jjwt</artifactId>
<version>0.9.1</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
Please help.
| [
"maybe you are missing this dependency:\n<dependency>\n <groupId>javax.servlet</groupId>\n <artifactId>servlet-api</artifactId>\n <version>2.5</version>\n <scope>provided</scope>\n</dependency>\n\n"
] | [
0
] | [] | [] | [
"java",
"spring",
"spring_boot"
] | stackoverflow_0071475040_java_spring_spring_boot.txt |
Q:
Voice input navigation
I've been asked to add a voice input icon to a web page that takes a command through one's voice and executes it; for example, imagine somebody saying "landing page" and the site automatically navigating to the landing page.
I don't know how to go about it; I need your help.
A:
To create a voice input feature for a website that can execute commands based on spoken input, you will need to use a combination of web development technologies and speech recognition APIs. Here are the basic steps for creating such a feature:
1- Use HTML and CSS to create the user interface for your voice input feature. This will include elements such as a button to initiate voice input, a text field to display the recognized speech, and any other UI elements that you want to include.
2- Use JavaScript to handle the logic and functionality of your voice input feature. This will include code to initiate the voice input, handle the recognition of the spoken words, and execute the appropriate commands based on the recognized input.
3- Use a speech recognition API to handle the actual recognition of the spoken words. There are several APIs available that can provide this functionality, such as the Web Speech API provided by the browser, or third-party APIs such as Google Cloud Speech-to-Text. These APIs will provide the necessary functions and capabilities for recognizing the spoken words and returning the recognized text to your JavaScript code.
4- Test and debug your voice input feature to ensure that it works correctly and consistently. This will involve testing the recognition of different commands and words, as well as ensuring that the appropriate actions are taken based on the recognized input.
These are the basic steps for creating a voice input feature for a website, but there are many other details and considerations that you will need to take into account when building this feature. For more information and resources on creating voice input features for websites, I recommend checking out the official documentation and tutorials for the Web Speech API and other speech recognition APIs. These will provide more detailed information on the process of creating and integrating voice input features into your site.
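As a rough illustration of steps 2 and 3, here is a minimal browser-only sketch using the Web Speech API. It is not a full implementation: the button id ("voice-btn"), the recognized phrases, and the target URLs ("/index.html", "/about.html") are assumptions made up for the example, so adjust them to your own pages.
// Minimal sketch: listen for a spoken command and navigate to the matching page.
// "voice-btn", the phrases, and the URLs below are placeholders, not part of any real site.
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;

if (SpeechRecognition) {
    const recognition = new SpeechRecognition();
    recognition.lang = 'en-US';
    recognition.interimResults = false;

    // Start listening when the user clicks the voice input button.
    document.getElementById('voice-btn').addEventListener('click', () => recognition.start());

    recognition.addEventListener('result', (event) => {
        // Take the best transcript of what was said, lower-cased for simple matching.
        const command = event.results[0][0].transcript.toLowerCase();

        if (command.includes('landing page')) {
            window.location.href = '/index.html';
        } else if (command.includes('about')) {
            window.location.href = '/about.html';
        }
        // Add more phrase-to-page mappings here as needed.
    });
} else {
    console.warn('Speech recognition is not supported in this browser.');
}
Note that browser support varies (Chrome exposes the API as webkitSpeechRecognition), and the browser will ask for microphone permission before recognition starts.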
| Voice input navigation | I've been asked to add a voice input icon to a web page that takes a command through one's voice and executes it; for example, imagine somebody saying "landing page" and the site automatically navigating to the landing page.
I don't know how to go about it; I need your help.
| [
"To create a voice input feature for a website that can execute commands based on spoken input, you will need to use a combination of web development technologies and speech recognition APIs. Here are the basic steps for creating such a feature:\n1- Use HTML and CSS to create the user interface for your voice input feature. This will include elements such as a button to initiate voice input, a text field to display the recognized speech, and any other UI elements that you want to include.\n2- Use JavaScript to handle the logic and functionality of your voice input feature. This will include code to initiate the voice input, handle the recognition of the spoken words, and execute the appropriate commands based on the recognized input.\n3- Use a speech recognition API to handle the actual recognition of the spoken words. There are several APIs available that can provide this functionality, such as the Web Speech API provided by the browser, or third-party APIs such as Google Cloud Speech-to-Text. These APIs will provide the necessary functions and capabilities for recognizing the spoken words and returning the recognized text to your JavaScript code.\n4- Test and debug your voice input feature to ensure that it works correctly and consistently. This will involve testing the recognition of different commands and words, as well as ensuring that the appropriate actions are taken based on the recognized input.\nThese are the basic steps for creating a voice input feature for a website, but there are many other details and considerations that you will need to take into account when building this feature. For more information and resources on creating voice input features for websites, I recommend checking out the official documentation and tutorials for the Web Speech API and other speech recognition APIs. These will provide more detailed information on the process of creating and\n"
] | [
0
] | [] | [] | [
"artificial_intelligence",
"frontend",
"javascript"
] | stackoverflow_0074671040_artificial_intelligence_frontend_javascript.txt |
Q:
Git Credential Manager Won't Authenticate Me
I am running WSL2 and trying to get Git Credential Manager (GCM) set up so that I don't have to always copy-paste my Github Personal Access Token into my terminal. Once I added the credential manager, I was unable to access my remote repositories; this is what my .gitconfig looks like:
[user]
    email = [email protected]
    name = Name
[credential]
    helper = /mnt/c/Program\\ Files/Git/mingw64/libexec/git-core/git-credential-wincred.exe
Now when I do a git pull on the remote repository Git is telling me that it cannot be found. It's not clear to me why GCM is blocking me now, but would you have any recommendations for next steps?
A:
git-credential-wincred.exe is the old legacy credential helper.
GCM is git-credential-manager-core.exe
helper = manager-core.exe
(it will be manager.exe with Git 2.39+)
Make sure your $PATH includes /mnt/c/Program Files/Git/mingw64/bin/
Then this would work (under a WSL2 bash session):
printf "host=github.com\nprotocol=https" | git-credential-manager-core.exe get
# or
printf "host=github.com\nprotocol=https" | git credential-manager-core.exe get
^^^
However, this would not:
printf "host=github.com\nprotocol=https" | git credential-manager-core get
^^^
fatal: 'credential-manager-core' appears to be a git command,
but we were not able to execute it.
Maybe git-credential-manager-core is broken?
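Putting the above together, a minimal sketch of the [credential] section in the WSL2 ~/.gitconfig could look like the following; the exact path is an assumption based on a default Git for Windows install, so point it at wherever git-credential-manager-core.exe actually lives on your machine.
[credential]
    # Path below is an assumption (default Git for Windows layout); verify it on your system.
    # The double backslash escapes the space in "Program Files".
    helper = "/mnt/c/Program\\ Files/Git/mingw64/bin/git-credential-manager-core.exe"
With that in place, the printf ... get test above is a quick way to check that the helper can actually be reached from WSL2.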
A:
If you are using Git Credential Manager (GCM) to manage your Git credentials, it is likely that you are encountering an issue with the configuration of your Git settings. There are a few possible reasons why GCM might be preventing you from accessing your remote repositories. Here are a few possible solutions to try:
Make sure that you have installed GCM correctly and that it is running on your system. To do this, you can try running the git credential-manager version command in your terminal. If GCM is installed and running, you should see the version number displayed in the output.
Check the configuration of your Git settings to ensure that the path to GCM is correct. In your .gitconfig file, the helper value should be set to the path of the git-credential-manager.exe file on your system. For example, if GCM is installed in the C:\Program Files\Git\mingw64\libexec\git-core directory, the helper value should be set to /mnt/c/Program\ Files/Git/mingw64/libexec/git-core/git-credential-manager.exe.
If you are still unable to access your remote repositories, you may need to clear your Git credentials. To do this, you can try running the git credential-manager reject command in your terminal. This will remove any saved credentials from your system, allowing you to enter new ones the next time you access a remote repository.
I hope these suggestions help! If you are still having trouble getting GCM to work, it might be worth reaching out to the Git community for further support.
| Git Credential Manager Won't Authenticate Me | I am running WSL2 and trying to get Git Credential Manager (GCM) set up so that I don't have to always copy-paste my Github Personal Access Token into my terminal. Once I added the credential manager, I was unable to access my remote repositories; this is what my .gitconfig looks like:
[user]
    email = [email protected]
    name = Name
[credential]
    helper = /mnt/c/Program\\ Files/Git/mingw64/libexec/git-core/git-credential-wincred.exe
Now when I do a git pull on the remote repository Git is telling me that it cannot be found. It's not clear to me why GCM is blocking me now, but would you have any recommendations for next steps?
| [
"git-credential-wincred.exe is the old legacy credential helper.\nGCM is git-credential-manager-core.exe\nhelper = manager-core.exe\n\n(it will be manager.exe with Git 2.39+)\nMake sure your $PATH includes /mnt/c/Program Files/Git/mingw64/bin/\nThen this would work (under a WSL2 bash session):\nprintf \"host=github.com\\nprotocol=https\" | git-credential-manager-core.exe get\n# or\nprintf \"host=github.com\\nprotocol=https\" | git credential-manager-core.exe get\n ^^^\n\nHowever, this would not:\nprintf \"host=github.com\\nprotocol=https\" | git credential-manager-core get\n ^^^\nfatal: 'credential-manager-core' appears to be a git command, \n but we were not able to execute it. \n Maybe git-credential-manager-core is broken?\n\n",
"If you are using Git Credential Manager (GCM) to manage your Git credentials, it is likely that you are encountering an issue with the configuration of your Git settings. There are a few possible reasons why GCM might be preventing you from accessing your remote repositories. Here are a few possible solutions to try:\nMake sure that you have installed GCM correctly and that it is running on your system. To do this, you can try running the git credential-manager version command in your terminal. If GCM is installed and running, you should see the version number displayed in the output.\nCheck the configuration of your Git settings to ensure that the path to GCM is correct. In your .gitconfig file, the helper value should be set to the path of the git-credential-manager.exe file on your system. For example, if GCM is installed in the C:\\Program Files\\Git\\mingw64\\libexec\\git-core directory, the helper value should be set to /mnt/c/Program\\ Files/Git/mingw64/libexec/git-core/git-credential-manager.exe.\nIf you are still unable to access your remote repositories, you may need to clear your Git credentials. To do this, you can try running the git credential-manager reject command in your terminal. This will remove any saved credentials from your system, allowing you to enter new ones the next time you access a remote repository.\nI hope these suggestions help! If you are still having trouble getting GCM to work, it might be worth reaching out to the Git community for further support.\n"
] | [
0,
0
] | [] | [] | [
"git",
"wsl_2"
] | stackoverflow_0074668578_git_wsl_2.txt |
Q:
How to delete the element which matches filter within the list in object of JSON
Sorry for my French and the stupid question, but I need to DELETE some elements in a JSON file, while I don't know anything about JSON and JS.
Here is the structure:
`
{ "questions": [
{
"id": 1,
"quizId": 1,
"question": "Какому автомобилю разрешается остановка в зоне действия этих знаков?",
"correctAnswer": 4,
"image": "1.jpg",
"answers": [
"Красному.",
"Обоим автомобилям.",
"Ни одному.",
"Ни одному.",
"Желтому, обозначенному опознавательным знаком \"Инвалид\".",
"-"
]
},
{
"id": 2,
"quizId": 1,
"question": "По каким направлениям из числа обозначенных стрелками разрешается движение?",
"correctAnswer": 4,
"image": "2.jpg",
"answers": [
"Только по направлению А.",
"Только по направлению Б.",
"Только по направлению В.",
"-",
"-",
"-."
]
}
]
}
And I need to find the elements within the answers array which equal "-" and delete them.
So output should be:
{ "questions": [
{
"id": 1,
"quizId": 1,
"question": "Какому автомобилю разрешается остановка в зоне действия этих знаков?",
"correctAnswer": 4,
"image": "1.jpg",
"answers": [
"Красному.",
"Обоим автомобилям.",
"Ни одному.",
"Ни одному.",
"Желтому, обозначенному опознавательным знаком \"Инвалид\".",
]
},
{
"id": 2,
"quizId": 1,
"question": "По каким направлениям из числа обозначенных стрелками разрешается движение?",
"correctAnswer": 4,
"image": "2.jpg",
"answers": [
"Только по направлению А.",
"Только по направлению Б.",
"Только по направлению В."
]
}
]
}
I tried to solve the problem, but unsuccessfully.
Thanks for your answers!
A:
To accomplish this task, you could use some server-side scripting; I recommend NodeJS, as it is directly related to JS.
You can see the documentation here: https://nodejs.dev/en/
Within the NodeJS project, you will have to use the File System module
https://nodejs.org/api/fs.html
Having the above configured, the code would be as follows:
import fs from "fs"
const file_path = "YOUR_JSON_FILE_PATH"
let json_file_content = JSON.parse(fs.readFileSync(file_path).toString());
for (let i in json_file_content.questions) {
json_file_content.questions[i].answers =
json_file_content.questions[i].answers.
filter(answer => (answer != "-" && answer != "-."))
}
fs.writeFileSync(file_path, JSON.stringify(json_file_content));
The above code can be described as follows:
The FileSystem module is imported
A constant is created which contains the path where your .json file is located
We create a variable “json_file_content” which will contain the content of your .json file
To get this content, do the following:
Get the content of your file using readFileSync
This content is converted to string type using toString
Subsequently, it is converted to JSON using JSON.parse
Having the json in a variable, simple JS code can now be applied, Inside that code what is happening is this:
All questions contained in the object are iterated
For each question, their answers are accessed
The answers are filtered, eliminating those that are equal to "-" and "-."
Finally:
The filtered json is converted to a string
The new string (JSON) is written into your file using writeFileSync
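If you just want to see the filtering step on its own (for example, pasted into the browser console), a stripped-down sketch looks like this; data here is a stand-in for the already-parsed object from the question, not a real variable in your project.
// 'data' stands for the parsed JSON object, e.g. const data = JSON.parse(jsonString);
for (const question of data.questions) {
    // Keep only the answers that are not "-" or "-."
    question.answers = question.answers.filter(answer => answer !== "-" && answer !== "-.");
}
console.log(JSON.stringify(data, null, 2)); // inspect the cleaned result
The same filter call is what does the actual work in the Node script above; everything else is just reading the file and writing it back.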
| How to delete the element which matches filter within the list in object of JSON | Sorry for my French and the stupid question, but I need to DELETE some elements in a JSON file, while I don't know anything about JSON and JS.
Here is the structure:
`
{ "questions": [
{
"id": 1,
"quizId": 1,
"question": "Какому автомобилю разрешается остановка в зоне действия этих знаков?",
"correctAnswer": 4,
"image": "1.jpg",
"answers": [
"Красному.",
"Обоим автомобилям.",
"Ни одному.",
"Ни одному.",
"Желтому, обозначенному опознавательным знаком \"Инвалид\".",
"-"
]
},
{
"id": 2,
"quizId": 1,
"question": "По каким направлениям из числа обозначенных стрелками разрешается движение?",
"correctAnswer": 4,
"image": "2.jpg",
"answers": [
"Только по направлению А.",
"Только по направлению Б.",
"Только по направлению В.",
"-",
"-",
"-."
]
}
]
}
And I need to find the elements within the answers array which equal "-" and delete them.
So output should be:
{ "questions": [
{
"id": 1,
"quizId": 1,
"question": "Какому автомобилю разрешается остановка в зоне действия этих знаков?",
"correctAnswer": 4,
"image": "1.jpg",
"answers": [
"Красному.",
"Обоим автомобилям.",
"Ни одному.",
"Ни одному.",
"Желтому, обозначенному опознавательным знаком \"Инвалид\".",
]
},
{
"id": 2,
"quizId": 1,
"question": "По каким направлениям из числа обозначенных стрелками разрешается движение?",
"correctAnswer": 4,
"image": "2.jpg",
"answers": [
"Только по направлению А.",
"Только по направлению Б.",
"Только по направлению В."
]
}
]
}
I tried to solve the problem, but unsuccessfully.
Thanks for your answers!
| [
"To accomplish this task, you could use some server-side scripting, I recommend NodeJS as it is directly related to JS.\nYou can see the documentation here: https://nodejs.dev/en/\nWithin the NodeJS project, you will have to use the File System module\nhttps://nodejs.org/api/fs.html\nHaving the above configured, the code would be as follows:\nimport fs from \"fs\"\nconst file_path = “YOUR_JSON_FILE_PATH”\n\nlet json_file_content = JSON.parse(fs.readFileSync(file_path).toString());\n\n for (let i in json_file_content.questions) {\n json_file_content.questions[i].answers = \n json_file_content.questions[i].answers.\n filter(answer => (answer != \"-\" && answer != \"-.\"))\n }\n \nfs.writeFileSync(file_path, JSON.stringify(json_file_content));\n\nThe above code can be described as follows:\n\nThe FileSystem module is imported\nA constant is created which contains the path where your .json file is located\nWe create a variable “json_file_content” which will contain the content of your .json file\n\nTo get this content, do the following:\n\nGet the content of your file using readFileSync\nThis content is converted to string type using toString\nSubsequently, it is converted to JSON using JSON.parse\n\nHaving the json in a variable, simple JS code can now be applied, Inside that code what is happening is this:\n\nAll questions contained in the object are iterated\nFor each question, their answers are accessed\nThe answers are filtered, eliminating those that are equal to \"-\" and \"-.\"\n\nFinally:\n\nThe filtered json is converted to a string\nThe new string (JSON) is written into your file using writeFileSync\n\n"
] | [
1
] | [] | [] | [
"arrays",
"filter",
"javascript",
"json"
] | stackoverflow_0074670879_arrays_filter_javascript_json.txt |
Q:
How to put the product card next to each other instead of being put on a new line?
I'm creating a fake website for a school project. So far, I'm creating a products page where I'll list out the products I'm selling. I've copied a template from w3schools, which has worked so far; however, I want to create multiple copies of this same card placed next to each other. What happens when I paste the same template is that the card is put on a different line.
I've tried using float:left and copying and pasting the template again; however, the template is placed beneath the first one on a separate line. How would I place them next to each other? I'm not sure where to start. I know that I can create multiple elements next to each other, but I don't know how I would apply it to this scenario. Any help would be appreciated.
Code I Copied
<!DOCTYPE html>
<html>
<head>
<style>
.card {
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2);
max-width: 300px;
margin: auto;
text-align: center;
font-family: arial;
}
.price {
color: grey;
font-size: 22px;
}
.card button {
border: none;
outline: 0;
padding: 12px;
color: white;
background-color: #000;
text-align: center;
cursor: pointer;
width: 100%;
font-size: 18px;
}
.card button:hover {
opacity: 0.7;
}
</style>
</head>
<body>
<h2 style="text-align:center">Product Card</h2>
<div class="card">
<img src="/w3images/jeans3.jpg" alt="Denim Jeans" style="width:100%">
<h1>Tailored Jeans</h1>
<p class="price">$19.99</p>
<p>Some text about the jeans. Super slim and comfy lorem ipsum lorem jeansum. Lorem jeamsun denim lorem jeansum.</p>
<p><button>Add to Cart</button></p>
</div>
</body>
</html>
My Code
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Pet Store</title>
<style>
/* http://meyerweb.com/eric/tools/css/reset/
v2.0 | 20110126
License: none (public domain)
*/
html,
body,
div,
span,
applet,
object,
iframe,
h1,
h2,
h3,
h4,
h5,
h6,
p,
blockquote,
pre,
a,
abbr,
acronym,
address,
big,
cite,
code,
del,
dfn,
em,
img,
ins,
kbd,
q,
s,
samp,
small,
strike,
strong,
sub,
sup,
tt,
var,
b,
u,
i,
center,
dl,
dt,
dd,
ol,
ul,
li,
fieldset,
form,
label,
legend,
table,
caption,
tbody,
tfoot,
thead,
tr,
th,
td,
article,
aside,
canvas,
details,
embed,
figure,
figcaption,
footer,
header,
hgroup,
menu,
nav,
output,
ruby,
section,
summary,
time,
mark,
audio,
video {
margin: 0;
padding: 0;
border: 0;
font-size: 100%;
font: inherit;
vertical-align: baseline;
}
/* HTML5 display-role reset for older browsers */
article,
aside,
details,
figcaption,
figure,
footer,
header,
hgroup,
menu,
nav,
section {
display: block;
}
body {
line-height: 1;
}
ol,
ul {
list-style: none;
}
blockquote,
q {
quotes: none;
}
blockquote:before,
blockquote:after,
q:before,
q:after {
content: '';
content: none;
}
.button {
background-color: #9B3240;
border: none;
color: white;
padding: 15px 32px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 20px;
font-family: Segoe, "Segoe UI", "DejaVu Sans", "Trebuchet MS", Verdana, sans-serif;
}
.button:hover {
background-color: #D3B6C4;
}
.header {
background-color: #DB912B;
}
p {
font-family: Segoe, "Segoe UI", "DejaVu Sans", "Trebuchet MS", Verdana, sans-serif;
}
.blackFriday {
line-height: 40px;
height: 40px;
position: relative;
overflow: hidden;
  background-color: antiquewhite;
  z-index: 1;
font-family: Segoe, "Segoe UI", "DejaVu Sans", "Trebuchet MS", Verdana, sans-serif;
}
.fadein {
position: relative;
margin: 0;
width: 100%;
height: 50vh;
}
/*Animated Banner*/
.fadein img {
position: absolute;
inset: 0;
width: 100%;
height: 100%;
object-fit: cover;
-webkit-animation-name: fade;
-webkit-animation-iteration-count: infinite;
-webkit-animation-duration: 12s;
animation-name: fade;
animation-iteration-count: infinite;
animation-duration: 12s;
}
@-webkit-keyframes fade {
0% {
opacity: 0;
}
20% {
opacity: 1;
}
33% {
opacity: 1;
}
53% {
opacity: 0;
}
100% {
opacity: 0;
}
}
@keyframes fade {
0% {
opacity: 0;
}
20% {
opacity: 1;
}
33% {
opacity: 1;
}
53% {
opacity: 0;
}
100% {
opacity: 0;
}
}
#f1 {
background-color: lightblue;
}
#f2 {
-webkit-animation-delay: -8s;
background-color: yellow;
}
#f3 {
-webkit-animation-delay: -4s;
background-color: lightgreen;
}
/*Shop Bar*/
.shopBar {
background-color: #9B3240;
height: 75px;
width: 100%;
margin-right: auto;
margin-left: auto;
}
.Logo {
float:left;
}
.storeName {
float: left;
display: flex;
align-items: center;
height: 75px;
width: 25%;
color: white;
font-weight: bold;
}
.petstorename {
flex: 0 0 120px;
font-size: 40px;
padding-left:10px;
}
.holidays {
font-family: Segoe, "Segoe UI", "DejaVu Sans", "Trebuchet MS", Verdana, sans-serif;
font-variant: normal;
font-weight: bolder;
font-size: 81px;
/* [disabled]line-height: 64px; */
text-align: center;
padding:20px;
}
.rowspacing {
padding:15px;
}
.circlespace img {
padding: 5px;
position: 10px;
}
.specialdogdeals {
font-weight: bolder;
font-size: 33px;
margin-bottom: 100px;
}
.footer {
background-color: #D5C3C3;
  height: 75px;
width: 100%;
margin-top:100px;
}
.footer img {
padding:4px;
display:inline;
}
.buy {
height: 500px;
width: 100%;
}
.buy2 {
background-color:black;
float: left;
height: 500px;
width: 500px;
margin-left: 100px;
}
.buy3 {
float: left;
height: 500px;
width: 33%;
}
.buy4 {
float: right;
height: 500px;
width: 33%;
}
.buy5 {
background-color: yellow;
float:left;
height:500px;
}
/*Products*/
.card {
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2);
max-width: 300px;
margin: auto;
text-align: center;
font-family: arial;
}
.price {
color: grey;
font-size: 22px;
padding:20px;
}
.card button {
border: none;
outline: 0;
padding: 12px;
color: white;
background-color: #000;
text-align: center;
cursor: pointer;
width: 100%;
font-size: 18px;
}
.card button:hover {
opacity: 0.7;
}
.productName {
font-size:68px;
font-weight:lighter;
padding:10px;
}
.productDes {
padding:10px;
}
.newPrice {
font-size: 32px;
color:#272727;
font-weight:bold;
padding:10px;
}
</style>
</head>
<body>
<center>
<a href="blackfriday.html"><p class="blackFriday" style="background-color:#191616; color: white; font-weight:bold;">BLACK FRIDAY SALE!</p></a>
</center>
<div class="fadein">
<img id="f3" src="banner2.jpg" alt="">
<img id="f2" src="banner4.png" alt="">
<img id="f1" src="banner5.jpg" alt="">
</div>
<!--Shop Bar-->
<!--*<div class="shopBar">
<div class="Logo"><img src="Logo.png" height="75px;" alt=""/></div>
<div class="storeName"><p class="petstorename">Pet Store</p></div>
</div>-->
<!--Buttons-->
<center>
<p style="background-color:#9B3240; position:relative;"> <a href="index.html"><img class="Logo" src="Logo.png" height="55" alt=""/></a>
<a href="food.html"><button class="button">food</button></a>
<a href="toys.html"><button class="button">toys</button></a>
<a href="pharmacy.html"><button class="button">pharmacy</button></a>
<a href="holiday.html"><button class="button">holiday sale</button></a>
<a href="about.html"><button class="button">about us</button></a>
</p>
</center>
<!--Images-->
<!--Products-->
<p class="holidays">BLACK FRIDAY</p>
<center><p class="specialdogdeals">These Deals Won't Last Long!</p></center>
<div class="card">
<img src="collar.png" alt="Denim Jeans" style="width:100%">
<p class="productName">Leather Collar</p>
<p class="price">$<strike>9.99</strike>
<p class="newPrice">2.99</p></p>
<p class="productDes">A comfty collar made with natural materials.</p>
<p><button>Add to Cart</button></p>
<div class="card">
<img src="collar.png" alt="Denim Jeans" style="width:100%">
<p class="productName">Leather Collar</p>
<p class="price">$<strike>9.99</strike>
<p class="newPrice">2.99</p></p>
<p class="productDes">A comfty collar made with natural materials.</p>
<p><button>Add to Cart</button></p>
</div>
</div>
<!--Footer-->
<div class="footer">
<center>
<p style="font-weight: bold; line-height:75px;">
<a href="https://www.instagram.com/"><img src="instagram.png" width="25" height="25" alt=""/></a>
<a href="https://www.twitter.com/"><img src="twitter.png" width="25" height="25" alt=""/></a>
<a href="https://www.facebook.com"><img src="facebook.png" width="25" height="25" alt=""/></a>
<img src="support-dog-icon.png" width="35" style="margin-left:50px; margin-right:10px;" alt=""/>Contact (578) 239-8980
</p>
</center>
</div>
</body>
</html>
Thanks.
A:
You will need to wrap your cards into another div, which will have the propertie display: flex;
in your html:
<div class="wrapper">
<div class="card">
<img src="collar.png" alt="Denim Jeans" style="width:100%">
<p class="productName">Leather Collar</p>
<p class="price">$<strike>9.99</strike>
<p class="newPrice">2.99</p></p>
<p class="productDes">A comfty collar made with natural materials.</p>
<p><button>Add to Cart</button></p>
</div>
<div class="card">
<img src="collar.png" alt="Denim Jeans" style="width:100%">
<p class="productName">Leather Collar</p>
<p class="price">$<strike>9.99</strike>
<p class="newPrice">2.99</p></p>
<p class="productDes">A comfty collar made with natural materials.</p>
<p><button>Add to Cart</button></p>
</div>
</div>
in your css:
 .wrapper{
    /* width: 100%; */
    display:flex;
    flex-direction:row;
    justify-content:center; /* there are several options to position */
    align-items:center;
    flex-wrap: wrap; /* if the display gets too small, the cards will automatically position on a different line */

}
.card {
  box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2);
  max-width: 300px;
  /* margin: auto; */ /* no more need because parent controls position */
  text-align: center;
  font-family: arial;
}
| How to put the product card next to each other instead of being put on a new line? | I'm creating a fake website for a school project. So far, I'm creating a products page where I'll list out the products I'm selling. I've copied a template from w3schools, which has worked so far; however, I want to create multiple copies of this same card placed next to each other. What happens when I paste the same template is that the card is put on a different line.
I've tried using float:left and copying and pasting the template again; however, the template is placed beneath the first one on a separate line. How would I place them next to each other? I'm not sure where to start. I know that I can create multiple elements next to each other, but I don't know how I would apply it to this scenario. Any help would be appreciated.
Code I Copied
<!DOCTYPE html>
<html>
<head>
<style>
.card {
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2);
max-width: 300px;
margin: auto;
text-align: center;
font-family: arial;
}
.price {
color: grey;
font-size: 22px;
}
.card button {
border: none;
outline: 0;
padding: 12px;
color: white;
background-color: #000;
text-align: center;
cursor: pointer;
width: 100%;
font-size: 18px;
}
.card button:hover {
opacity: 0.7;
}
</style>
</head>
<body>
<h2 style="text-align:center">Product Card</h2>
<div class="card">
<img src="/w3images/jeans3.jpg" alt="Denim Jeans" style="width:100%">
<h1>Tailored Jeans</h1>
<p class="price">$19.99</p>
<p>Some text about the jeans. Super slim and comfy lorem ipsum lorem jeansum. Lorem jeamsun denim lorem jeansum.</p>
<p><button>Add to Cart</button></p>
</div>
</body>
</html>
My Code
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Pet Store</title>
<style>
/* http://meyerweb.com/eric/tools/css/reset/
v2.0 | 20110126
License: none (public domain)
*/
html,
body,
div,
span,
applet,
object,
iframe,
h1,
h2,
h3,
h4,
h5,
h6,
p,
blockquote,
pre,
a,
abbr,
acronym,
address,
big,
cite,
code,
del,
dfn,
em,
img,
ins,
kbd,
q,
s,
samp,
small,
strike,
strong,
sub,
sup,
tt,
var,
b,
u,
i,
center,
dl,
dt,
dd,
ol,
ul,
li,
fieldset,
form,
label,
legend,
table,
caption,
tbody,
tfoot,
thead,
tr,
th,
td,
article,
aside,
canvas,
details,
embed,
figure,
figcaption,
footer,
header,
hgroup,
menu,
nav,
output,
ruby,
section,
summary,
time,
mark,
audio,
video {
margin: 0;
padding: 0;
border: 0;
font-size: 100%;
font: inherit;
vertical-align: baseline;
}
/* HTML5 display-role reset for older browsers */
article,
aside,
details,
figcaption,
figure,
footer,
header,
hgroup,
menu,
nav,
section {
display: block;
}
body {
line-height: 1;
}
ol,
ul {
list-style: none;
}
blockquote,
q {
quotes: none;
}
blockquote:before,
blockquote:after,
q:before,
q:after {
content: '';
content: none;
}
.button {
background-color: #9B3240;
border: none;
color: white;
padding: 15px 32px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 20px;
font-family: Segoe, "Segoe UI", "DejaVu Sans", "Trebuchet MS", Verdana, sans-serif;
}
.button:hover {
background-color: #D3B6C4;
}
.header {
background-color: #DB912B;
}
p {
font-family: Segoe, "Segoe UI", "DejaVu Sans", "Trebuchet MS", Verdana, sans-serif;
}
.blackFriday {
line-height: 40px;
height: 40px;
position: relative;
overflow: hidden;
  background-color: antiquewhite;
  z-index: 1;
font-family: Segoe, "Segoe UI", "DejaVu Sans", "Trebuchet MS", Verdana, sans-serif;
}
.fadein {
position: relative;
margin: 0;
width: 100%;
height: 50vh;
}
/*Animated Banner*/
.fadein img {
position: absolute;
inset: 0;
width: 100%;
height: 100%;
object-fit: cover;
-webkit-animation-name: fade;
-webkit-animation-iteration-count: infinite;
-webkit-animation-duration: 12s;
animation-name: fade;
animation-iteration-count: infinite;
animation-duration: 12s;
}
@-webkit-keyframes fade {
0% {
opacity: 0;
}
20% {
opacity: 1;
}
33% {
opacity: 1;
}
53% {
opacity: 0;
}
100% {
opacity: 0;
}
}
@keyframes fade {
0% {
opacity: 0;
}
20% {
opacity: 1;
}
33% {
opacity: 1;
}
53% {
opacity: 0;
}
100% {
opacity: 0;
}
}
#f1 {
background-color: lightblue;
}
#f2 {
-webkit-animation-delay: -8s;
background-color: yellow;
}
#f3 {
-webkit-animation-delay: -4s;
background-color: lightgreen;
}
/*Shop Bar*/
.shopBar {
background-color: #9B3240;
height: 75px;
width: 100%;
margin-right: auto;
margin-left: auto;
}
.Logo {
float:left;
}
.storeName {
float: left;
display: flex;
align-items: center;
height: 75px;
width: 25%;
color: white;
font-weight: bold;
}
.petstorename {
flex: 0 0 120px;
font-size: 40px;
padding-left:10px;
}
.holidays {
font-family: Segoe, "Segoe UI", "DejaVu Sans", "Trebuchet MS", Verdana, sans-serif;
font-variant: normal;
font-weight: bolder;
font-size: 81px;
/* [disabled]line-height: 64px; */
text-align: center;
padding:20px;
}
.rowspacing {
padding:15px;
}
.circlespace img {
padding: 5px;
position: 10px;
}
.specialdogdeals {
font-weight: bolder;
font-size: 33px;
margin-bottom: 100px;
}
.footer {
background-color: #D5C3C3;
  height: 75px;
width: 100%;
margin-top:100px;
}
.footer img {
padding:4px;
display:inline;
}
.buy {
height: 500px;
width: 100%;
}
.buy2 {
background-color:black;
float: left;
height: 500px;
width: 500px;
margin-left: 100px;
}
.buy3 {
float: left;
height: 500px;
width: 33%;
}
.buy4 {
float: right;
height: 500px;
width: 33%;
}
.buy5 {
background-color: yellow;
float:left;
height:500px;
}
/*Products*/
.card {
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2);
max-width: 300px;
margin: auto;
text-align: center;
font-family: arial;
}
.price {
color: grey;
font-size: 22px;
padding:20px;
}
.card button {
border: none;
outline: 0;
padding: 12px;
color: white;
background-color: #000;
text-align: center;
cursor: pointer;
width: 100%;
font-size: 18px;
}
.card button:hover {
opacity: 0.7;
}
.productName {
font-size:68px;
font-weight:lighter;
padding:10px;
}
.productDes {
padding:10px;
}
.newPrice {
font-size: 32px;
color:#272727;
font-weight:bold;
padding:10px;
}
</style>
</head>
<body>
<center>
<a href="blackfriday.html"><p class="blackFriday" style="background-color:#191616; color: white; font-weight:bold;">BLACK FRIDAY SALE!</p></a>
</center>
<div class="fadein">
<img id="f3" src="banner2.jpg" alt="">
<img id="f2" src="banner4.png" alt="">
<img id="f1" src="banner5.jpg" alt="">
</div>
<!--Shop Bar-->
<!--*<div class="shopBar">
<div class="Logo"><img src="Logo.png" height="75px;" alt=""/></div>
<div class="storeName"><p class="petstorename">Pet Store</p></div>
</div>-->
<!--Buttons-->
<center>
<p style="background-color:#9B3240; position:relative;"> <a href="index.html"><img class="Logo" src="Logo.png" height="55" alt=""/></a>
<a href="food.html"><button class="button">food</button></a>
<a href="toys.html"><button class="button">toys</button></a>
<a href="pharmacy.html"><button class="button">pharmacy</button></a>
<a href="holiday.html"><button class="button">holiday sale</button></a>
<a href="about.html"><button class="button">about us</button></a>
</p>
</center>
<!--Images-->
<!--Products-->
<p class="holidays">BLACK FRIDAY</p>
<center><p class="specialdogdeals">These Deals Won't Last Long!</p></center>
<div class="card">
<img src="collar.png" alt="Denim Jeans" style="width:100%">
<p class="productName">Leather Collar</p>
<p class="price">$<strike>9.99</strike>
<p class="newPrice">2.99</p></p>
<p class="productDes">A comfty collar made with natural materials.</p>
<p><button>Add to Cart</button></p>
<div class="card">
<img src="collar.png" alt="Denim Jeans" style="width:100%">
<p class="productName">Leather Collar</p>
<p class="price">$<strike>9.99</strike>
<p class="newPrice">2.99</p></p>
<p class="productDes">A comfty collar made with natural materials.</p>
<p><button>Add to Cart</button></p>
</div>
</div>
<!--Footer-->
<div class="footer">
<center>
<p style="font-weight: bold; line-height:75px;">
<a href="https://www.instagram.com/"><img src="instagram.png" width="25" height="25" alt=""/></a>
<a href="https://www.twitter.com/"><img src="twitter.png" width="25" height="25" alt=""/></a>
<a href="https://www.facebook.com"><img src="facebook.png" width="25" height="25" alt=""/></a>
<img src="support-dog-icon.png" width="35" style="margin-left:50px; margin-right:10px;" alt=""/>Contact (578) 239-8980
</p>
</center>
</div>
</body>
Thanks.
| [
"You will need to wrap your cards into another div, which will have the propertie display: flex;\nin your html:\n<div class=\"wrapper\">\n <div class=\"card\">\n <img src=\"collar.png\" alt=\"Denim Jeans\" style=\"width:100%\">\n <p class=\"productName\">Leather Collar</p>\n <p class=\"price\">$<strike>9.99</strike> \n <p class=\"newPrice\">2.99</p></p>\n <p class=\"productDes\">A comfty collar made with natural materials.</p>\n <p><button>Add to Cart</button></p>\n </div>\n \n \n <div class=\"card\">\n <img src=\"collar.png\" alt=\"Denim Jeans\" style=\"width:100%\">\n <p class=\"productName\">Leather Collar</p>\n <p class=\"price\">$<strike>9.99</strike> \n <p class=\"newPrice\">2.99</p></p>\n <p class=\"productDes\">A comfty collar made with natural materials.</p>\n <p><button>Add to Cart</button></p>\n </div>\n</div>\n\nin your css:\n .wrapper{\n /* width: 100%; */\n display:flex;\n flex-direction:row;\n justify-content:center; //there are several options to position\n align-items:center;\n flex-wrap: wrap; //if the display gets to small, the cards will automaticly position on a different line\n\n}\n.card {\n box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2);\n max-width: 300px;\n /* margin: auto; */ //no more need because parent controlls position.\n text-align: center;\n font-family: arial;\n}\n\n"
] | [
0
] | [] | [] | [
"css",
"html"
] | stackoverflow_0074671075_css_html.txt |
Q:
How to import and assert in one line?
How to write this in 1 line or at least avoid an intermediate variable/import name?
import root from '@/pages/_route';
const route: RouteDef = root;
I don't know whether it matters, but the import is an object with all required fields for the object type Route.
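For illustration, a minimal sketch of one possible workaround (assuming TypeScript 4.9+ for satisfies, and that RouteDef lives in some importable types module — both assumptions, not taken from the question): the satisfies operator can check the imported object against RouteDef without annotating a second variable.
import type { RouteDef } from './types'; // assumed location of RouteDef
import route from '@/pages/_route';
// Compile-time check only; no runtime effect and no extra variable name (TS 4.9+).
route satisfies RouteDef;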
| How to import and assert in one line? | How to write this in 1 line or at least avoid an intermediate variable/import name?
import root from '@/pages/_route';
const route: RouteDef = root;
I don't know whether it matters, but the import is an object with all required fields for the object type Route.
| [] | [] | [
"const route: RouteDef = import('@/pages/_route');\n\nHowever, this approach is not recommended because it makes the code harder to read and understand. It is better to use a separate import statement and then assign the imported value to a variable with a descriptive name.\n"
] | [
-1
] | [
"typescript"
] | stackoverflow_0074671211_typescript.txt |
Q:
Time complexity of dependent nested loops
I was trying to find the time complexity of this nested loop
for (i = 1; i <= n; i++) {
for (j = 1; j <= n; j++) {
n--;
x++;
}
}
If there wasn't a n-- it would be n*n , O(n2) right?
But what if n reduces every time second loop runs?
What's the time complexity and big O of this nested loop?
If I consider n = 5, x equals 4, so the second loop runs 4 times in total
A:
The time complexity of the code is O(n). n is reduced by half for every iteration of the outer loop.
So we have n/2 + n/4 + n/8 + n/16 + ... + n/2^k = O(n)
where k is the number of iterations of the outer loop (basically i).
Note that the time complexity is independent of x.
If there wasn't a n-- it would be n*n , O(n2) right?
Yes
A:
Another way to see it's O(n): You only enter the inner loop body if j <= n, and since j is positive, n must also be positive. But you decrease n every time, which you can only do O(n) times (where n is the starting value) and still have n positive.
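As a quick empirical check of the answers above (an illustrative TypeScript sketch, not part of the original question), counting the inner-loop iterations shows the total grows roughly linearly with the starting n:
// Reproduces the loop from the question and returns x, the number of times the inner body ran.
function innerIterations(nStart: number): number {
  let n = nStart;
  let x = 0;
  for (let i = 1; i <= n; i++) {
    for (let j = 1; j <= n; j++) {
      n--;
      x++;
    }
  }
  return x;
}
for (const n of [10, 100, 1000, 10000]) {
  console.log(n, innerIterations(n)); // stays proportional to n, nowhere near n*n
}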
| Time complexity of dependent nested loops | I was trying to find the time complexity of this nested loop
for (i = 1; i <= n; i++) {
for (j = 1; j <= n; j++) {
n--;
x++;
}
}
If there wasn't a n-- it would be n*n , O(n2) right?
But what if n reduces every time second loop runs?
What's the time complexity and big O of this nested loop?
If I consider n = 5, x equals 4, so the second loop runs 4 times in total
| [
"The time complexity of the code is O(n). n is reduced by half for every iteration of the outer loop.\nSo we have n/2 + n/4 + n/8 + n/16 + ... + n/2^k = O(n)\nwhere k is the number of iterations of the outer loop (basically i).\nNote that the time complexity is independent of x.\n\nIf there wasn't a n-- it would be n*n , O(n2) right?\n\nYes\n",
"Another way to see it's O(n): You only enter the inner loop body if j <= n, and since j is positive, n must also be positive. But you decrease n every time, which you can only do O(n) times (where n is the starting value) and still have n positive.\n"
] | [
3,
1
] | [] | [] | [
"algorithm",
"complexity_theory",
"time_complexity"
] | stackoverflow_0074670582_algorithm_complexity_theory_time_complexity.txt |
Q:
How can I add additional functionality to a class?
class Button
Button(float x, float y, float width, float height,
sf::Font* font, std::string text,
sf::Color idleColor, sf::Color hoverColor, sf::Color activeColor, bool visible = 1);
this->buttons["START_GAME"] = new Button(500, 40, 215, 55, //
&this->font, "Начать игру",
sf::Color(255, 50, 50, 200), sf::Color(255, 0, 0, 200), sf::Color(0, 0, 255, 200));
Used a translator
I have a Button class, below is the first photo, and in it I list its components. So, in SFML there is a setOutlineColor function, I want to know how to bring it into this class. I will also attach an example implementation below.
For example: we have text GAME, using sfml we can create text Game.
Then do this: Game.SetOutlineColor(sf::green)
How can I bring it into my class?
I've tried many things and still can't solve the problem.
A:
To add the setOutlineColor function to your Button class, you will need to include the <SFML/Graphics.hpp> header at the top of your file, and then add a member variable of type sf::Color to your Button class to store the outline color. For example:
#include <SFML/Graphics.hpp>
class Button {
public:
Button(float x, float y, float width, float height,
sf::Font* font, std::string text,
sf::Color idleColor, sf::Color hoverColor, sf::Color activeColor, bool visible = 1)
: x(x), y(y), width(width), height(height), font(font), text(text),
idleColor(idleColor), hoverColor(hoverColor), activeColor(activeColor), visible(visible)
{
// ...
}
void setOutlineColor(const sf::Color& color) {
outlineColor = color;
}
private:
// Other member variables...
sf::Color outlineColor;
};
Once you've added the outlineColor member variable and the setOutlineColor function to your Button class, you can use it to set the outline color for the button. For example:
Button button(500, 40, 215, 55, &font, "Начать игру",
sf::Color(255, 50, 50, 200), sf::Color(255, 0, 0, 200), sf::Color(0, 0, 255, 200));
button.setOutlineColor(sf::Color::Green);
Keep in mind that you will also need to update your Button class's rendering code to actually use the outlineColor when rendering the button's text.
| How can I add additional functionality to a class? |
class Button
Button(float x, float y, float width, float height,
sf::Font* font, std::string text,
sf::Color idleColor, sf::Color hoverColor, sf::Color activeColor, bool visible = 1);
this->buttons["START_GAME"] = new Button(500, 40, 215, 55, //
&this->font, "Начать игру",
sf::Color(255, 50, 50, 200), sf::Color(255, 0, 0, 200), sf::Color(0, 0, 255, 200));
Used a translator
I have a Button class, below is the first photo, and in it I list its components. So, in SFML there is a setOutlineColor function, I want to know how to bring it into this class. I will also attach an example implementation below.
For example: we have text GAME, using sfml we can create text Game.
Then do this: Game.SetOutlineColor(sf::green)
How can I bring it into my class?
I've tried many things and still can't solve the problem.
| [
"To add the setOutlineColor function to your Button class, you will need to include the <SFML/Graphics.hpp> header at the top of your file, and then add a member variable of type sf::Color to your Button class to store the outline color. For example:\n#include <SFML/Graphics.hpp>\n\nclass Button {\npublic:\n Button(float x, float y, float width, float height,\n sf::Font* font, std::string text,\n sf::Color idleColor, sf::Color hoverColor, sf::Color activeColor, bool visible = 1)\n : x(x), y(y), width(width), height(height), font(font), text(text),\n idleColor(idleColor), hoverColor(hoverColor), activeColor(activeColor), visible(visible)\n {\n // ...\n }\n\n void setOutlineColor(const sf::Color& color) {\n outlineColor = color;\n }\n\nprivate:\n // Other member variables...\n sf::Color outlineColor;\n};\n\nOnce you've added the outlineColor member variable and the setOutlineColor function to your Button class, you can use it to set the outline color for the button. For example:\nButton button(500, 40, 215, 55, &font, \"Начать игру\",\n sf::Color(255, 50, 50, 200), sf::Color(255, 0, 0, 200), sf::Color(0, 0, 255, 200));\nbutton.setOutlineColor(sf::Color::Green);\n\nKeep in mind that you will also need to update your Button class's rendering code to actually use the outlineColor when rendering the button's text.\n"
] | [
2
] | [] | [] | [
"c++",
"sfml"
] | stackoverflow_0074671108_c++_sfml.txt |
Q:
How to reset placeholder in Chakra Select using react-hook-form
I mapped the countries in select. When I reset the select the placeholder is still the country name, but the value is reset to undefined.
const countries = [
{ label: 'France', value: 'FR' },
{ label: 'Germany', value: 'DE' },
];
const defaultValues = {
country: undefined,
};
const Select: FC<Props> = ({
options,
onSelect,
placeholder = 'Select option',
...props
}) => {
const handleSelect = useCallback(
(ev) => {
const { value } = ev.target;
const select = value ? options.find((opt) => opt.value === value) : null;
onSelect(select);
},
[options]
);
return (
<ChakraSelect
onChange={handleSelect}
placeholder={placeholder}
{...props}
>
{options?.map((opt) => (
<option key={opt.label} value={opt.value}>
{opt.label}
</option>
))}
</ChakraSelect>
);
};
export default Select;
<FormControl isInvalid={errors.country}>
<Controller
control={control}
name='country'
render={({ field: { onChange, value } }) => {
return (
<Select
onSelect={onChange}
options={countries}
placeholder='Select country'
value={value?.value}
/>
);
}}
rules={{
required: {
value: true,
message: 'Country is required',
},
}}
/>
Reset using react-hook-form:
reset({...defaultValues});
Here is the result after reset, value: undefined, but the placeholder is still the country name:
A:
You can simply set the placeholder prop to the desired default value. For example, if you want the placeholder to be "Select option" after the form is reset, you can add the following code to your Select component:
const Select: FC<Props> = ({
options,
onSelect,
placeholder = 'Select option', // default placeholder value
...props
}) => {
// ...
return (
<ChakraSelect
onChange={handleSelect}
placeholder={placeholder}
{...props}
>
{options?.map((opt) => (
<option key={opt.label} value={opt.value}>
{opt.label}
</option>
))}
</ChakraSelect>
);
};
Then, when you reset the form using react-hook-form, the placeholder in the Chakra Select will be reset to the default value of "Select option".
A:
It looks like the issue is that when the form is reset, the value of the Select component is not being updated. This is because the value prop of the Select component is being set to value.value, which is undefined after the form is reset.
To fix this, you could update the render prop of the Controller component to set the value prop of the Select component to the entire value object from the form's state, rather than just the value property of that object. This way, when the form is reset, the value prop of the Select component will also be reset to undefined.
Here is an example of how you could do this:
<FormControl isInvalid={errors.country}>
<Controller
control={control}
name="country"
render={({ field: { onChange, value } }) => {
return (
<Select
onSelect={onChange}
options={countries}
placeholder="Select country"
value={value} // Set the value prop to the entire value object
/>
);
}}
rules={{
required: {
value: true,
message: "Country is required",
},
}}
/>
</FormControl>
After making this change, when the form is reset, the value prop of the Select component will be reset to undefined, and the placeholder will be displayed.
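Another sketch worth considering (it assumes Chakra's Select renders the placeholder prop as an <option value="">, which is not confirmed anywhere in this thread): map the reset value (undefined) to an empty string so the underlying select stays controlled and falls back to the placeholder option.
<FormControl isInvalid={errors.country}>
  <Controller
    control={control}
    name="country"
    render={({ field: { onChange, value } }) => (
      <Select
        onSelect={onChange}
        options={countries}
        placeholder="Select country"
        value={value?.value ?? ''} // '' re-selects the placeholder option after reset({...defaultValues})
      />
    )}
    rules={{
      required: {
        value: true,
        message: 'Country is required',
      },
    }}
  />
</FormControl>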
| How to reset placeholder in Chakra Select using react-hook-form | I mapped the countries in select. When I reset the select the placeholder is still the country name, but the value is reset to undefined.
const countries = [
{ label: 'France', value: 'FR' },
{ label: 'Germany', value: 'DE' },
];
const defaultValues = {
country: undefined,
};
const Select: FC<Props> = ({
options,
onSelect,
placeholder = 'Select option',
...props
}) => {
const handleSelect = useCallback(
(ev) => {
const { value } = ev.target;
const select = value ? options.find((opt) => opt.value === value) : null;
onSelect(select);
},
[options]
);
return (
<ChakraSelect
onChange={handleSelect}
placeholder={placeholder}
{...props}
>
{options?.map((opt) => (
<option key={opt.label} value={opt.value}>
{opt.label}
</option>
))}
</ChakraSelect>
);
};
export default Select;
<FormControl isInvalid={errors.country}>
<Controller
control={control}
name='country'
render={({ field: { onChange, value } }) => {
return (
<Select
onSelect={onChange}
options={countries}
placeholder='Select country'
value={value?.value}
/>
);
}}
rules={{
required: {
value: true,
message: 'Country is required',
},
}}
/>
Reset using react-hook-form:
reset({...defaultValues});
Here is the result after reset, value: undefined, but the placeholder is still the country name:
| [
"You can simply set the placeholder prop to the desired default value. For example, if you want the placeholder to be \"Select option\" after the form is reset, you can add the following code to your Select component:\nconst Select: FC<Props> = ({\n options,\n onSelect,\n placeholder = 'Select option', // default placeholder value\n ...props\n}) => {\n // ...\n\n return (\n <ChakraSelect\n onChange={handleSelect}\n placeholder={placeholder}\n {...props}\n >\n {options?.map((opt) => (\n <option key={opt.label} value={opt.value}>\n {opt.label}\n </option>\n ))}\n </ChakraSelect>\n );\n};\n\nThen, when you reset the form using react-hook-form, the placeholder in the Chakra Select will be reset to the default value of \"Select option\".\n",
"It looks like the issue is that when the form is reset, the value of the Select component is not being updated. This is because the value prop of the Select component is being set to value.value, which is undefined after the form is reset.\nTo fix this, you could update the render prop of the Controller component to set the value prop of the Select component to the entire value object from the form's state, rather than just the value property of that object. This way, when the form is reset, the value prop of the Select component will also be reset to undefined.\nHere is an example of how you could do this:\n<FormControl isInvalid={errors.country}>\n <Controller\n control={control}\n name=\"country\"\n render={({ field: { onChange, value } }) => {\n return (\n <Select\n onSelect={onChange}\n options={countries}\n placeholder=\"Select country\"\n value={value} // Set the value prop to the entire value object\n />\n );\n }}\n rules={{\n required: {\n value: true,\n message: \"Country is required\",\n },\n }}\n />\n</FormControl>\n\nAfter making this change, when the form is reset, the value prop of the Select component will be reset to undefined, and the placeholder will be displayed.\n"
] | [
0,
0
] | [] | [] | [
"chakra_ui",
"react_hook_form",
"reactjs"
] | stackoverflow_0074671027_chakra_ui_react_hook_form_reactjs.txt |
Q:
Regex match a string of 18 characters (4 digits + 14 letters uppercase)
please help me find a regex that matches this combination
here are a few examples of the strings I want; I hope it helps you
1st example "HBYVHDV86DBYF44CGB"
2nd example "NGCDV15DVDB81JHDBR"
3rd example "MOX48DVPLYBJHD63JH"
As you can see, there is something special , the four numbers are divided into two parts on the string .
1st example "_ 86 _ 44 _"
2nd example "_ 15 _ 81 _"
3rd example "_ 48 _ 63 _"
here is an example of the problem
pgfbS63RKSFK63TNEABHHHHH
bhuhu56
PGSCS63RKSFK63TNEA
igi65TGHkj
pgfbS63RKSFK63TNEAB
PGSCS6R8KSFK63TNEA
PGSCS63RKSFKT15NEA
I did try this regex [a-zA-Z]+[0-9]+[a-zA-Z]+[0-9]+[a-zA-Z]+
here is the result
pgfbS63RKSFK63TNEABHHHHH
PGSCS63RKSFK63TNEA
pgfbS63RKSFK63TNEAB
PGSCS6R8KSFK
PGSCS63RKSFKT15NEA
what i was expecting
PGSCS63RKSFK63TNEA
PGSCS63RKSFKT15NEA
A:
You can use look-ahead to test one of the conditions, like the total length of the input. The other conditions can be expressed close to what you proposed, but with double digits (\d\d) and end-of-input anchors (^ and $)
^(?=\w{18}$)[A-Za-z]+\d\d[A-Za-z]+\d\d[A-Za-z]+$
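To sanity-check the pattern against the strings listed in the question, a small test harness can be used (shown here as a TypeScript sketch; the same pattern should work unchanged with Python's re module):
const pattern = /^(?=\w{18}$)[A-Za-z]+\d\d[A-Za-z]+\d\d[A-Za-z]+$/;
const samples = [
  'pgfbS63RKSFK63TNEABHHHHH', // 24 chars -> no match
  'bhuhu56',                  // too short -> no match
  'PGSCS63RKSFK63TNEA',       // 18 chars, two digit pairs -> match
  'igi65TGHkj',               // too short -> no match
  'pgfbS63RKSFK63TNEAB',      // 19 chars -> no match
  'PGSCS6R8KSFK63TNEA',       // digits not in pairs -> no match
  'PGSCS63RKSFKT15NEA',       // 18 chars, two digit pairs -> match
];
for (const s of samples) {
  console.log(s.padEnd(26), pattern.test(s));
}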
| Regex match a string of 18 characters (4 digits + 14 letters uppercase) | please help me find a regex that matches this combination
here are a few examples of the strings I want; I hope it helps you
1st example "HBYVHDV86DBYF44CGB"
2nd example "NGCDV15DVDB81JHDBR"
3rd example "MOX48DVPLYBJHD63JH"
As you can see, there is something special , the four numbers are divided into two parts on the string .
1st example "_ 86 _ 44 _"
2nd example "_ 15 _ 81 _"
3rd example "_ 48 _ 63 _"
here is an example of the problem
pgfbS63RKSFK63TNEABHHHHH
bhuhu56
PGSCS63RKSFK63TNEA
igi65TGHkj
pgfbS63RKSFK63TNEAB
PGSCS6R8KSFK63TNEA
PGSCS63RKSFKT15NEA
I did try this regex [a-zA-Z]+[0-9]+[a-zA-Z]+[0-9]+[a-zA-Z]+
here is the result
pgfbS63RKSFK63TNEABHHHHH
PGSCS63RKSFK63TNEA
pgfbS63RKSFK63TNEAB
PGSCS6R8KSFK
PGSCS63RKSFKT15NEA
what i was expecting
PGSCS63RKSFK63TNEA
PGSCS63RKSFKT15NEA
| [
"You can use look-ahead to test one of the conditions, like the total length of the input. The other conditions can be expressed close to what you proposed, but with double digits (\\d\\d) and end-of-input anchors (^ and $)\n^(?=\\w{18}$)[A-Za-z]+\\d\\d[A-Za-z]+\\d\\d[A-Za-z]+$\n\n"
] | [
0
] | [] | [] | [
"digits",
"letter",
"python",
"regex",
"uppercase"
] | stackoverflow_0074668785_digits_letter_python_regex_uppercase.txt |
Q:
I receive an Application error while trying to access the app in production
My Heroku App is not working anymore and is giving me back the attached error. Can someone help in troubleshooting? I have read about a plan upgrade needed in some cases, is this error related to upgrade needed?
The app had been working properly and suddenly started giving me back this error. I discovered it today.
A:
Under the Overview tab what do you see next to:
web npm start
It should say On. If not you need to click "Configure Dynos" and turn it on.
| I receive an Application error while trying to access the app in production | My Heroku App is not working anymore and is giving me back the attached error. Can someone help in troubleshooting? I have read about a plan upgrade needed in some cases, is this error related to upgrade needed?
The app had been working properly and suddenly started giving me back this error. I discovered it today.
| [
"Under the Overview tab what do you see next to:\n\nweb npm start\n\nIt should say On. If not you need to click \"Configure Dynos\" and turn it on.\n"
] | [
0
] | [] | [] | [
"build",
"tail"
] | stackoverflow_0074647916_build_tail.txt |
Q:
Module not found: Can't resolve 'bootstrap/dist/css/bootstrap-theme.css' in 'C:\react-form-validation-demo\src'
I am a newbie in react.js. I have been following the instructions in how-to-do-simple-form-validation-in-reactjs and I can run http://localhost:3000/
But after adding bootstrap in index.js, I got this error
Failed to compile ./src/index.js Module not found: Can't resolve
'bootstrap/dist/css/bootstrap-theme.css' in
'C:\react-form-validation-demo\src'
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import 'bootstrap/dist/css/bootstrap.css';
import 'bootstrap/dist/css/bootstrap-theme.css';
import App from './App';
import registerServiceWorker from './registerServiceWorker';
ReactDOM.render(<App />, document.getElementById('root'));
registerServiceWorker();
If I remove these two lines below, it works:
import 'bootstrap/dist/css/bootstrap.css';
import 'bootstrap/dist/css/bootstrap-theme.css';
A:
In Bootstrap version 4, the file 'bootstrap-theme.css' has been removed, according to the issue on GitHub. See the change history for details.
If you want to continue using version 4, remove this import, else downgrade to version 3.
A:
This error is caused by a misconfiguration, version mismatch or corrupted bootstrap install.
If you've got bootstrap and react-bootstrap installed already via some variation of:
npm install --save bootstrap@^4.0.0-alpha.6 react-bootstrap@^0.32.1
(Check if your package.json contains "bootstrap" and "react-bootstrap" if your not sure.)
Just install a different version of bootstrap and rebuild your project. That should replace or add that file (bootstrap/dist/css/bootstrap-theme.css) to that folder.
A lower version of bootstrap worked for me as recommended in my create-react-app generated README.md:
npm install --save react-bootstrap bootstrap@3
Adding Bootstrap
You don’t have to use React
Bootstrap together with React but
it is a popular library for integrating Bootstrap with React apps. If
you need it, you can integrate it with Create React App by following
these steps:
Install React Bootstrap and Bootstrap from npm. React Bootstrap does
not include Bootstrap CSS so this needs to be installed as well:
sh npm install --save react-bootstrap bootstrap@3
Alternatively you may use yarn:
sh yarn add react-bootstrap bootstrap@3
A:
reinstall the bootstrap and don't forget to mention --save to the command to save it to the node modules this solved the problem for me.
sudo npm install bootstrap --save
A:
Adding the solution to the cause I ran into - it was my miss, but still unclear when reading the other answers.
If react-bootstrap is installed solo like this:
npm install --save react-bootstrap
then bootstrap will be missing. Try the following to correct:
npm install --save bootstrap
The correct way to install initially is as follows:
npm install --save react-bootstrap bootstrap
A:
Install the bootstrap using the below command.
npm install react-bootstrap bootstrap
Then include the below line in the index.js file.
import '../node_modules/bootstrap/dist/css/bootstrap.min.css';
A:
I got the same problem and this is how I resolved the problem.
First of all delete the node_modules directory from the project and then install bootstrap with below command.
npm install
npm i bootstrap
Then, you can add the below import to the index.js
import 'bootstrap/dist/css/bootstrap.min.css';
A:
npm install --save bootstrap@^4.0.0-alpha.6 react-bootstrap@^0.32.1
i face same problem but after installing above packages it work well,
may b you should also install same packages
A:
import '../node_modules/bootstrap/dist/css/bootstrap.min.css';
Source: github
A:
Remove
import 'bootstrap/dist/css/bootstrap-theme.css'
That worked for me.
A:
I faced the same issue, but my project was not created with create-react-app, mine was using webpack.
I can answer how I resolved it for webpack, maybe you can find the changes required for create-react-app after you go through this answer.
First I installed these two modules-
style-loader
css-loader
Then I modified the webpack.config.js file and added these lines inside the module.rules array.
{
test: /\.css$/,
loader: 'style-loader'
},
{
test: /\.css$/,
loader: 'css-loader',
query: {
modules: true,
localIdentName: '[name]__[local]___[hash:base64:5]'
}
}
This resolved the error in my project. You can also refer to this Github issue thread once, may be that may help you: https://github.com/facebook/create-react-app/issues/301
A:
Check if you have bootstrap in package.json file .
If it is there remove and use the below commands to reinstall or install if it is not there.
npm install --save bootstrap
A:
You can also check that your package.json imported name of dependancy are not the same.
A:
It is possible that when you installed bootstrap you were not in the directory that create-react-app created (that was the source of my problem). You can check with: npm list, which looks for locally installed packages.
Then scroll all the way to the top and see if you can find bootstrap listed there.
A:
Stop the npm server than restart it. It'll be solved if you installed the packages perfectly.
A:
Make sure bootstrap and react-bootstrap are installed.
npm install bootstrap react-bootstrap
A:
import '../node_modules/bootstrap/dist/css/bootstrap.min.css';
A:
I had simply reinstalled the previous Bootstrap version which I installed in my project folder and it worked.
A:
from the "resource/js/app.js" remove require('./bootstrap')
A:
Some times is writing error
bootstrap/dist/css/bootstrap.min.css
boostrap/dist/css/bootstrap.min.css
check this tip, sometimes happend
A:
You need to install both packages reactstrap and bootstrap by running these commands:
npm i reactstrap
npm i bootstrap
| Module not found: Can't resolve 'bootstrap/dist/css/bootstrap-theme.css' in 'C:\react-form-validation-demo\src' | I am a newbie in react.js. I have been following the instructions in how-to-do-simple-form-validation-in-reactjs and I can run http://localhost:3000/
But after adding bootstrap in index.js, I got this error
Failed to compile ./src/index.js Module not found: Can't resolve
'bootstrap/dist/css/bootstrap-theme.css' in
'C:\react-form-validation-demo\src'
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import 'bootstrap/dist/css/bootstrap.css';
import 'bootstrap/dist/css/bootstrap-theme.css';
import App from './App';
import registerServiceWorker from './registerServiceWorker';
ReactDOM.render(<App />, document.getElementById('root'));
registerServiceWorker();
If I remove these two lines below, it works:
import 'bootstrap/dist/css/bootstrap.css';
import 'bootstrap/dist/css/bootstrap-theme.css';
| [
"In Bootstrap version 4, the file 'bootstrap-theme.css' has been removed, according to the issue on GitHub. See the change history for details.\nIf you want to continue using version 4, remove this import, else downgrade to version 3.\n",
"This error is caused by a misconfiguration, version mismatch or corrupted bootstrap install.\nIf you've got bootstrap and react-bootstrap installed already via some variation of:\n\nnpm install --save bootstrap@^4.0.0-alpha.6 react-bootstrap@^0.32.1\n\n(Check if your package.json contains \"bootstrap\" and \"react-bootstrap\" if your not sure.)\nJust install a different version of bootstrap and rebuild your project. That should replace or add that file (bootstrap/dist/css/bootstrap-theme.css) to that folder. \nA lower version of bootstrap worked for me as recommended in my create-react-app generated README.md:\n\nnpm install --save react-bootstrap bootstrap@3\nAdding Bootstrap\nYou don’t have to use React\n Bootstrap together with React but\n it is a popular library for integrating Bootstrap with React apps. If\n you need it, you can integrate it with Create React App by following\n these steps:\nInstall React Bootstrap and Bootstrap from npm. React Bootstrap does\n not include Bootstrap CSS so this needs to be installed as well:\nsh npm install --save react-bootstrap bootstrap@3\nAlternatively you may use yarn:\nsh yarn add react-bootstrap bootstrap@3\n\n",
"reinstall the bootstrap and don't forget to mention --save to the command to save it to the node modules this solved the problem for me.\nsudo npm install bootstrap --save \n\n",
"Adding the solution to the cause I ran into - it was my miss, but still unclear when reading the other answers.\nIf react-bootstrap is installed solo like this:\nnpm install --save react-bootstrap\n\nthen bootstrap will be missing. Try the following to correct:\nnpm install --save bootstrap\n\nThe correct way to install initially is as follows:\nnpm install --save react-bootstrap bootstrap\n\n",
"Install the bootstrap using the below command.\nnpm install react-bootstrap bootstrap\n\nThen include the below line in the index.js file.\nimport '../node_modules/bootstrap/dist/css/bootstrap.min.css';\n\n",
"I got the same problem and this is how I resolved the problem. \nFirst of all delete the node_modules directory from the project and then install bootstrap with below command.\nnpm install \nnpm i bootstrap\n\nThen, you can add the below import to the index.js\nimport 'bootstrap/dist/css/bootstrap.min.css';\n\n",
"npm install --save bootstrap@^4.0.0-alpha.6 react-bootstrap@^0.32.1\n\ni face same problem but after installing above packages it work well,\nmay b you should also install same packages \n",
"import '../node_modules/bootstrap/dist/css/bootstrap.min.css';\n\nSource: github\n",
"Remove\nimport 'bootstrap/dist/css/bootstrap-theme.css'\n\nThat worked for me.\n",
"I faced the same issue, but my project was not created with create-react-app, mine was using webpack. \nI can answer how I resolved it for webpack, maybe you can find the changes required for create-react-app after you go through this answer.\nFirst I installed these two modules- \nstyle-loader \ncss-loader\nThen I modified the webpack.config.js file and added these lines inside the module.rules array.\n\n\n{\r\n test: /\\.css$/,\r\n loader: 'style-loader'\r\n},\r\n{\r\n test: /\\.css$/,\r\n loader: 'css-loader',\r\n query: {\r\n modules: true,\r\n localIdentName: '[name]__[local]___[hash:base64:5]'\r\n }\r\n}\n\n\n\nThis resolved the error in my project. You can also refer to this Github issue thread once, may be that may help you: https://github.com/facebook/create-react-app/issues/301\n",
"Check if you have bootstrap in package.json file .\nIf it is there remove and use the below commands to reinstall or install if it is not there.\nnpm install --save bootstrap \n\n",
"\nYou can also check that your package.json imported name of dependancy are not the same.\n",
"It is possible that when you installed bootstrap you were not in the directory that create-react-app created (that was the source of my problem). You can check with: npm list, which looks for locally installed packages.\nThen scroll all the way to the top and see if you can find bootstrap listed there. \n",
"Stop the npm server than restart it. It'll be solved if you installed the packages perfectly.\n",
"Make sure bootstrap and react-bootstrap are installed.\nnpm install bootstrap react-bootstrap\n\n",
"import '../node_modules/bootstrap/dist/css/bootstrap.min.css';\n\n",
"I had simply reinstalled the previous Bootstrap version which I installed in my project folder and it worked.\n",
"from the \"resource/js/app.js\" remove require('./bootstrap')\n",
"Some times is writing error\nbootstrap/dist/css/bootstrap.min.css\nboostrap/dist/css/bootstrap.min.css\ncheck this tip, sometimes happend\n",
"You need to install both packages reactstrap and bootstrap by running these commands:\nnpm i reactstrap\nnpm i bootstrap\n\n"
] | [
24,
19,
19,
13,
8,
7,
5,
4,
3,
2,
2,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"reactjs"
] | stackoverflow_0048847885_reactjs.txt |
Q:
Is there a method in Julia to convert a UUID type to String type
I've recently started looking at Julia and I'm trying to export a DataFrame which contains a UUID type to Excel using the XLSX package. This results in an "ERROR: Unsupported datatype UUID for writing data to Excel file."
I've also tried converting a UUID to String which results in
ERROR: MethodError: no method matching String(::UUID)
Closest candidates are:
String(::String) at boot.jl:358
String(::Core.Compiler.LazyString) at strings/lazy.jl:46
String(::LazyString) at strings/lazy.jl:46
Is there any way internally within Julia to support this or will I need to resort to some other form of data wrangling?
A:
julia> string(uuid1())
"56b663c0-7358-11ed-116a-abcf9be287ed"
works nicely, but String(uuid1()) doesn't (gives error in OP). So, I guess you can convert the UUID into a string as above and then save/use it.
For example, in a DataFrame context:
using UUIDs, DataFrames
df = DataFrame(id=[uuid1() for i=1:5])
eltype(df.id) == UUID
transform!(df, :id => ByRow(string) => :id)
eltype(df.id) == String
# now save DataFrame as usual
Suppose one wants a new XLSX file with some UUIDs:
using XLSX
XLSX.openxlsx("/tmp/new.xlsx", mode="w") do xf
sheet = xf[1]
someuuids = [uuid1() for i in 1:10]
sheet[1,:] = sheet[1,:] = string.(someuuids)
end
| Is there a method in Julia to convert a UUID type to String type | I've recently started looking at Julia and I'm trying to export a DataFrame which contains a UUID type to Excel using the XLSX package. This results in an "ERROR: Unsupported datatype UUID for writing data to Excel file."
I've also tried converting a UUID to String which results in
ERROR: MethodError: no method matching String(::UUID)
Closest candidates are:
String(::String) at boot.jl:358
String(::Core.Compiler.LazyString) at strings/lazy.jl:46
String(::LazyString) at strings/lazy.jl:46
Is there any way internally within Julia to support this or will I need to resort to some other form of data wrangling?
| [
"julia> string(uuid1())\n\"56b663c0-7358-11ed-116a-abcf9be287ed\"\n\nworks nicely, but String(uuid1()) doesn't (gives error in OP). So, I guess you can convert the UUID into a string as above and then save/use it.\nFor example, in a DataFrame context:\nusing UUIDs, DataFrames\n\ndf = DataFrame(id=[uuid1() for i=1:5])\neltype(df.id) == UUID\n\ntransform!(df, :id => ByRow(string) => :id)\neltype(df.id) == String\n\n# now save DataFrame as usual\n\nSuppose one wants a new XLSX file with some UUIDs:\nusing XLSX\n\nXLSX.openxlsx(\"/tmp/new.xlsx\", mode=\"w\") do xf\n sheet = xf[1]\n someuuids = [uuid1() for i in 1:10]\n sheet[1,:] = sheet[1,:] = string.(someuuids)\nend\n\n"
] | [
1
] | [] | [] | [
"julia",
"string",
"type_conversion",
"uuid"
] | stackoverflow_0074671165_julia_string_type_conversion_uuid.txt |
Q:
How to assign to char from const char *
I have the following C code
int main (int argc, const char *argv[]){
int counter = 1;
char input[1000000];
while(argv[counter] != NULL){
input[counter] = argv[counter];
}
return 0;
}
This gives the warning incompatible pointer to integer conversion assigning to 'char' from 'const char *' [-Wint-conversion]
I can loop through and print out the values of argv[counter], so I'm confused why I can't set a variable equal to them.
A:
argv is an array of const string while input is an array of character. So your first step is to make input array of string by doing this char *input[1000000] and lastly you have to cast each const string of argv before assigning it to input by doing this input[counter] = (char *)argv[counter].
Edit Suggested By @David Ranieri
You can remove const from argv to avoid the casting.
ALSO
As @NoDakker said in the comment section. You have to increment counter to avoid infinite loop.
Solution
#include <stdlib.h>
int main (int argc, const char *argv[]){
int counter = 1;
char *input[1000000];
while(argv[counter] != NULL){
input[counter] = (char *)argv[counter];
counter++;
}
return 0;
}
| How to assign to char from const char * | I have the following C code
int main (int argc, const char *argv[]){
int counter = 1;
char input[1000000];
while(argv[counter] != NULL){
input[counter] = argv[counter];
}
return 0;
}
This gives the warning incompatible pointer to integer conversion assigning to 'char' from 'const char *' [-Wint-conversion]
I can loop through and print out the values of argv[counter], so I'm confused why I can't set a variable equal to them.
| [
"argv is an array of const string while input is an array of character. So your first step is to make input array of string by doing this char *input[1000000] and lastly you have to cast each const string of argv before assigning it to input by doing this input[counter] = (char *)argv[counter].\nEdit Suggested By @David Ranieri\nYou can remove const from argv to avoid the casting.\nALSO\nAs @NoDakker said in the comment section. You have to increment counter to avoid infinite loop.\nSolution\n#include <stdlib.h>\n\nint main (int argc, const char *argv[]){\n\nint counter = 1;\nchar *input[1000000];\n\nwhile(argv[counter] != NULL){\ninput[counter] = (char *)argv[counter];\ncounter++;\n}\n\nreturn 0;\n}\n\n"
] | [
1
] | [] | [] | [
"arrays",
"c",
"char",
"pointers",
"type_conversion"
] | stackoverflow_0074671142_arrays_c_char_pointers_type_conversion.txt |
Q:
Can I prevent an `AsyncGenerator` from yielding after its `return()` method has been invoked?
AsyncGenerator.prototype.return() - JavaScript | MDN states:
The return() method of an async generator acts as if a return statement is inserted in the generator's body at the current suspended position, which finishes the generator and allows the generator to perform any cleanup tasks when combined with a try...finally block.
Why then does the following code print 0–3 rather than only 0–2?
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const values = (async function* delayedIntegers() {
let n = 0;
while (true) {
yield n++;
await delay(100);
}
})();
await Promise.all([
(async () => {
for await (const value of values) console.log(value);
})(),
(async () => {
await delay(250);
values.return();
})(),
]);
I tried adding log statements to better understand where the "current suspended position" is and from what I can tell when I call the return() method the AsyncGenerator instance isn't suspended (the body execution isn't at a yield statement) and instead of returning once reaching the yield statement the next value is yielded and then suspended at which point the "return" finally happens.
Is there any way to detect that the return() method has been invoked and not yield afterwards?
I can implement the AsyncIterator interface myself but then I lose the yield syntax supported by async generators:
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const values = (() => {
let n = 0;
let done = false;
return {
[Symbol.asyncIterator]() {
return this;
},
async next() {
if (done) return { done, value: undefined };
if (n !== 0) {
await delay(100);
if (done) return { done, value: undefined };
}
return { done, value: n++ };
},
async return() {
done = true;
return { done, value: undefined };
},
};
})();
await Promise.all([
(async () => {
for await (const value of values) console.log(value);
})(),
(async () => {
await delay(250);
values.return();
})(),
]);
A:
It looks like you're correct—The values.return() method is called when the generator is not suspended at a yield statement, so it doesn't immediately stop the generator. Instead, the return method sets a flag that indicates that the generator should stop the next time it reaches a yield statement.
In the code you posted, the generator is continuously generating values, so it will eventually reach a yield statement and stop. That's why you see the numbers 0-3 printed, even though you only expected to see 0-2. There is no way to detect that the return method has been called without suspending the generator at a yield statement.
You can use the AbortSignal class to signal that the generator should stop generating values. You can use the AbortSignal class in your code like this:
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const controller = new AbortController();
const { signal } = controller;
const values = (async function* delayedIntegers(signal) {
let n = 0;
while (true) {
if (signal.aborted) break;
yield n++;
await delay(100);
}
})(signal);
await Promise.all([
(async () => {
for await (const value of values) console.log(value);
})(),
(async () => {
await delay(250);
controller.abort();
})(),
]);
In this code, we added an AbortSignal object as an argument to the delayedIntegers generator function. We also added a check at the beginning of the generator's main loop to see if the aborted property of the signal object is true. If it is, the break statement is executed, which stops the generator immediately.
Finally, we call the abort() method of the AbortController object when we want to stop the generator. This causes the generator to stop immediately, so you should only see the numbers 0-2 printed.
As suggested by @jsejcksn, using the AbortSignal class is a more elegant solution than using a boolean flag, because it allows you to easily signal to the generator that it should stop generating values without relying on an external variable.
A:
Why does the code print 0–3 rather than only 0–2? From what I can tell, when I call the return() method, the AsyncGenerator instance isn't suspended (the body execution isn't at a yield statement) and instead of returning once reaching the yield statement the next value is yielded and then suspended at which point the "return" finally happens.
Yes, precisely this is what happens. The generator is already running because the for await … of loop did call its .next() method, and so the generator will complete that before considering the .return() call.
All the methods that you invoke on an async generator are queued. (In a sync generator, you'd get a "TypeError: Generator is already running" instead). One can demonstrate this by immediately calling next multiple times:
const values = (async function*() {
let i=0; while (true) {
await new Promise(r => { setTimeout(r, 1000); });
yield i++;
}
})();
values.next().then(console.log, console.error);
values.next().then(console.log, console.error);
values.next().then(console.log, console.error);
values.return('done').then(console.log, console.error);
values.next().then(console.log, console.error);
Is there any way to detect that the return() method has been invoked and not yield afterwards?
No, not from within the generator. And really you probably still should yield the value if you already expended the effort to produce it.
It sounds like what you want to do is to ignore the produced value when you want the generator to stop. You should do that in your for await … of loop - and you can also use it to stop the generator by using a break statement:
const delay = (ms) => new Promise((resolve) => {
setTimeout(resolve, ms);
});
async function* delayedIntegers() {
let n = 0;
while (true) {
yield n++;
await delay(1000);
}
}
(async function main() {
const start = Date.now();
const values = delayedIntegers();
for await (const value of values) {
if (Date.now() - start > 2500) {
console.log('done:', value);
break;
}
console.log(value);
}
})();
But if you really want to abort the generator from the outside, you need an out-of-band channel to signal the cancellation. You can use an AbortSignal for this:
const delay = (ms, signal) => new Promise((resolve, reject) => {
function done() {
resolve();
signal?.removeEventListener("abort", stop);
}
function stop() {
reject(this.reason);
clearTimeout(handle);
}
signal?.throwIfAborted();
const handle = setTimeout(done, ms);
signal?.addEventListener("abort", stop);
});
async function* delayedIntegers(signal) {
let n = 0;
while (true) {
yield n++;
await delay(1000, signal);
}
}
(async function main() {
try {
const values = delayedIntegers(AbortSignal.timeout(2500));
for await (const value of values) {
console.log(value);
}
} catch(e) {
if (e.name != "TimeoutError") throw e;
console.log("done");
}
})();
This will actually permit to stop the generator during the timeout, not after the full second has elapsed.
A:
The return() method of an async generator behaves as if a return statement is inserted in the generator function at the current suspended position. This means that if the generator function is currently suspended at a yield statement, it will not execute the subsequent code in the function, including any further yield statements, and will instead immediately finish the generator function and return the value provided to the return() method.
In the code you provided, the values async generator is suspended at a yield statement when the return() method is called. However, because the return() method is called asynchronously, after a delay of 250ms, the generator function continues to execute and yields the next value before it is finally finished by the return() method. This is why the code prints 0-3 rather than 0-2.
To prevent the generator function from yielding further values after the return() method is called, you can add a check inside the generator function to see if the return() method has been called, and if so, immediately finish the generator function without yielding any more values. For example:
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const values = (async function* delayedIntegers() {
let n = 0;
let shouldReturn = false;
while (true) {
if (shouldReturn) return;
yield n++;
await delay(100);
}
})();
// Register a callback that sets the shouldReturn flag when the return() method is called
values.return(() => { shouldReturn = true });
await Promise.all([
(async () => {
for await (const value of values) console.log(value);
})(),
(async () => {
await delay(250);
values.return();
})(),
]);
Alternatively, you can use a try...finally block to ensure that any necessary cleanup tasks are performed when the generator function is finished, regardless of whether it is finished normally or by calling the return() method. For example:
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const values = (async function* delayedIntegers() {
let n = 0;
try {
while (true) {
yield n++;
await delay(100);
}
} finally {
// Perform any necessary cleanup tasks here
}
})();
await Promise.all([
(async () => {
for await (const value of values) console.log(value);
})(),
(async () => {
await delay(250);
values.return();
})(),
]);
You can also implement the async iterator interface yourself, which allows you to customize the behavior of the next(), return(), and throw() methods. This will give you more control over how the iterator behaves, but it does require more code than using an async generator. Here is an example of how you might do this:
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const values = (() => {
let n = 0;
let done = false;
return {
[Symbol.asyncIterator]() {
return this;
},
async next() {
if (done) return { done, value: undefined };
if (n !== 0) {
await delay(100);
if (done) return { done, value: undefined };
}
return { done, value: n++ };
},
async return() {
done = true;
return { done, value: undefined };
},
};
})();
await Promise.all([
(async () => {
for await (const value of values) console.log(value);
})(),
| Can I prevent an `AsyncGenerator` from yielding after its `return()` method has been invoked? | AsyncGenerator.prototype.return() - JavaScript | MDN states:
The return() method of an async generator acts as if a return statement is inserted in the generator's body at the current suspended position, which finishes the generator and allows the generator to perform any cleanup tasks when combined with a try...finally block.
Why then does the following code print 0–3 rather than only 0–2?
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const values = (async function* delayedIntegers() {
let n = 0;
while (true) {
yield n++;
await delay(100);
}
})();
await Promise.all([
(async () => {
for await (const value of values) console.log(value);
})(),
(async () => {
await delay(250);
values.return();
})(),
]);
I tried adding log statements to better understand where the "current suspended position" is and from what I can tell when I call the return() method the AsyncGenerator instance isn't suspended (the body execution isn't at a yield statement) and instead of returning once reaching the yield statement the next value is yielded and then suspended at which point the "return" finally happens.
Is there any way to detect that the return() method has been invoked and not yield afterwards?
I can implement the AsyncIterator interface myself but then I lose the yield syntax supported by async generators:
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const values = (() => {
let n = 0;
let done = false;
return {
[Symbol.asyncIterator]() {
return this;
},
async next() {
if (done) return { done, value: undefined };
if (n !== 0) {
await delay(100);
if (done) return { done, value: undefined };
}
return { done, value: n++ };
},
async return() {
done = true;
return { done, value: undefined };
},
};
})();
await Promise.all([
(async () => {
for await (const value of values) console.log(value);
})(),
(async () => {
await delay(250);
values.return();
})(),
]);
| [
"It looks like you're correct—The values.return() method is called when the generator is not suspended at a yield statement, so it doesn't immediately stop the generator. Instead, the return method sets a flag that indicates that the generator should stop the next time it reaches a yield statement.\nIn the code you posted, the generator is continuously generating values, so it will eventually reach a yield statement and stop. That's why you see the numbers 0-3 printed, even though you only expected to see 0-2. There is no way to detect that the return method has been called without suspending the generator at a yield statement.\nYou can use the AbortSignal class to signal that the generator should stop generating values. You can use the AbortSignal class in your code like this:\nconst delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));\n\nconst controller = new AbortController();\nconst { signal } = controller;\n\nconst values = (async function* delayedIntegers(signal) {\n let n = 0;\n while (true) {\n if (signal.aborted) break;\n yield n++;\n await delay(100);\n }\n})(signal);\n\nawait Promise.all([\n (async () => {\n for await (const value of values) console.log(value);\n })(),\n (async () => {\n await delay(250);\n controller.abort();\n })(),\n]);\n\nIn this code, we added an AbortSignal object as an argument to the delayedIntegers generator function. We also added a check at the beginning of the generator's main loop to see if the aborted property of the signal object is true. If it is, the break statement is executed, which stops the generator immediately.\nFinally, we call the abort() method of the AbortController object when we want to stop the generator. This causes the generator to stop immediately, so you should only see the numbers 0-2 printed.\nAs suggested by @jsejcksn, using the AbortSignal class is a more elegant solution than using a boolean flag, because it allows you to easily signal to the generator that it should stop generating values without relying on an external variable.\n",
"\nWhy does the code print 0–3 rather than only 0–2? From what I can tell, when I call the return() method, the AsyncGenerator instance isn't suspended (the body execution isn't at a yield statement) and instead of returning once reaching the yield statement the next value is yielded and then suspended at which point the \"return\" finally happens.\n\nYes, precisely this is what happens. The generator is already running because the for await … of loop did call its .next() method, and so the generator will complete that before considering the .return() call.\nAll the methods that you invoke on an async generator are queued. (In a sync generator, you'd get a \"TypeError: Generator is already running\" instead). One can demonstrate this by immediately calling next multiple times:\n\n\nconst values = (async function*() {\n let i=0; while (true) {\n await new Promise(r => { setTimeout(r, 1000); });\n yield i++;\n }\n})();\nvalues.next().then(console.log, console.error);\nvalues.next().then(console.log, console.error);\nvalues.next().then(console.log, console.error);\nvalues.return('done').then(console.log, console.error);\nvalues.next().then(console.log, console.error);\n\n\n\n\nIs there any way to detect that the return() method has been invoked and not yield afterwards?\n\nNo, not from within the generator. And really you probably still should yield the value if you already expended the effort to produce it.\nIt sounds like what you want to do is to ignore the produced value when you want the generator to stop. You should do that in your for await … of loop - and you can also use it to stop the generator by using a break statement:\n\n\nconst delay = (ms) => new Promise((resolve) => {\n setTimeout(resolve, ms);\n});\n\nasync function* delayedIntegers() {\n let n = 0;\n while (true) {\n yield n++;\n await delay(1000);\n }\n}\n\n(async function main() {\n const start = Date.now();\n const values = delayedIntegers();\n for await (const value of values) {\n if (Date.now() - start > 2500) {\n console.log('done:', value);\n break;\n }\n console.log(value);\n }\n})();\n\n\n\nBut if you really want to abort the generator from the outside, you need an out-of-band channel to signal the cancellation. You can use an AbortSignal for this:\n\n\nconst delay = (ms, signal) => new Promise((resolve, reject) => {\n function done() {\n resolve();\n signal?.removeEventListener(\"abort\", stop);\n }\n function stop() {\n reject(this.reason);\n clearTimeout(handle);\n }\n signal?.throwIfAborted();\n const handle = setTimeout(done, ms);\n signal?.addEventListener(\"abort\", stop);\n});\n\nasync function* delayedIntegers(signal) {\n let n = 0;\n while (true) {\n yield n++;\n await delay(1000, signal);\n }\n}\n\n(async function main() {\n try {\n const values = delayedIntegers(AbortSignal.timeout(2500));\n for await (const value of values) {\n console.log(value);\n }\n } catch(e) {\n if (e.name != \"TimeoutError\") throw e;\n console.log(\"done\");\n }\n})();\n\n\n\nThis will actually permit to stop the generator during the timeout, not after the full second has elapsed.\n",
"The return() method of an async generator behaves as if a return statement is inserted in the generator function at the current suspended position. This means that if the generator function is currently suspended at a yield statement, it will not execute the subsequent code in the function, including any further yield statements, and will instead immediately finish the generator function and return the value provided to the return() method.\nIn the code you provided, the values async generator is suspended at a yield statement when the return() method is called. However, because the return() method is called asynchronously, after a delay of 250ms, the generator function continues to execute and yields the next value before it is finally finished by the return() method. This is why the code prints 0-3 rather than 0-2.\nTo prevent the generator function from yielding further values after the return() method is called, you can add a check inside the generator function to see if the return() method has been called, and if so, immediately finish the generator function without yielding any more values. For example:\nconst delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));\n\nconst values = (async function* delayedIntegers() {\n let n = 0;\n let shouldReturn = false;\n while (true) {\n if (shouldReturn) return;\n yield n++;\n await delay(100);\n }\n})();\n\n// Register a callback that sets the shouldReturn flag when the return() method is called\nvalues.return(() => { shouldReturn = true });\n\nawait Promise.all([\n (async () => {\n for await (const value of values) console.log(value);\n })(),\n (async () => {\n await delay(250);\n values.return();\n })(),\n]);\n\nAlternatively, you can use a try...finally block to ensure that any necessary cleanup tasks are performed when the generator function is finished, regardless of whether it is finished normally or by calling the return() method. For example:\nconst delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));\n\nconst values = (async function* delayedIntegers() {\n let n = 0;\n try {\n while (true) {\n yield n++;\n await delay(100);\n }\n } finally {\n // Perform any necessary cleanup tasks here\n }\n})();\n\nawait Promise.all([\n (async () => {\n for await (const value of values) console.log(value);\n })(),\n (async () => {\n await delay(250);\n values.return();\n })(),\n]);\n\nYou can also implement the async iterator interface yourself, which allows you to customize the behavior of the next(), return(), and throw() methods. This will give you more control over how the iterator behaves, but it does require more code than using an async generator. Here is an example of how you might do this:\nconst delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));\n\nconst values = (() => {\n let n = 0;\n let done = false;\n return {\n [Symbol.asyncIterator]() {\n return this;\n },\n async next() {\n if (done) return { done, value: undefined };\n if (n !== 0) {\n await delay(100);\n if (done) return { done, value: undefined };\n }\n return { done, value: n++ };\n },\n async return() {\n done = true;\n return { done, value: undefined };\n },\n };\n})();\n\nawait Promise.all([\n (async () => {\n for await (const value of values) console.log(value);\n })(),\n\n"
] | [
3,
3,
0
] | [] | [] | [
"async_await",
"generator",
"javascript"
] | stackoverflow_0074644618_async_await_generator_javascript.txt |
Q:
Copy constructor difference for std::unique_ptr
If my understanding is correct, the following declarations should both call the copy constructor of T which takes type of x as a parameter.
T t = x;
T t(x);
But when I do the same for std::unique_ptr<int> I get an error with the first declaration, while the second compiles and does what is expected.
std::unique_ptr<int> x = new int();
std::unique_ptr<int> x (new int());
Is there a difference in the two syntax for calling the copy constructor?
A:
The constructor of std::unique_ptr<> is explicit, which means you need to write the conversion out explicitly in the first case:
std::unique_ptr<int> x = std::unique_ptr<int>(new int());
// or
auto x = std::unique_ptr<int>(new int());
// or make_unique()
A:
std::unique_ptr::unique_ptr( pointer p ) is an explicit constructor, so that form of initialization is not allowed. Initializing with = always requires a converting-constructor for implicit conversions.
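To illustrate the general rule with a small made-up type (this sketch is not part of the question's code):
struct Widget {
    explicit Widget(int) {}    // explicit: no implicit conversion from int
};

int main() {
    Widget a(42);              // OK: direct initialization may call explicit constructors
    // Widget b = 42;          // error: copy initialization requires an implicit conversion
    Widget c = Widget(42);     // OK: the conversion is written out explicitly
}

The same thing applies to std::unique_ptr: std::unique_ptr<int> x = new int(); is rejected because the raw-pointer constructor is explicit, while std::unique_ptr<int> x(new int()); is direct initialization and is allowed.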
A:
Yes, there is a difference between the two syntaxes. The first syntax, T t = x;, is called copy initialization. Copy initialization only considers implicit conversions, so it cannot use std::unique_ptr's constructor that takes a raw pointer, because that constructor is declared explicit. This is why the first declaration fails to compile.
On the other hand, the second syntax, T t(x);, is called direct initialization. Direct initialization is allowed to call explicit constructors, which is why std::unique_ptr<int> x(new int()); compiles and does what is expected.
In general, it is best to avoid initializing a std::unique_ptr straight from a raw new expression. Instead, use direct initialization or, better, the std::make_unique factory function.
For example:
std::unique_ptr<int> x = std::make_unique<int>();
std::unique_ptr<int> y(std::make_unique<int>());
Both of these declarations properly initialize x and y with a new int object. The first one compiles even with = because no explicit raw-pointer conversion is involved: the temporary returned by make_unique is moved into x (or the move is elided in C++17).
| Copy constructor difference for std::unique_ptr | If my understanding is correct, the following declarations should both call the copy constructor of T which takes type of x as a parameter.
T t = x;
T t(x);
But when I do the same for std::unique_ptr<int> I get an error with the first declaration, while the second compiles and does what is expected.
std::unique_ptr<int> x = new int();
std::unique_ptr<int> x (new int());
Is there a difference in the two syntax for calling the copy constructor?
| [
"Constructor of std::unique_ptr<> is explicit, which means, you need to write it in the first case:\nstd::unique_ptr<int> x = std::unique_ptr<int>(new int());\n// or\nauto x = std::unique_ptr<int>(new int());\n// or make_unique()\n\n",
"std::unique_ptr::unique_ptr( pointer p ) is an explicit constructor, so that form of initialization is not allowed. Initializing with = always requires a converting-constructor for implicit conversions.\n",
"Yes, there is a difference in the two syntax for calling the copy constructor for std::unique_ptr. The first syntax, T t = x;, is called copy initialization. It calls the copy constructor of std::unique_ptr if it exists, but if the copy constructor is deleted or inaccessible (as is the case with std::unique_ptr), then the compiler will instead try to use the move constructor of std::unique_ptr. However, std::unique_ptr doesn't have a move constructor, so the compiler will throw an error.\nOn the other hand, the second syntax, T t(x);, is called direct initialization. It only calls the copy constructor of std::unique_ptr, and will not try to use the move constructor if the copy constructor is deleted or inaccessible. Since std::unique_ptr doesn't have a copy constructor, this code will also throw an error.\nIn general, it is best to avoid using copy initialization with std::unique_ptr to avoid this issue. Instead, you should use direct initialization, or use one of the other ways to initialize a std::unique_ptr, such as using the make_unique function.\nFor example:\nstd::unique_ptr<int> x = std::make_unique<int>();\nstd::unique_ptr<int> y(std::make_unique<int>());\n\nBoth of these declarations will properly initialize x and y with a new int object using the make_unique function, without trying to use the copy or move constructor of std::unique_ptr.\n"
] | [
2,
2,
1
] | [] | [] | [
"c++",
"copy_constructor"
] | stackoverflow_0074671173_c++_copy_constructor.txt |
Q:
voila: preheated kernels only for specific notebook
I want to enable the preheated kernels in voila. E.g. I am doing:
voila --preheat_kernel=True --pool_size=14
However, I only want to have preheated kernels for a very specific notebook, and not the (many) others that I have in the same directory.
Is there a way to tell voila for which notebook to enable the kernels (or failing that,
to have 0 pool_size for the rest of the notebooks)?
I am currently using voila version 0.4.0
A:
RTFM...
The directory from which voila is run can contain a voila.json config file.
I have currently arrived at using this one:
{
"VoilaConfiguration": {
"preheat_kernel": true
},
"VoilaKernelManager": {
"preheat_blacklist": [
"*-No-Preheat.ipynb"
],
"kernel_pools_config": {
"demo1.ipynb": {
"pool_size": 4
},
"demo2.ipynb": {
"pool_size": 12
},
"default": {
"pool_size": 0
}
},
"fill_delay": 0
}
}
Here I have found no behavioral difference between notebooks with pool_size 0 and those blacklisted; I guess that distinction would only matter for a non-zero default pool size.
These params could possibly be also supplied from command line, e.g.
voila --preheat_kernel=True --VoilaKernelManager.default_env_variables='{"FOO": "BAR"}'
| voila: preheated kernels only for specific notebook | I want to enable the preheated kernels in voila. E.g. I am doing:
voila --preheat_kernel=True --pool_size=14
However, I only want to have preheated kernels for a very specific notebook, and not the (many) others that I have in the same directory.
Is there a way to tell voila for which notebook to enable the kernels (or failing that,
to have 0 pool_size for the rest of the notebooks)?
I am currently using voila version 0.4.0
| [
"RTFM...\nThe file where voila is run can contain a voila.json config file...\nI have arrived at currently using this one :\n{\n \"VoilaConfiguration\": {\n \"preheat_kernel\": true\n },\n \"VoilaKernelManager\": {\n \"preheat_blacklist\": [\n \"*-No-Preheat.ipynb\"\n ],\n \"kernel_pools_config\": {\n \"demo1.ipynb\": {\n \"pool_size\": 4\n },\n \"demo2.ipynb\": {\n \"pool_size\": 12\n }, \n \"default\": {\n \"pool_size\": 0\n }\n },\n \"fill_delay\": 0\n }\n}\n\nHere, I have found no behavioral difference between nodes with pool_size 0 and those black listed, I guess that would be significant for a non-zero default pool size.\nThese params could possibly be also supplied from command line, e.g.\nvoila --preheat_kernel=True --VoilaKernelManager.default_env_variables='{\"FOO\": \"BAR\"}'\n\n"
] | [
0
] | [] | [] | [
"voila",
"voila_hotpooling"
] | stackoverflow_0074660837_voila_voila_hotpooling.txt |
Q:
Passing values from JavaScript to C# ASP.NET Core Razor Pages for SQL Querying
I am making a simple web-shop application prototype and need to pass shopping cart items (which are stored in localStorage) to our SQLServer. The localStorage is as follows
{"grootfigure":{"name":"Groot figure","tag":"grootfigure","price":600,"inCart":2},"owlfigure":{"name":"Owl figure","tag":"owlfigure","price":350,"inCart":4},"dragonfigure":{"name":"Dragon figure","tag":"dragonfigure","price":475,"inCart":5}}
The first idea was to pass the quantity of each product in cart to each counter variable in C# and then use a separate method in C# to run an SQL Query. But this seemed difficult to accomplish.
When I tried to pass variables between JS and C# by
function addOwl(){
@Globals.String = localStorage.getItem('productsInCart')
alert(@Globals.String)
}
I get this in the web browser console
Uncaught ReferenceError: addOwl is not defined at HTMLButtonElement.onclick (cart:71:68)
Any ideas how I can easily run SQL query from localStorage values?
Thank you
A:
It's not a good idea to pass values from your JavaScript code directly to C# and run an SQL query with it, as this can leave your application vulnerable to SQL injection attacks. Instead, you should consider using an AJAX call to send the data from your JavaScript code to a server-side endpoint, where you can validate and sanitize the data before using it in an SQL query.
Here's an example of how you might do this:
In your JavaScript code, create a function that gets the shopping cart data from localStorage and sends it to a server-side endpoint using an AJAX call:
function sendCartData() {
// Get the shopping cart data from localStorage
var cartData = localStorage.getItem('productsInCart');
// Use jQuery's $.ajax() method to send the data to a server-side endpoint
$.ajax({
url: '/your-server-endpoint',
type: 'POST',
data: { cartData: cartData },
success: function(response) {
// Handle the response from the server
}
});
}
On the server-side, create an endpoint that can handle the AJAX call and save the shopping cart data to your database. For example, if you're using ASP.NET Core, you might create an action method like this:
[HttpPost]
public IActionResult SaveCartData(string cartData) {
// Validate and sanitize the cart data before using it in an SQL query
// Use Entity Framework or another ORM to save the data to your database
using (var db = new YourDbContext()) {
// Create a new ShoppingCart object and populate it with the cart data
var cart = new ShoppingCart {
// Parse the cart data and populate the ShoppingCart object
};
// Save the ShoppingCart object to the database
db.ShoppingCarts.Add(cart);
db.SaveChanges();
}
// Return a success response to the client
return Json(new { success = true });
}
This approach allows you to safely save the shopping cart data to your database, without exposing your application to SQL injection attacks. You may need to adjust the code to fit the specific needs of your application, but this should give you a good starting point.
A:
You need to understand where, when and how both JavaScript and C# are executed in Razor Pages.
C# is executed on the server, before the output is sent to the browser. Once the server executes the C# code, it produces text (which may be plain text, HTML, CSS, JavaScript, etc), which it then sends out. It then discards the request.
By contrast, JavaScript is only executed in the browser, not on the server. It knows nothing about C#, and cannot call C# code directly.
Specifically, the line of code...
@Globals.String = localStorage.getItem('productsInCart')
..will be interpreted by the server-side C# as "get the value of the String property of a C# object named Globals and insert that into the output that will be sent to the browser."
Given that you probably don't have that, I would expect a compiler error at this point. If you're not getting that, then it sounds like you aren't giving us the full story.
Assuming it can find such an object with such a property, let's say it has the value jim, it will mean that the following text (that's very important, the server treats all of this as text, it doesn't know about JavaScript, and will not attempt to interpret it) will be sent to the browser...
function addOwl(){
jim = localStorage.getItem('productsInCart')
alert(jim)
}
This is almost certainly not what you want.
So, to answer your basic question, the way to send data from your JavaScript to the server, where it will be used in your C# is to use AJAX. There are other ways, but this is probably the simplest.
If you use jQuery, then it gives you JavaScript functions to make this relatively easy. You'll need to write some C# code to accept the AJAX request.
| Passing values from JavaScript to C# ASP.NET Core Razor Pages for SQL Querying | I am making a simple web-shop application prototype and need to pass shopping cart items (which are stored in localStorage) to our SQLServer. The localStorage is as follows
{"grootfigure":{"name":"Groot figure","tag":"grootfigure","price":600,"inCart":2},"owlfigure":{"name":"Owl figure","tag":"owlfigure","price":350,"inCart":4},"dragonfigure":{"name":"Dragon figure","tag":"dragonfigure","price":475,"inCart":5}}
The first idea was to pass the quantity of each product in cart to each counter variable in C# and then use a separate method in C# to run an SQL Query. But this seemed difficult to accomplish.
When I tried to pass variables between JS and C# by
function addOwl(){
@Globals.String = localStorage.getItem('productsInCart')
alert(@Globals.String)
}
I get this in the web browser console
Uncaught ReferenceError: addOwl is not defined at HTMLButtonElement.onclick (cart:71:68)
Any ideas how I can easily run SQL query from localStorage values?
Thank you
| [
"It's not a good idea to pass values from your JavaScript code directly to C# and run an SQL query with it, as this can leave your application vulnerable to SQL injection attacks. Instead, you should consider using an AJAX call to send the data from your JavaScript code to a server-side endpoint, where you can validate and sanitize the data before using it in an SQL query.\nHere's an example of how you might do this:\n\nIn your JavaScript code, create a function that gets the shopping cart data from localStorage and sends it to a server-side endpoint using an AJAX call:\n\n\n\nfunction sendCartData() {\n // Get the shopping cart data from localStorage\n var cartData = localStorage.getItem('productsInCart');\n\n // Use jQuery's $.ajax() method to send the data to a server-side endpoint\n $.ajax({\n url: '/your-server-endpoint',\n type: 'POST',\n data: { cartData: cartData },\n success: function(response) {\n // Handle the response from the server\n }\n });\n}\n\n\n\n\nOn the server-side, create an endpoint that can handle the AJAX call and save the shopping cart data to your database. For example, if you're using ASP.NET Core, you might create an action method like this:\n\n\n\n[HttpPost]\npublic IActionResult SaveCartData(string cartData) {\n // Validate and sanitize the cart data before using it in an SQL query\n\n // Use Entity Framework or another ORM to save the data to your database\n using (var db = new YourDbContext()) {\n // Create a new ShoppingCart object and populate it with the cart data\n var cart = new ShoppingCart {\n // Parse the cart data and populate the ShoppingCart object\n };\n\n // Save the ShoppingCart object to the database\n db.ShoppingCarts.Add(cart);\n db.SaveChanges();\n }\n\n // Return a success response to the client\n return Json(new { success = true });\n}\n\n\n\nThis approach allows you to safely save the shopping cart data to your database, without exposing your application to SQL injection attacks. You may need to adjust the code to fit the specific needs of your application, but this should give you a good starting point.\n",
"You need to understand where, when and how both JavaScript and C# are executed in Razor Pages.\nC# is executed on the server, before the output is sent to the browser. Once the server executes the C# code, it produces text (which may be plain text, HTML, CSS, JavaScript, etc), which it then sends out. It then discards the request.\nBy contrast, JavaScript is only executed in the browser, not on the server. It knows nothng about C#, and cannot call C# code directly.\nSpecifically, the line of code...\[email protected] = localStorage.getItem('productsInCart')\n\n..will be interpreted by the server-side C# as \"get the value of the String property of a C# object named Globals and insert that into the output that will be sent to the browser.\"\nGiven that you probably don't have that, I would expect a compiler error at this point. If you're not getting that, then it sounds like you aren't giving us the full story.\nAssuming it can find such an object with such a property, let's say it has the value jim, it will mean that the following text (that's very important, the server treats all of this as text, it doesn't know about JavaScript, and will not attempt to interpret it) will be sent to the browser...\nfunction addOwl(){\n jim = localStorage.getItem('productsInCart')\n alert(jim)\n\n \n }\n\nThis is almost certaily not what you want.\nSo, to answer your basic question, the way to send data from your JavaScript to the server, where it will be used in your C# is to use AJAX. There are other ways, but this is probably the simplest.\nIf you use jQuery, then it gives you JavaScript functions to make this relatively easy. You'll need to write some C# code to accept the AJAX request.\n"
] | [
1,
0
] | [] | [] | [
"c#",
"javascript",
"sql"
] | stackoverflow_0074665763_c#_javascript_sql.txt |
Q:
Setting default value for CKEditor in Flask App wtforms
I'm trying to set the default value for a CKEditor html text input based on an existing object. The use case is for updating an existing row using flask-sqlalchemy so the user doesn't have to retype all of the existing rich text. As the code is written, the default value for form.description is blank even when object.description is not.
The relevant part of the form looks something like this:
<form method="POST">
{{ ckeditor.load() }}
{{ form.hidden_tag }}
{{ form.name.label }}
{{ form.name(class="form-control", value=object.name) }}
{{ form.description.label }}
{{ form.description(class="form-control", value=object.description) }}
{{ edit_loc_form.submit(class="btn btn-primary") }}
</form>
{{ ckeditor.config(name='description') }}
Thanks!
A:
From the docs:
If you are using Flask-WTF/WTForms, it’s even more simple, just pass
the value to the form field’s data attribute:
@app.route('/edit')
def edit_post():
form = EditForm()
form.body.data = "default value" # default value entered here.
return render_template('edit.html', form=form)
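Applied to the update scenario in the question, the view could look roughly like this (the model, route and form names here are assumptions, not taken from the original code):
@app.route('/edit/<int:item_id>')
def edit_item(item_id):
    item = Item.query.get_or_404(item_id)  # hypothetical model with name/description columns
    form = EditForm(obj=item)              # WTForms copies matching attributes into the form fields
    return render_template('edit.html', form=form)

With the data set on the form object, the template can simply render {{ form.description(class="form-control") }} and CKEditor will show the existing rich text, without passing value=object.description manually.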
| Setting default value for CKEditor in Flask App wtforms | I'm trying to set the default value for a CKEditor html text input based on an existing object. The use case is for updating an existing row using flask-sqlalchemy so the user doesn't have to retype all of the existing rich text. As the code is written, the default value for form.description is blank even when object.description is not.
The relevant part of the form looks something like this:
<form method="POST">
{{ ckeditor.load() }}
{{ form.hidden_tag }}
{{ form.name.label }}
{{ form.name(class="form-control", value=object.name) }}
{{ form.description.label }}
{{ form.description(class="form-control", value=object.description) }}
{{ edit_loc_form.submit(class="btn btn-primary") }}
</form>
{{ ckeditor.config(name='description') }}
Thanks!
| [
"From the docs:\n\nIf you are using Flask-WTF/WTForms, it’s even more simple, just pass\nthe value to the form field’s data attribute:\n\[email protected]('/edit')\ndef edit_post():\n form = EditForm()\n form.body.data = \"default value\" # default value entered here.\n return render_template('edit.html', form=form)\n\n"
] | [
0
] | [] | [] | [
"ckeditor",
"flask_sqlalchemy",
"flask_wtforms",
"python_3.x"
] | stackoverflow_0072722160_ckeditor_flask_sqlalchemy_flask_wtforms_python_3.x.txt |
Q:
Use Intel OneAPI with Anaconda
I am trying to use the Intel OneAPI while being activated in an Anaconda environment. If I create an Anaconda environment first, conda env list shows
# conda environments:
#
base /path/anaconda3
env_name * /path/anaconda3/envs/env_name
However, if I then source /opt/intel/oneapi/setvars.sh, conda env list shows
# conda environments:
#
/path/anaconda3
/path/anaconda3/envs/env_name
base * /opt/intel/oneapi/intelpython/latest
2021.4.0 /opt/intel/oneapi/intelpython/latest/envs/2021.4.0
and I cannot conda activate env_name anymore. I successfully set this up before on a different machine, and I believe that a correct setup should show for conda env list:
# conda environments:
#
base /path/anaconda3
env_name * /path/anaconda3/envs/env_name
/opt/intel/oneapi/intelpython/latest
/opt/intel/oneapi/intelpython/latest/envs/2021.3.0
Any idea on how to properly source the Intel One API environment vars while being activated in an Anaconda environment?
A:
Please try to use the Conda Clone Function to Add Packages as a Non-Root User.
The Intel oneAPI AI Analytics toolkit is installed in the inteloneapi folder, which requires root privileges to manage. You may wish to add and maintain new packages using Conda*, but you cannot do so without root access. Or, you may have root access but do not want to enter the root password every time you activate Conda.
To manage your environment without using root access, utilize the Conda clone functionality to clone the packages you need to a folder outside of the inteloneapi folder:
From the same terminal window where you ran setvars.sh, identify the Conda environments on your system:
conda env list
You will see results similar to this:
2. Use the clone function to clone the environment to a new folder. In the example below, the new environment is named usr_intelpython and the environment being cloned is named base.
conda create --name usr_intelpython --clone base
The clone details will appear.
If the command does not execute, you may not have access to the ~/.conda
folder.
To fix this, delete the .conda folder and execute this command again:
conda create --name usr_intelpython --clone base.
Activate the new environment to enable the ability to add packages.
conda activate usr_intelpython
Verify the new environment is active.
conda env list
A:
same problem, solved with the following command
conda activate /path/anaconda3/envs/env_name
| Use Intel OneAPI with Anaconda | I am trying to use the Intel OneAPI while being activated in an Anaconda environment. If I create an Anaconda environment first, conda env list shows
# conda environments:
#
base /path/anaconda3
env_name * /path/anaconda3/envs/env_name
However, if I then source /opt/intel/oneapi/setvars.sh, conda env list shows
# conda environments:
#
/path/anaconda3
/path/anaconda3/envs/env_name
base * /opt/intel/oneapi/intelpython/latest
2021.4.0 /opt/intel/oneapi/intelpython/latest/envs/2021.4.0
and I cannot conda activate env_name anymore. I successfully set this up before on a different machine, and I believe that a correct setup should show for conda env list:
# conda environments:
#
base /path/anaconda3
env_name * /path/anaconda3/envs/env_name
/opt/intel/oneapi/intelpython/latest
/opt/intel/oneapi/intelpython/latest/envs/2021.3.0
Any idea on how to properly source the Intel One API environment vars while being activated in an Anaconda environment?
| [
"Please try to use the Conda Clone Function to Add Packages as a Non-Root User.\nThe Intel oneAPI AI Analytics toolkit is installed in the inteloneapi folder, which requires root privileges to manage. You may wish to add and maintain new packages using Conda*, but you cannot do so without root access. Or, you may have root access but do not want to enter the root password every time you activate Conda.\nTo manage your environment without using root access, utilize the Conda clone functionality to clone the packages you need to a folder outside of the inteloneapi folder:\n\nFrom the same terminal window where you ran setvars.sh, identify the Conda environments on your system:\nconda env list\n\n\n\nYou will see results similar to this:\n\n2. Use the clone function to clone the environment to a new folder. In the example below, the new environment is named usr_intelpython and the environment being cloned is named base.\nconda create --name usr_intelpython --clone base\n\nThe clone details will appear.\n\nIf the command does not execute, you may not have access to the ~/.conda\nfolder.\nTo fix this, delete the .conda folder and execute this command again:\nconda create --name usr_intelpython --clone base.\n\nActivate the new environment to enable the ability to add packages.\n\nconda activate usr_intelpython\n\n\nVerify the new environment is active.\n\nconda env list\n\n\n",
"same problem, solved with the following command\nconda activate /path/anaconda3/envs/env_name\n"
] | [
1,
0
] | [] | [] | [
"anaconda",
"intel_oneapi"
] | stackoverflow_0069971072_anaconda_intel_oneapi.txt |
Q:
get shell from memcpy
I've been trying to get shell from this function but nothing seems to work
so what's happening here is that I have a variable of size 32 bytes and I'm trying to copy 600 bytes into it
what I don't understand is, where my shell code will be executed is it inside the 32bytes or in the 600 - 32 bytes.
I can't give the whole working code as this is just a disassembly code from ghidra.
Any help what should I do? Thanks in advance.
void foo(void *param)
{
undefined variable [32];
memcpy(variable, param, 600);
return;
}
this is the shellcode I tried
\x31\xc0\x50\x68\x2f\x63\x61\x74\x68\x2f\x62\x69\x6e\x89\xe3\x50\x68\x2e\x74\x78\x74\x68\x66\x6c\x61\x67\x89\xe1\x50\x51\x53\x89\xe1\x31\xc0\x83\xc0\x0b\xcd\x80
I was expecting that I would get a shell on the system if I just input the above shellcode, but all I get for now is segfaults.
I'm new to binary exploitations. so sorry if this is a stupid question and sorry for my english.
A:
It looks like you're trying to overflow the buffer that memcpy writes into.
If you're reaching RIP, it must be filled with an address that you can execute.
The classical technique is to place that address lower in the buffer; RIP is set to point into your payload by the return instruction at the end of the function.
You will need to deactivate modern protections, namely address randomization (ASLR) and the non-executable stack. If these protections are enabled on the system you are testing, you have to use a technique called ROP, which means Return Oriented Programming.
This technique reuses code already present in the program, or rather small pieces of it: by filling the stack with the addresses of instruction sequences that end in "ret", you come back to the stack and pick the next instruction sequence, and this way you fill the registers as needed and then do a syscall.
You will need to look up references on these techniques.
A:
… where my shell code will be executed is it inside the 32bytes or in the 600 - 32 bytes.
The intent here is that the data from param will be copied into memory starting where variable is but going beyond that. Beyond that is data on the stack used to manage function calls, including the return address of the function. The idea is that carefully crafted data in param will put a new return address on the stack, and, when the foo function returns, program execution will jump to that new address.
Doing this requires particular knowledge about the program being attacked and the computing platform it executes on.
Any help what should I do?
Classroom exercises of this nature must be crafted specifically for the system the students are using. You must get the necessary information from your instructor.
| get shell from memcpy | I've been trying to get shell from this function but nothing seems to work
so what's happening here is that I have a variable of size 32 bytes and I'm trying to copy 600 bytes into it
what I don't understand is, where my shell code will be executed is it inside the 32bytes or in the 600 - 32 bytes.
I can't give the whole working code as this is just a disassembly code from ghidra.
Any help what should I do? Thanks in advance.
void foo(void *param)
{
undefined variable [32];
memcpy(variable, param, 600);
return;
}
this is the shellcode I tried
\x31\xc0\x50\x68\x2f\x63\x61\x74\x68\x2f\x62\x69\x6e\x89\xe3\x50\x68\x2e\x74\x78\x74\x68\x66\x6c\x61\x67\x89\xe1\x50\x51\x53\x89\xe1\x31\xc0\x83\xc0\x0b\xcd\x80
I was expecting that I would get a shell on the system if I just input the above shellcode, but all I get for now is segfaults.
I'm new to binary exploitations. so sorry if this is a stupid question and sorry for my english.
| [
"It looks like you'r trying to overflow the memcpy function,\nIf you'r reaching RIP, it must be filled with an address that you can execute,\nClassical technique is to have this address lower in the buffer, RIP is set to your payload by the return instruction at the end of the function.\nYou will need to deactivate modern features, which are address randomization and non executable stack. In case this securities are on on system that you are testing, you have to use a technique that is called ROP, which mean Return Oriented Payload.\nThis technique uses code allready present in the program. Well, bits of this code, by filling the stack will addresses of instructions that end up with \"ret\", you can come back to the stack and pick another instruction, this way you'r fill the registers as needed and then do a syscall.\nYou need to find links about these techniques.\n",
"\n… where my shell code will be executed is it inside the 32bytes or in the 600 - 32 bytes.\n\nThe intent here is that the data from param will be copied into memory starting where variable is but going beyond that. Beyond that is data on the stack used to manage function calls, including the return address of the function. The idea is that carefully crafted data in param will put a new return address on the stack, and, when the foo function returns, program execution will jump to that new address.\nDoing this requires particular knowledge about the program being attacked and the computing platform it executes on.\n\nAny help what should I do?\n\nClassroom exercises of this nature must be crafted specifically for the system the students are using. You must get the necessary information from your instructor.\n"
] | [
1,
1
] | [] | [] | [
"c",
"memcpy",
"overflow",
"shellcode"
] | stackoverflow_0074671048_c_memcpy_overflow_shellcode.txt |
Q:
I want to add custom ground material on my website using Meshphongmaterial
var groundMaterial = new THREE.MeshPhongMaterial( { color: 0xff43fee, specular: 0x360000 ,} );
groundMesh = new THREE.Mesh( new THREE.PlaneBufferGeometry( 2000, 2000 ), groundMaterial );
groundMesh.rotation.x = - Math.PI / 2;
scene.add( groundMesh );
I want to add a grass texture to the ground but I am not finding any possible solution. This code is taken from the following GitHub repo: https://github.com/schteppe/gpu-physics.js
A:
You need to create a THREE.Texture (for example by loading an image with THREE.TextureLoader) and pass it as the map property in the MeshPhongMaterial constructor.
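A minimal sketch of how that could look (the texture path and repeat values are assumptions, adjust them to your own grass asset):
const loader = new THREE.TextureLoader();
const grassTexture = loader.load('textures/grass.jpg'); // hypothetical path to your grass image
grassTexture.wrapS = THREE.RepeatWrapping;
grassTexture.wrapT = THREE.RepeatWrapping;
grassTexture.repeat.set(50, 50); // tile the texture so it is not stretched over the 2000x2000 plane

var groundMaterial = new THREE.MeshPhongMaterial({ map: grassTexture, specular: 0x360000 });
groundMesh = new THREE.Mesh(new THREE.PlaneBufferGeometry(2000, 2000), groundMaterial);
groundMesh.rotation.x = -Math.PI / 2;
scene.add(groundMesh);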
| I want to add custom ground material on my website using Meshphongmaterial |
var groundMaterial = new THREE.MeshPhongMaterial( { color: 0xff43fee, specular: 0x360000 ,} );
groundMesh = new THREE.Mesh( new THREE.PlaneBufferGeometry( 2000, 2000 ), groundMaterial );
groundMesh.rotation.x = - Math.PI / 2;
scene.add( groundMesh );
I want to add a grass texture to the ground but I am not finding any possible solution. This code is taken from the following GitHub repo: https://github.com/schteppe/gpu-physics.js
| [
"You need to create a Texture and send it as map property in the MechPhongMaterial constructor\n"
] | [
0
] | [] | [] | [
"javascript",
"three.js",
"threejs_editor",
"unity_webgl",
"webgl"
] | stackoverflow_0074667560_javascript_three.js_threejs_editor_unity_webgl_webgl.txt |
Q:
How to permanently change the color of a selected row in a table in java
I would like to know how to add code in my action listener that would make the row I select or press on with my mouse change from the color red to the color white. I have tried getRowSelected() and tried to use the index but that ultimately only changes the row color when it is selected and it goes back to red. I have also attempted to use a Renderer which is a newer concept to me but didn't know how to implement it the right way. Any help or guidance would be appreciated.
Tried getRowSelected() but the row color change was only temporary and went back to red once it was unselected. Tried Renderer but didn't know how to fully implement it as it is a new concept to me.
A:
One way to do this is to:
Create a JTable and table model with an invisible extra column, one that holds a boolean value, set to false initially. One way to make the column invisible is by removing the column from the JTable's TableColumnModel as suggested here by Swing expert Rob Camick and as per his previous answer to a Swing question on the topic.
Create a table cell renderer that colors the background of a row based on the value of the boolean mentioned above
Toggle the value of this boolean using either a mouse listener or a list selection listener.
For example:
import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.Component;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.*;
import javax.swing.table.*;
public class TestTableRowColor {
public static void main(String[] args) {
SwingUtilities.invokeLater(() -> {
ChangeRowColorPanel mainPanel = new ChangeRowColorPanel();
JFrame frame = new JFrame("GUI");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.add(mainPanel);
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
});
}
}
@SuppressWarnings("serial")
class ChangeRowColorPanel extends JPanel {
private static final String[] COLUMN_NAMES = { "One", "Two", "Three", "Selected" };
private DefaultTableModel model = new DefaultTableModel(COLUMN_NAMES, 0);
private JTable table = new JTable(model);
public ChangeRowColorPanel() {
TableColumnModel columnModel = table.getColumnModel();
columnModel.removeColumn(columnModel.getColumn(columnModel.getColumnCount() - 1));
table.setDefaultRenderer(Object.class, new RowColorRenderer());
table.addMouseListener(new MyMouse());
int max = 5;
for (int i = 0; i < max; i++) {
Object[] row = new Object[COLUMN_NAMES.length];
for (int j = 0; j < COLUMN_NAMES.length - 1; j++) {
row[j] = (int) (100 * Math.random());
}
row[COLUMN_NAMES.length - 1] = false;
model.addRow(row);
}
setLayout(new BorderLayout());
add(new JScrollPane(table));
}
}
class MyMouse extends MouseAdapter {
@Override
public void mousePressed(MouseEvent e) {
JTable table = (JTable) e.getSource();
TableModel model = table.getModel();
boolean selected = (boolean) model.getValueAt(table.getSelectedRow(), model.getColumnCount() - 1);
model.setValueAt(!selected, table.getSelectedRow(), model.getColumnCount() - 1);
table.repaint();
}
}
@SuppressWarnings("serial")
class RowColorRenderer extends DefaultTableCellRenderer {
private static final Color SELECTED_COLOR = Color.PINK;
public RowColorRenderer() {
setOpaque(true);
}
@Override
public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected, boolean hasFocus,
int row, int column) {
Component renderer = super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column);
TableModel model = table.getModel();
int selectedColumn = model.getColumnCount() - 1;
boolean selected = (boolean) model.getValueAt(row, selectedColumn);
Color background = selected ? SELECTED_COLOR : null;
renderer.setBackground(background);
return this;
}
}
A:
To change the color of a selected row in a table in Java, you can use a custom cell renderer to set the color of each cell in the row depending on whether it is selected or not.
Here is an example of how you could implement this in your code:
import java.awt.Color;
import javax.swing.*;
import javax.swing.table.*;
public class CustomerNotif extends JFrame {
private static final long serialVersionUID = 1L;
private JTable table;
public static void main(String[] args) {
EventQueue.invokeLater(new Runnable() {
public void run() {
try {
CustomerNotif frame = new CustomerNotif();
frame.setVisible(true);
} catch (Exception e) {
e.printStackTrace();
}
}
});
}
public CustomerNotif() {
// Set up the frame and other UI components here...
// Create a custom cell renderer that sets the cell color based on whether the row is selected
DefaultTableCellRenderer cellRenderer = new DefaultTableCellRenderer() {
@Override
public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected, boolean hasFocus, int row, int column) {
                // Let the default renderer configure itself first, then override its background
                Component component = super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column);

                // Set the default cell color to white
                component.setBackground(Color.WHITE);

                // If the row is selected, set the cell color to red
                if (isSelected) {
                    component.setBackground(Color.RED);
                }

                // Return the configured renderer
                return component;
}
};
// Set the cell renderer for each column in the table
for (int i = 0; i < table.getColumnCount(); i++) {
table.getColumnModel().getColumn(i).setCellRenderer(cellRenderer);
}
}
}
In this example, we create a custom DefaultTableCellRenderer that sets the background color of each cell in the table depending on whether the row is selected or not. The renderer sets the background color to white for unselected rows, and to red for selected rows.
To use this renderer, we iterate through each column in the table and set the cell renderer for each column to the custom renderer we just created. This will cause the custom renderer to be used to render the cells in each column, and the selected rows will be colored red.
I hope this helps! Let me know if you have any questions.
| How to permanently change the color of a selected row in a table in java | I would like to know how to add code in my action listener that would make the row I select or press on with my mouse change from the color red to the color white. I have tried getRowSelected() and tried to use the index but that ultimately only changes the row color when it is selected and it goes back to red. I have also attempted to user Renderer which is a newer concept to me but didn't know how to implement it the right way. Any help or guidance would be appreciated.
Tried getRowSelected() but the row color change was only temporary and went back to red once it was unselected. Tried Renderer but didn't know how to fully implement it as it is a new concept to me.
| [
"One way to do this is to:\n\nCreate a JTable and table model with an invisible extra column, one that holds a boolean value, set to false initially. One way to make the column invisible is by removing the column from the JTable's TableColumnModel as suggested here by Swing expert Rob Camick and as per his previous answer to a Swing question on the topic.\nCreate a table cell renderer that colors the background of a row based on the value of the boolean mentioned above\nToggle the value of this boolean using either a mouse listener or a list selection listener.\n\nFor example:\nimport java.awt.BorderLayout;\nimport java.awt.Color;\nimport java.awt.Component;\nimport java.awt.event.MouseAdapter;\nimport java.awt.event.MouseEvent;\n\nimport javax.swing.*;\nimport javax.swing.table.*;\n\npublic class TestTableRowColor {\n public static void main(String[] args) {\n SwingUtilities.invokeLater(() -> {\n ChangeRowColorPanel mainPanel = new ChangeRowColorPanel();\n\n JFrame frame = new JFrame(\"GUI\");\n frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n frame.add(mainPanel);\n frame.pack();\n frame.setLocationRelativeTo(null);\n frame.setVisible(true);\n });\n }\n\n}\n\n@SuppressWarnings(\"serial\")\nclass ChangeRowColorPanel extends JPanel {\n private static final String[] COLUMN_NAMES = { \"One\", \"Two\", \"Three\", \"Selected\" };\n private DefaultTableModel model = new DefaultTableModel(COLUMN_NAMES, 0);\n private JTable table = new JTable(model);\n\n public ChangeRowColorPanel() {\n TableColumnModel columnModel = table.getColumnModel();\n columnModel.removeColumn(columnModel.getColumn(columnModel.getColumnCount() - 1));\n table.setDefaultRenderer(Object.class, new RowColorRenderer());\n table.addMouseListener(new MyMouse());\n\n int max = 5;\n for (int i = 0; i < max; i++) {\n Object[] row = new Object[COLUMN_NAMES.length];\n for (int j = 0; j < COLUMN_NAMES.length - 1; j++) {\n row[j] = (int) (100 * Math.random());\n }\n row[COLUMN_NAMES.length - 1] = false;\n model.addRow(row);\n }\n\n setLayout(new BorderLayout());\n add(new JScrollPane(table));\n }\n}\n\nclass MyMouse extends MouseAdapter {\n @Override\n public void mousePressed(MouseEvent e) {\n JTable table = (JTable) e.getSource();\n TableModel model = table.getModel();\n boolean selected = (boolean) model.getValueAt(table.getSelectedRow(), model.getColumnCount() - 1);\n model.setValueAt(!selected, table.getSelectedRow(), model.getColumnCount() - 1);\n table.repaint();\n }\n}\n\n@SuppressWarnings(\"serial\")\nclass RowColorRenderer extends DefaultTableCellRenderer {\n private static final Color SELECTED_COLOR = Color.PINK;\n\n public RowColorRenderer() {\n setOpaque(true);\n }\n\n @Override\n public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected, boolean hasFocus,\n int row, int column) {\n Component renderer = super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column);\n TableModel model = table.getModel();\n int selectedColumn = model.getColumnCount() - 1;\n boolean selected = (boolean) model.getValueAt(row, selectedColumn);\n Color background = selected ? SELECTED_COLOR : null;\n renderer.setBackground(background);\n return this;\n }\n\n}\n\n",
"To change the color of a selected row in a table in Java, you can use a custom cell renderer to set the color of each cell in the row depending on whether it is selected or not.\nHere is an example of how you could implement this in your code:\nimport java.awt.Color;\nimport javax.swing.*;\nimport javax.swing.table.*;\n\npublic class CustomerNotif extends JFrame {\n private static final long serialVersionUID = 1L;\n private JTable table;\n public static void main(String[] args) {\n EventQueue.invokeLater(new Runnable() {\n public void run() {\n try {\n CustomerNotif frame = new CustomerNotif();\n frame.setVisible(true);\n } catch (Exception e) {\n e.printStackTrace();\n }\n }\n });\n }\n public CustomerNotif() {\n // Set up the frame and other UI components here...\n\n // Create a custom cell renderer that sets the cell color based on whether the row is selected\n DefaultTableCellRenderer cellRenderer = new DefaultTableCellRenderer() {\n @Override\n public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected, boolean hasFocus, int row, int column) {\n // Set the default cell color to white\n setBackground(Color.WHITE);\n\n // If the row is selected, set the cell color to red\n if (isSelected) {\n setBackground(Color.RED);\n }\n\n // Return the configured renderer\n return super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column);\n }\n };\n\n // Set the cell renderer for each column in the table\n for (int i = 0; i < table.getColumnCount(); i++) {\n table.getColumnModel().getColumn(i).setCellRenderer(cellRenderer);\n }\n }\n}\n\nIn this example, we create a custom DefaultTableCellRenderer that sets the background color of each cell in the table depending on whether the row is selected or not. The renderer sets the background color to white for unselected rows, and to red for selected rows.\nTo use this renderer, we iterate through each column in the table and set the cell renderer for each column to the custom renderer we just created. This will cause the custom renderer to be used to render the cells in each column, and the selected rows will be colored red.\nI hope this helps! Let me know if you have any questions.\n"
] | [
2,
0
] | [] | [] | [
"java",
"row",
"selection",
"swing",
"swingx"
] | stackoverflow_0074670988_java_row_selection_swing_swingx.txt |
Q:
when I run the following function I received this error IndexError: index 0 is out of bounds for axis 0 with size 0
index problem while running the prediction function
def predict_death(anaemia,high_blood_pressure,serum_creatinine,serum_sodium,smoking):
anaemia_index = np.where(X.columns==anaemia)[0][0]
x = np.zeros(len(X.columns))
x[0] =high_blood_pressure
x[1] = serum_creatinine
x[2] = serum_sodium
x[3] = smoking
if anaemia_index >= 0:
x[anaemia_index] = 1
return mode.predict([x])[0]
death = predict_death(1,1, 1.9, 137,1)
A:
It looks like you are trying to use the np.where function to find the index of a column in the X DataFrame by its name. The np.where call itself returns the indices of the elements that match the condition, so the IndexError: index 0 is out of bounds for axis 0 with size 0 means that nothing matched: the value you passed for anaemia (1) is not one of the column names, so the array of matching indices is empty and indexing it with [0][0] fails.
To get the index of a column in a DataFrame by its name, you can use the DataFrame.columns.get_loc method instead. For example:
anaemia_index = X.columns.get_loc(anaemia)
This will return the index of the anaemia column in the X DataFrame. You can then use this index to access the corresponding column in the x array.
You can also simplify your code by using the DataFrame.loc method to select the columns you want in the x array, rather than creating the x array manually and setting the values of its elements one by one. For example:
x = X.loc[:, ["high_blood_pressure", "serum_creatinine", "serum_sodium", "smoking", anaemia]].values
This will create the x array by selecting the columns you want from the X DataFrame, and then converting the DataFrame to a NumPy array using the DataFrame.values attribute. You can then pass this array directly to the mode.predict method to get the predicted death outcome.
Here is an example of how your predict_death function could be implemented using these changes:
def predict_death(anaemia, high_blood_pressure, serum_creatinine, serum_sodium, smoking):
x = X.loc[:, ["high_blood_pressure", "serum_creatinine", "serum_sodium", "smoking", anaemia]].values
return mode.predict([x])[0]
death = predict_death("anaemia_yes", 1, 1.9, 137, 1)
Note that in this example, the anaemia argument to the predict_death function should be the name of the column in the X DataFrame that corresponds to the presence or absence of anaemia (e.g. "anaemia_yes" or "anaemia_no").
when I run the following function I received this error IndexError: index 0 is out of bounds for axis 0 with size 0 | index problem while running the prediction function
def predict_death(anaemia,high_blood_pressure,serum_creatinine,serum_sodium,smoking):
anaemia_index = np.where(X.columns==anaemia)[0][0]
x = np.zeros(len(X.columns))
x[0] =high_blood_pressure
x[1] = serum_creatinine
x[2] = serum_sodium
x[3] = smoking
if anaemia_index >= 0:
x[anaemia_index] = 1
return mode.predict([x])[0]
death = predict_death(1,1, 1.9, 137,1)
| [
"t looks like you are trying to use the np.where function to find the index of a column in the X DataFrame by its name. However, the np.where function does not work like this - it returns the indices of elements in an array that match a specified condition.\nTo get the index of a column in a DataFrame by its name, you can use the DataFrame.columns.get_loc method instead. For example:\nanaemia_index = X.columns.get_loc(anaemia)\n\nThis will return the index of the anaemia column in the X DataFrame. You can then use this index to access the corresponding column in the x array.\nYou can also simplify your code by using the DataFrame.loc method to select the columns you want in the x array, rather than creating the x array manually and setting the values of its elements one by one. For example:\nx = X.loc[:, [\"high_blood_pressure\", \"serum_creatinine\", \"serum_sodium\", \"smoking\", anaemia]].values\n\nThis will create the x array by selecting the columns you want from the X DataFrame, and then converting the DataFrame to a NumPy array using the DataFrame.values attribute. You can then pass this array directly to the mode.predict method to get the predicted death outcome.\nHere is an example of how your predict_death function could be implemented using these changes:\ndef predict_death(anaemia, high_blood_pressure, serum_creatinine, serum_sodium, smoking):\n x = X.loc[:, [\"high_blood_pressure\", \"serum_creatinine\", \"serum_sodium\", \"smoking\", anaemia]].values\n return mode.predict([x])[0]\n\ndeath = predict_death(\"anaemia_yes\", 1, 1.9, 137, 1)\n\nNote that in this example, the anaemia argument to the predict_death function should be the name of the column in the X DataFrame that corresponds to the presence or absence of anaemia (e.g. \"anaemia_yes\" or \"anaemia_no\").\n"
] | [
0
] | [] | [] | [
"deep_learning",
"for_loop",
"function",
"machine_learning",
"python"
] | stackoverflow_0074670166_deep_learning_for_loop_function_machine_learning_python.txt |
Q:
Flask-SQLAlchemy Legacy vs New Query Interface
I am trying to update some queries in a web application because as stated in Flask-SQLAlchemy
You may see uses of Model.query or session.query to build queries. That query interface is
considered legacy in SQLAlchemy. Prefer using the session.execute(select(...)) instead.
I have a query:
subnets = db.session.query(Subnet).order_by(Subnet.id).all()
Which is translated into:
SELECT subnet.id AS subnet_id, subnet.name AS subnet_name, subnet.network AS subnet_network, subnet.access AS subnet_access, subnet.date_created AS subnet_date_created
FROM subnet ORDER BY subnet.id
And I take the subnets variable and loop it over in my view in two different locations. And it works.
However, when I try to update my query and use the new SQLAlchemy interface:
subnets = db.session.execute(db.select(Subnet).order_by(Subnet.id)).scalars()
I can only loop once and there is nothing left to loop over in the second loop?
How can I achieve the same result with the new query interface?
A:
As noted in the comments to the question, your second example is not directly comparable to your first example because your second example is missing the .all() at the end. Without .all(), .scalars() returns a ScalarResult, an iterator that is exhausted after a single pass, which is why your second loop finds nothing left; .all() materializes the results into a list that can be iterated any number of times.
Side note:
session.scalars(select(Subnet).order_by(Subnet.id)).all()
is a convenient shorthand for
session.execute(select(Subnet).order_by(Subnet.id)).scalars().all()
and is the recommended approach for SQLAlchemy 1.4+.
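For example, a minimal sketch using the names from the question:
subnets = db.session.scalars(db.select(Subnet).order_by(Subnet.id)).all()

# `subnets` is now a plain Python list, so it can be iterated any number of times:
for subnet in subnets:
    print(subnet.name)
for subnet in subnets:
    print(subnet.network)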
| Flask-SQLAlchemy Legacy vs New Query Interface | I am trying to update some queries in a web application because as stated in Flask-SQLAlchemy
You may see uses of Model.query or session.query to build queries. That query interface is
considered legacy in SQLAlchemy. Prefer using the session.execute(select(...)) instead.
I have a query:
subnets = db.session.query(Subnet).order_by(Subnet.id).all()
Which is translated into:
SELECT subnet.id AS subnet_id, subnet.name AS subnet_name, subnet.network AS subnet_network, subnet.access AS subnet_access, subnet.date_created AS subnet_date_created
FROM subnet ORDER BY subnet.id
And I take the subnets variable and loop it over in my view in two different locations. And it works.
However, when I try to update my query and use the new SQLAlchemy interface:
subnets = db.session.execute(db.select(Subnet).order_by(Subnet.id)).scalars()
I can only loop once and there is nothing left to loop over in the second loop?
How can I achieve the same result with the new query interface?
| [
"As noted in the comments to the question, your second example is not directly comparable to your first example because your second example is missing the .all() at the end.\nSide note:\nsession.scalars(select(Subnet).order_by(Subnet.id)).all()\n\nis a convenient shorthand for\nsession.execute(select(Subnet).order_by(Subnet.id)).scalars().all()\n\nand is the recommended approach for SQLAlchemy 1.4+.\n"
] | [
1
] | [] | [] | [
"python",
"sqlalchemy"
] | stackoverflow_0074668995_python_sqlalchemy.txt |
Q:
mongod and mongo commands not working on windows 10
I've installed mongoDB on my windows 10 OS. Then I tried setting its database path to some directory by moving to it and typing mongod --datapath=data in cmd, where data is the folder which is to contain the db(I've used the relative path because I'm in that directory). But message comes that mongod is unrecognized command. After some searching I found that by specifying mongod path, i.e. "C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe" --datapath=data works. Similar thing happens for mongo.
I want to directly run mongod and mongo commands, I have seen people directly using it(without going to the directory or specifying the path).
A:
For a Windows installation, by default you have to use the full path to the exe unless you add it to the PATH.
To add it to the PATH:
01) Get path to bin, something like: C:\Program Files\MongoDB\Server\4.0\bin
02) Press the Windows key, type env, select Edit the system environment variables
03) On the Advanced tab, click Environment Variables
04) In the User variables for xxxx section, select path and then click the Edit... button
05) Click New and paste your path with a trailing slash, eg:
C:\Program Files\MongoDB\Server\4.0\bin\
06) Click OK, OK, OK and restart your command window.
Source
The examples you have seen are probably based on UNIX installations which I think by default install mongo as a service (which Windows doesn't) and that is what is called in those examples.
To simplify startup and configuration on Windows, you can also install it as a service. See the Mongo documentation here and the
"Configure Windows Service for MongoDB' section".
This will then allow you to start and stop Mongo by simply calling
net start MongoDB
Or
net stop MongoDB
A:
To add it to the PATH:
Add Mongo’s bin folder to the Path Environment Variable
Kindly check the link:
here
After adding the bin folder to the Path environment variable, simply type mongo in the terminal and it will start working.
A:
reference : Microsoft document
set your path like this
;C:\Program Files\MongoDB\Server\4.0\bin
This worked for me.
A:
If installed MongoDB version is 6.0 or above, mongo command will not work on Powershell/cmd. If you run the command you will get the following error:
'mongo' is not recognized as an internal or external command,
operable program or batch file.
To run mongo shell commands, you have to install MongoDB Shell (mongosh) separately.
After downloading the shell, extract the zip file; you can rename the extracted folder (mongosh-1.6.0-win32-x64) to "MongoDB Shell" and move that folder to Windows (C:) > Program Files
Now open the folder, go to bin and copy the path:
C:\Program Files\MongoDB Shell\mongosh-1.6.0-win32-x64\bin (or
C:\Program Files\mongosh-1.6.0-win32-x64\mongosh-1.6.0-win32-x64\bin)
Go to
Settings > System > About > Advanced system settings > Environment
Variables > Under System Variables, click on 'Path' then 'Edit' >
Click 'New' and paste the above copied path > Click 'Ok' 'Ok' 'Ok'
Now open Powershell/cmd, run the command 'mongosh'
You're all set to work with MongoDB
A:
Based on welshGaz answer above, I edited the User Path variable but it did not work for me yet. I wasn't able to access the System Path variables.
What I noticed from the errors on the command prompt is that it was missing the "C:\data\db" directory to store its files (I don't know what those files are for just yet). So I created that directory myself and it worked.
A:
Same problem here. I installed through the .msi file provided for windows X64bit. In the installer instructions from MongoDB (https://docs.mongodb.com/manual/tutorial/install-mongodb-on-windows/), I read that you can add C:\Program Files\MongoDB\Server\4.2\bin to the System Path. Then it asks to omit the full path to the the MongoDB binaries. That is where I think some information is missing. How are we supposed to omit the full path to the MongoDB binaries?
Currently I can get MongoDB to run mongod using:
"C:\Program Files\MongoDB\Server\4.2\bin\mongod.exe" --dbpath="c:\data\db"
For --dbpath="c:\data\db" you can replace "c:\data\db" with the path to your database.
I can also run mongo using:
"C:\Program Files\MongoDB\Server\4.2\bin\mongo.exe"
A:
Another possible reason is that you enabled a property in the YAML config file and it is not formatted properly. YAML requires specific syntax, such as a colon ":" followed by a space " ".
E.g.-
security:
authorization: enabled
A:
Use the mongosh command from your terminal; the mongo command no longer works for 6.0 and above.
If you are trying to connect with a connection URL, e.g. mongodb://localhost:27017/yourdb, try changing it to something like mongodb://127.0.0.1/yourdb
| mongod and mongo commands not working on windows 10 | I've installed mongoDB on my windows 10 OS. Then I tried setting its database path to some directory by moving to it and typing mongod --datapath=data in cmd, where data is the folder which is to contain the db(I've used the relative path because I'm in that directory). But message comes that mongod is unrecognized command. After some searching I found that by specifying mongod path, i.e. "C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe" --datapath=data works. Similar thing happens for mongo.
I want to directly run mongod and mongo commands, I have seen people directly using it(without going to the directory or specifying the path).
| [
"For a Windows installation, by default you have to use the full path to the exe unless you add it to the PATH. \nTo add it to the PATH: \n01) Get path to bin, something like: C:\\Program Files\\MongoDB\\Server\\4.0\\bin\n02) Press the Windows key, type env, select Edit the system environment variables\n03) On the Advanced tab, click Environment Variables\n04) In the User variables for xxxx section, select path and then click the Edit... button\n05) Click New and paste your path with a trailing slash, eg:\nC:\\Program Files\\MongoDB\\Server\\4.0\\bin\\\n06) Click OK, OK, OK and restart your command window.\nSource \nThe examples you have seen are probably based on UNIX installations which I think by default install mongo as a service (which Windows doesn't) and that is what is called in those examples.\nTo simplify startup and configuration on Windows, you can also install it as a service. See the Mongo documentation here and the \n\"Configure Windows Service for MongoDB' section\".\nThis will then allow you to start and stop Mongo by simply calling \nnet start MongoDB\n\nOr \nnet stop MongoDB\n\n",
"To add it to the PATH:\nAdd Mongo’s bin folder to the Path Environment Variable\nKindly check the link:\nhere\nAfter adding bin folder to the path Environment Variable\nthen simply type mongo in terminal it will start working\n",
"reference : Microsoft document\nset your path like this \n;C:\\Program Files\\MongoDB\\Server\\4.0\\bin\n\nthis is worked for me.\n",
"If installed MongoDB version is 6.0 or above, mongo command will not work on Powershell/cmd. If you run the command you will get the following error:\n'mongo' is not recognized as an internal or external command,\noperable program or batch file.\nTo run mongo commands, you have to install MongoDB Shell from\nAfter installing the shell, extract the zip file, you can rename the extracted folder (mongosh-1.6.0-win32-x64) as \"MongoDB Shell\" and move that folder to Windows(:C) > Program Files\nNow open the folder, go to bin and copy the path:\n\nC:\\Program Files\\MongoDB Shell\\mongosh-1.6.0-win32-x64\\bin (or\nC:\\Program Files\\mongosh-1.6.0-win32-x64\\mongosh-1.6.0-win32-x64\\bin)\n\nGo to\n\nSettings > System > About > Advanced system settings > Environment\nVariables > Under System Variables, click on 'Path' then 'Edit' >\nClick 'New' and paste the above copied path > Click 'Ok' 'Ok' 'Ok'\n\nNow open Powershell/cmd, run the command 'mongosh'\nYou're all set to work with MongoDB\n",
"Based on welshGaz answer above, I edited the User Path variable but it did not work for me yet. I wasn't able to access the System Path variables.\nWhat I noticed from the errors on the command prompt is that it what missing the \"C:\\data\\db\" directory to store its files (I don't know what those files are for just yet). So I created that directory myself and it worked.\n",
"Same problem here. I installed through the .msi file provided for windows X64bit. In the installer instructions from MongoDB (https://docs.mongodb.com/manual/tutorial/install-mongodb-on-windows/), I read that you can add C:\\Program Files\\MongoDB\\Server\\4.2\\bin to the System Path. Then it asks to omit the full path to the the MongoDB binaries. That is where I think some information is missing. How are we supposed to omit the full path to the MongoDB binaries?\nCurrently I can get MongoDB to run mongod using: \n\"C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongod.exe\" --dbpath=\"c:\\data\\db\"\nFor --dbpath=\"c:\\data\\db\" you can replace \"c:\\data\\db\" with the path to your database.\nI can also run mongo using:\n\"C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongo.exe\"\n",
"Another reason to it if you enabled any property in YAML file and it is not formatted properly. YAML looks for specific syntax like colon\":\"+space\" \".\nE.g.-\nsecurity:\n authorization: enabled\n\n",
"use mongosh command from your terminal. mongo command no longer works for 6.0 and above.\nif you are trying to connect from connection url eg mongodb://localhost:27017/yourdb try changing it to something like mongodb://127.0.0.1/yourdb\n"
] | [
50,
2,
1,
1,
0,
0,
0,
0
] | [] | [] | [
"mongodb"
] | stackoverflow_0044962540_mongodb.txt |
Q:
convert a List of objects to a Map in Kotlin
What is the most efficient way to convert a List of objects to a Map in Kotlin. Given a list of objects, a map is a structure that allows for efficient retrieval of values associated with a specific key.
A:
The function to do this is actually built in to Kotlin. It's called associateBy.
inline fun <T, K> Iterable<T>.associateBy(
keySelector: (T) -> K
): Map<K, T>
It takes a single argument, mapping an element of type T to the desired key of type K.
So, if you had a list of Person objects, and each person had a name you wanted to index by, you could write
personList.associateBy { it.name }
to get the map from names to people.
Note that there also exists a more general function called associate, which maps a T to a key and a value (where the value may or may not be equal to the original T value). This is useful if you want to perform a mapping and produce an associative container at the same time. But since it sounds like your use case involves indexing existing objects, associateBy is probably sufficient.
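For instance, a minimal sketch (the Person class here is just a hypothetical example):
data class Person(val name: String, val age: Int)

fun main() {
    val personList = listOf(Person("Alice", 30), Person("Bob", 25))

    // associateBy: key by name, value is the whole Person
    val byName: Map<String, Person> = personList.associateBy { it.name }

    // associate: key by name, value is just the age
    val ageByName: Map<String, Int> = personList.associate { it.name to it.age }

    println(byName["Alice"]) // Person(name=Alice, age=30)
    println(ageByName)       // {Alice=30, Bob=25}
}
Note that if two elements produce the same key, the later one wins, since map keys are unique.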
| convert a List of objects to a Map in Kotlin | What is the most efficient way to convert a List of objects to a Map in Kotlin. Given a list of objects, a map is a structure that allows for efficient retrieval of values associated with a specific key.
| [
"The function to do this is actually built in to Kotlin. It's called associateBy.\n\ninline fun <T, K> Iterable<T>.associateBy(\n keySelector: (T) -> K\n): Map<K, T>\n\n\nIt takes a single argument, mapping an element of type T to the desired key of type K.\nSo, if you had a list of Person objects, and each person had a name you wanted to index by, you could write\npersonList.associateBy { it.name }\n\nto get the map from names to people.\nNote that there also exists a more general function called associate, which maps a T to a key and a value (where the value may or may not be equal to the original T value). This is useful if you want to perform a mapping and produce an associative container at the same time. But since it sounds like your use case involves indexing existing objects, associateBy is probably sufficient.\n"
] | [
1
] | [] | [] | [
"kotlin"
] | stackoverflow_0074671169_kotlin.txt |
Q:
GeoIP2 Snowflake Java UDF Integration issue
I want to create a Java UDF in a snowflake worksheet in order to query GeoIp2 library and get the ISO code of a given IP. I have '@AWS_CSV_STAGE/lib/geoip2-2.8.0.jar','@AWS_CSV_STAGE/geodata/GeoLite2-City.mmdb'
already staged. How can i direct the function handler to the method that creates the Database Reader as explained here in the documentation for Java:
https://dev.maxmind.com/geoip/geolocate-an-ip/databases?lang=en#1-install-the-geoip2-client-library
In general, how can I achieve the whole thing below in my UDF?
File database = new File("/path/to/maxmind-database.mmdb");
DatabaseReader reader = new DatabaseReader.Builder(database).build();
InetAddress ipAddress = InetAddress.getByName("128.101.101.101");
CityResponse response = reader.city(ipAddress);
Country country = response.getCountry();
So far I wrote this, but of course it's not working;
anyway, I couldn't find much material about how to tackle this kind of problem.
CREATE OR REPLACE FUNCTION GEO()
returns varchar not null
language java
imports = ('@AWS_CSV_STAGE/lib/geoip2-2.8.0.jar','@AWS_CSV_STAGE/geodata/GeoLite2-City.mmdb')
handler = 'DatabaseReader.Builder';
SELECT GEO();
Basically, what I want to achieve is to call the UDF on a column of an IP address table and get the country code in another column for each IP address.
A:
To create a Java User-Defined Function (UDF) in Snowflake, you will need to use the CREATE FUNCTION statement in Snowflake SQL. The syntax for this statement is as follows:
CREATE OR REPLACE FUNCTION function_name
RETURNS data_type
LANGUAGE JAVA
IMPORTS = ('file_path_1', 'file_path_2', ...)
HANDLER = 'fully_qualified_class_name.method_name'
In your case, you can use the following CREATE FUNCTION statement to create your UDF:
CREATE OR REPLACE FUNCTION GEO
RETURNS VARCHAR
LANGUAGE JAVA
IMPORTS = ('@AWS_CSV_STAGE/lib/geoip2-2.8.0.jar','@AWS_CSV_STAGE/geodata/GeoLite2-City.mmdb')
HANDLER = 'com.maxmind.geoip2.DatabaseReader.Builder'
SELECT GEO(ip_address_column) AS country_code
FROM ip_addresses
This query will use the GEO UDF to get the country code for each IP address in the ip_addresses table, and return the country code in a new country_code column.
A:
In order to create a Java UDF in Snowflake, create a Java class that defines the UDF function and its behavior. This class should include the code that you provided to create the DatabaseReader and query the GeoIP2 database to get the ISO code for a given IP address.
Once this class is defined, use the CREATE OR REPLACE FUNCTION statement in Snowflake to register the function and make it available for use in your queries. The IMPORTS clause of the CREATE OR REPLACE FUNCTION statement is used to specify the external JAR files that your function depends on, such as the GeoIP2 library JAR file that you mentioned.
Here is an example of how your CREATE OR REPLACE FUNCTION statement might look:
CREATE OR REPLACE FUNCTION GEO(ipAddress VARCHAR)
RETURNS VARCHAR
LANGUAGE JAVA
IMPORTS ('@AWS_CSV_STAGE/lib/geoip2-2.8.0.jar','@AWS_CSV_STAGE/geodata/GeoLite2-City.mmdb')
HANDLER 'com.example.GeoIpFunction'
In this example, com.example.GeoIpFunction is the fully qualified name of the Java class that you defined to implement your function. To use the function in a query call it like any other Snowflake function, and pass in the IP address as an argument:
SELECT GEO('128.101.101.101')
This would return the ISO code for the specified IP address. Also use this function in a query to get the ISO code for each IP address in a column of a table:
SELECT GEO(ip_address) AS iso_code FROM my_table
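For illustration, a rough sketch of what such a handler class could look like is shown below. The class name com.example.GeoIpFunction comes from the statement above; the method name countryCode and the use of the com.snowflake.import_directory system property to locate the staged .mmdb file are assumptions that should be verified against the Snowflake Java UDF documentation (with a matching HANDLER = 'com.example.GeoIpFunction.countryCode'):
package com.example;

import java.io.File;
import java.io.IOException;
import java.net.InetAddress;

import com.maxmind.geoip2.DatabaseReader;
import com.maxmind.geoip2.exception.GeoIp2Exception;
import com.maxmind.geoip2.model.CityResponse;

public class GeoIpFunction {

    // Cache the reader so the database file is only opened once per JVM
    private static DatabaseReader reader;

    private static DatabaseReader getReader() throws IOException {
        if (reader == null) {
            // Files listed in IMPORTS are made available in this directory at runtime
            // (assumption: property name taken from the Snowflake Java UDF docs)
            String importDir = System.getProperty("com.snowflake.import_directory");
            reader = new DatabaseReader.Builder(new File(importDir + "GeoLite2-City.mmdb")).build();
        }
        return reader;
    }

    // Handler method referenced by HANDLER = 'com.example.GeoIpFunction.countryCode'
    public static String countryCode(String ip) throws IOException, GeoIp2Exception {
        CityResponse response = getReader().city(InetAddress.getByName(ip));
        return response.getCountry().getIsoCode();
    }
}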
| GeoIP2 Snowflake Java UDF Integration issue | I want to create a Java UDF in a snowflake worksheet in order to query GeoIp2 library and get the ISO code of a given IP. I have '@AWS_CSV_STAGE/lib/geoip2-2.8.0.jar','@AWS_CSV_STAGE/geodata/GeoLite2-City.mmdb'
already staged. How can i direct the function handler to the method that creates the Database Reader as explained here in the documentation for Java:
https://dev.maxmind.com/geoip/geolocate-an-ip/databases?lang=en#1-install-the-geoip2-client-library
in general how can i achieve this whole thing below in my udf?
File database = new File("/path/to/maxmind-database.mmdb")
DatabaseReader reader = new DatabaseReader.Builder(database).build();
InetAddress ipAddress = InetAddress.getByName("128.101.101.101");
CityResponse response = reader.city(ipAddress);
Country country = response.getCountry();
so far i wrote this but of course it's not working:
anyway i couldn't find much material about how to tackle this kind of problem.
CREATE OR REPLACE FUNCTION GEO()
returns varchar not null
language java
imports = ('@AWS_CSV_STAGE/lib/geoip2-2.8.0.jar','@AWS_CSV_STAGE/geodata/GeoLite2-City.mmdb')
handler = 'DatabaseReader.Builder';
SELECT GEO();
basically what i want to achieve is to call the UDF on a column of ip address table and get the country code in another column for each ip address.
| [
"To create a Java User-Defined Function (UDF) in Snowflake, you will need to use the CREATE FUNCTION statement in Snowflake SQL. The syntax for this statement is as follows:\nCREATE OR REPLACE FUNCTION function_name\nRETURNS data_type\nLANGUAGE JAVA\nIMPORTS = ('file_path_1', 'file_path_2', ...)\nHANDLER = 'fully_qualified_class_name.method_name'\n\n\nIn your case, you can use the following CREATE FUNCTION statement to create your UDF:\nCREATE OR REPLACE FUNCTION GEO\nRETURNS VARCHAR\nLANGUAGE JAVA\nIMPORTS = ('@AWS_CSV_STAGE/lib/geoip2-2.8.0.jar','@AWS_CSV_STAGE/geodata/GeoLite2-City.mmdb')\nHANDLER = 'com.maxmind.geoip2.DatabaseReader.Builder'\n\nSELECT GEO(ip_address_column) AS country_code\nFROM ip_addresses\n\n\nThis query will use the GEO UDF to get the country code for each IP address in the ip_addresses table, and return the country code in a new country_code column.\n",
"In order to create a Java UDF in Snowflake, create a Java class that defines the UDF function and its behavior. This class should include the code that you provided to create the DatabaseReader and query the GeoIP2 database to get the ISO code for a given IP address.\nOnce this class is defined, use the CREATE OR REPLACE FUNCTION statement in Snowflake to register the function and make it available for use in your queries. The IMPORTS clause of the CREATE OR REPLACE FUNCTION statement is used to specify the external JAR files that your function depends on, such as the GeoIP2 library JAR file that you mentioned.\nHere is an example of how your CREATE OR REPLACE FUNCTION statement might look:\nCREATE OR REPLACE FUNCTION GEO(ipAddress VARCHAR)\nRETURNS VARCHAR\nLANGUAGE JAVA\nIMPORTS ('@AWS_CSV_STAGE/lib/geoip2-2.8.0.jar','@AWS_CSV_STAGE/geodata/GeoLite2-City.mmdb')\nHANDLER 'com.example.GeoIpFunction'\n\nIn this example, com.example.GeoIpFunction is the fully qualified name of the Java class that you defined to implement your function. To use the function in a query call it like any other Snowflake function, and pass in the IP address as an argument:\nSELECT GEO('128.101.101.101')\n\nThis would return the ISO code for the specified IP address. Also use this function in a query to get the ISO code for each IP address in a column of a table:\nSELECT GEO(ip_address) AS iso_code FROM my_table\n\n"
] | [
0,
0
] | [] | [] | [
"java",
"snowflake_cloud_data_platform"
] | stackoverflow_0074669931_java_snowflake_cloud_data_platform.txt |
Q:
How do I write a unit test that successfully recognizes that a function threw an exception?
I'm trying to write my very first unit tests ever in Laravel using PHPUnit. The tests I am writing are trying to verify that a fairly simple helper function works properly. The way it is written, my function expects one or two arguments and then tries to verify that the arguments are appropriate; if they are not, I throw InvalidArgumentExceptions. In other words, the function is working correctly if it throws an InvalidArgumentException on certain values being supplied for the arguments.
How do I write a unit test that says, in effect, if the function gets a decimal number in the second argument when it is expecting an int (or if an int is provided but it is out of range), an InvalidArgumentException should be thrown? What would the assertion look like? (I've looked at the list of available assertions but nothing looks like it would be appropriate for an exception.)
Here is my function, in its entirety:
namespace App\Helpers;
/**
* This class contains a variety of helper functions useful to this app.
*/
class Helper2
{
/**
* This function chooses a random number of elements from an array and chooses
* them at random.
*
* $array - the array of items from which elements will be selected
* $numberOfElements - the number of elements from the array that the user desires.
*
* The array needs to be a simple array of strings, integers, etc.
*
* The $numberOfElements defaults to null. When the value is null, the function
* will choose a random number of elements between 1 and the number of elements
* in the array. If a non-null value greater than 0 is provided, that number of
* elements will be returned by the function. If the number is larger than the
* number of elements in the array, all elements of the array will be returned.
*/
public static function choose_random_elements(array $array, int $numberOfElementsDesired = null)
{
// The first argument of the function must be an array. (No test necessary:
// Laravel will not even let you code the function call with anything but
// an array in the first argument.)
/* if (! is_array($array)) {
throw new \InvalidArgumentException("The input array is not actually an array.");
}
*/
// Store the array in a collection.
$myCollection = collect($array);
// The second argument of the function must be numeric.
if (! is_numeric($numberOfElementsDesired)) {
throw new \InvalidArgumentException ("The number of elements desired must be a number.");
}
// The second argument of the function must be an integer.
if (! is_integer($numberOfElementsDesired)) {
throw new \InvalidArgumentException ("The number of elements desired must be an integer.");
}
// The second argument of the function cannot exceed the number of elements in the array.
if ($numberOfElementsDesired > count($myCollection)) {
throw new \InvalidArgumentException ("The number of elements desired cannot exceed the number of elements in the array.");
}
// The second argument of the function cannot be zero or less.
if ($numberOfElementsDesired <= 0) {
throw new \InvalidArgumentException("You cannot choose a negative number of elements from the array.");
}
// If no value was supplied for the second argument of the function, choose
// an integer between 1 and the number of elements in the array.
if (is_null($numberOfElementsDesired)) {
$numberOfElementsDesired = rand(1, count($myCollection));
}
// If the number supplied for the second argument of the function exceeds
// the number of elements in the array, set the second argument to the size
// of the array.
if ($numberOfElementsDesired > count($myCollection)) {
$numberOfElementsDesired = count($myCollection);
}
// Choose a random number of elements at random from the collection and put them in a new collection.
$randomSelectionsCollection = $myCollection->random($numberOfElementsDesired);
// Convert the resulting collection into an array.
$randomSelectionsArray = $randomSelectionsCollection->toArray();
// Convert the array of selected elements into a string.
$randomSelectionsString = implode(',', $randomSelectionsArray);
echo "Function output: " . $randomSelectionsString . "\n";
return $randomSelectionsString;
}
}
The only alternative I can think of is writing the test in a try/catch block. I came up with this and it seems to work. Is it a reasonable way to write this kind of test or is there a better approach?
try {
$result = $helper2->choose_random_elements(array(1, 6, "cat"), -4);
} catch(InvalidArgumentException $excp) {
$this->assertEquals("You cannot choose a negative number of elements from the array.", $excp->getMessage());
}
I am running Laravel 9.
A:
It is very simple indeed; your test should look like the following:
$this->expectException(InvalidArgumentException::class);
$this->expectExceptionMessage("You cannot choose a negative number of elements from the array.");
Helper2::choose_random_elements(array(1, 6, "cat"), -4);
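Put together as a complete test method, that might look like the sketch below (the test class and method names are placeholders):
use App\Helpers\Helper2;
use PHPUnit\Framework\TestCase;

class Helper2Test extends TestCase
{
    public function test_it_rejects_a_negative_number_of_elements(): void
    {
        // The expectations must be declared before the call that is supposed to throw
        $this->expectException(\InvalidArgumentException::class);
        $this->expectExceptionMessage("You cannot choose a negative number of elements from the array.");

        Helper2::choose_random_elements([1, 6, "cat"], -4);
    }
}
One caveat with the try/catch version from the question: if no exception is thrown, the assertEquals() is simply never reached and the test does not fail (PHPUnit will at most flag it as risky for having no assertions), so you would normally add $this->fail('Expected InvalidArgumentException was not thrown.') after the call inside the try block.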
| How do I write a unit test that successfully recognizes that a function threw an exception? | I'm trying to write my very first unit tests ever in Laravel using PHPUnit. The tests I am writing are trying to verify that a fairly simple helper function works properly. The way it is written, my function expects one or two arguments and then tries to verify that the arguments are appropriate; if they are not, I throw InvalidArgumentExceptions. In other words, the function is working correctly if it throws an InvalidArgumentException on certain values being supplied for the arguments.
How do I write a unit test that says, in effect, if the function gets a decimal number in the second argument when it is expecting an int (or if an int is provided but it is out of range), an InvalidArgumentException should be thrown? What would the assertion look like? (I've looked at the list of available assertions but nothing looks like it would be appropriate for an exception.)
Here is my function, in its entirety:
namespace App\Helpers;
/**
* This class contains a variety of helper functions useful to this app.
*/
class Helper2
{
/**
* This function chooses a random number of elements from an array and chooses
* them at random.
*
* $array - the array of items from which elements will be selected
* $numberOfElements - the number of elements from the array that the user desires.
*
* The array needs to be a simple array of strings, integers, etc.
*
* The $numberOfElements defaults to null. When the value is null, the function
* will choose a random number of elements between 1 and the number of elements
* in the array. If a non-null value greater than 0 is provided, that number of
* elements will be returned by the function. If the number is larger than the
* number of elements in the array, all elements of the array will be returned.
*/
public static function choose_random_elements(array $array, int $numberOfElementsDesired = null)
{
// The first argument of the function must be an array. (No test necessary:
// Laravel will not even let you code the function call with anything but
// an array in the first argument.)
/* if (! is_array($array)) {
throw new \InvalidArgumentException("The input array is not actually an array.");
}
*/
// Store the array in a collection.
$myCollection = collect($array);
// The second argument of the function must be numeric.
if (! is_numeric($numberOfElementsDesired)) {
throw new \InvalidArgumentException ("The number of elements desired must be a number.");
}
// The second argument of the function must be an integer.
if (! is_integer($numberOfElementsDesired)) {
throw new \InvalidArgumentException ("The number of elements desired must be an integer.");
}
// The second argument of the function cannot exceed the number of elements in the array.
if ($numberOfElementsDesired > count($myCollection)) {
throw new \InvalidArgumentException ("The number of elements desired cannot exceed the number of elements in the array.");
}
// The second argument of the function cannot be zero or less.
if ($numberOfElementsDesired <= 0) {
throw new \InvalidArgumentException("You cannot choose a negative number of elements from the array.");
}
// If no value was supplied for the second argument of the function, choose
// an integer between 1 and the number of elements in the array.
if (is_null($numberOfElementsDesired)) {
$numberOfElementsDesired = rand(1, count($myCollection));
}
// If the number supplied for the second argument of the function exceeds
// the number of elements in the array, set the second argument to the size
// of the array.
if ($numberOfElementsDesired > count($myCollection)) {
$numberOfElementsDesired = count($myCollection);
}
// Choose a random number of elements at random from the collection and put them in a new collection.
$randomSelectionsCollection = $myCollection->random($numberOfElementsDesired);
// Convert the resulting collection into an array.
$randomSelectionsArray = $randomSelectionsCollection->toArray();
// Convert the array of selected elements into a string.
$randomSelectionsString = implode(',', $randomSelectionsArray);
echo "Function output: " . $randomSelectionsString . "\n";
return $randomSelectionsString;
}
}
The only alternative I can think of is writing the test in a try/catch block. I came up with this and it seems to work. Is it a reasonable way to write this kind of test or is there a better approach?
try {
$result = $helper2->choose_random_elements(array(1, 6, "cat"), -4);
} catch(InvalidArgumentException $excp) {
$this->assertEquals("You cannot choose a negative number of elements from the array.", $excp->getMessage());
}
I am running Laravel 9.
| [
"It is very simple indeed, your test should looke like the following:\n$this->expectException(InvalidArgumentException::class);\n$this->expectExceptionMessage(\"You cannot choose a negative number of elements from the array.\");\n\nHelper2::choose_random_elements(array(1, 6, \"cat\"), -4);\n\n"
] | [
1
] | [] | [] | [
"laravel",
"phpunit",
"unit_testing"
] | stackoverflow_0074671100_laravel_phpunit_unit_testing.txt |
Q:
Not able to disable scrolling in SingleChildScrollView
I am using a SingleChildScrollView and when tapping on FormTextField in it, the keyboard appears and the scroll view is scrolled upwards. After dismissing the keyboard, I am still able to scroll the Scrollview manually. Can you please suggest any solution to disable the manual scrolling after FormTextField disappears.
A:
You can use the following code in your singleChildScrollView.
physics: NeverScrollableScrollPhysics(),
It stops it from being able to scroll.
A:
In my case, when I put physics: NeverScrollableScrollPhysics(), the SingleChildScrollView can't be scrolled, but a scroll bar still appears. If I use the scroll bar, the content is scrolled.
I need to hide the scroll bar:
ScrollConfiguration(
behavior:
ScrollConfiguration.of(context).copyWith(scrollbars: false),
child: SingleChildScrollView(
physics: const NeverScrollableScrollPhysics(),
child: child,
)),
| Not able to disable scrolling in SingleChildScrollView | I am using a SingleChildScrollView and when tapping on FormTextField in it, the keyboard appears and the scroll view is scrolled upwards. After dismissing the keyboard, I am still able to scroll the Scrollview manually. Can you please suggest any solution to disable the manual scrolling after FormTextField disappears.
| [
"You can use the following code in your singleChildScrollView. \nphysics: NeverScrollableScrollPhysics(),\n\nIt stops it from being able to scroll. \n",
"In my case, when I put physics: NeverScrollableScrollPhysics() SingleChildScrollView can't be scrolled, but a scroll bar appears. If I use the scroll bar, the content is scrolled.\nI need to hide the scroll bar:\nScrollConfiguration(\n behavior:\n ScrollConfiguration.of(context).copyWith(scrollbars: false),\n child: SingleChildScrollView(\n physics: const NeverScrollableScrollPhysics(),\n child: child,\n )),\n\n"
] | [
39,
0
] | [
"I think we have the same problem. I use two settings in singleChildScrollView\nphysics: NeverScrollableScrollPhysics(),\nreverse: true,\nThis solution is acceptable to me, I hope it is good for you.\n"
] | [
-3
] | [
"dart",
"flutter"
] | stackoverflow_0055917027_dart_flutter.txt |
Q:
NPM LINK how my team get the code while the shared module not push to npm or even registered at the package json
So I just tried to use npm link for my multi-project setup, and everything works just fine for now. But one thing that I don't understand: since the shared folder is not registered in package.json and the node_modules folder is not pushed to the remote repo, how can my other team members get the code that is being shared on my local machine? It would work if I could change the directory where the symlink is placed (anywhere but the node_modules folder).
Please help or give some explanation here
Thanks
A:
When you use npm link on a local package, it creates a symlink from the global node_modules folder to the package's directory on your local machine. This allows other projects on your local machine to use the package as if it was installed from the registry, without needing to publish it or add it to the dependencies in their package.json.
However, this approach does not work for other members of your team, as the symlink is only created on your local machine. In order for other team members to use the shared package, you will need to either publish it to a registry (such as npm) or add it as a Git submodule in the repository.
To publish the package, you will need to create an account on a registry (such as npm) and follow the steps to publish your package. Once it is published, other team members can install it by running npm install in their project.
To add the package as a Git submodule, you can run the following commands in the root of your repository:
$ git submodule add <repository-url> shared
$ git submodule update --init --recursive
This will add the shared package as a submodule in the shared directory and initialize the submodules in the repository. Other team members can then clone the repository and run git submodule update --init --recursive to clone the submodule and initialize it in their local copy of the repository.
Once the shared package is added as a submodule or published to a registry, other team members can use it in their project by adding it to their dependencies in their package.json and running `npm install`.
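For example, if the shared package lives in the shared submodule directory, a package.json entry could reference it locally (the package name shared here is a placeholder for whatever the package is actually called):
{
  "dependencies": {
    "shared": "file:./shared"
  }
}
Alternatively, a Git URL such as "git+https://github.com/your-org/shared.git" (placeholder URL) can be used as the dependency value, which avoids the submodule entirely.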
| NPM LINK how my team get the code while the shared module not push to npm or even registered at the package json | So I just try to use npm link for my multi project, everything work just fine for now, but one thing that i dont understand is since the shared folder not registered in the package.json and the node_modules folder not pushed to the remote repo then how my other team can get the code that being shared in my local ? it will done if i can change the directory of where the symlink will be placed (anywhere but not node_modules folder)
Please help or give some explanation here
Thanks
| [
"When you use npm link on a local package, it creates a symlink from the global node_modules folder to the package's directory on your local machine. This allows other projects on your local machine to use the package as if it was installed from the registry, without needing to publish it or add it to the dependencies in their package.json.\nHowever, this approach does not work for other members of your team, as the symlink is only created on your local machine. In order for other team members to use the shared package, you will need to either publish it to a registry (such as npm) or add it as a Git submodule in the repository.\nTo publish the package, you will need to create an account on a registry (such as npm) and follow the steps to publish your package. Once it is published, other team members can install it by running npm install in their project.\nTo add the package as a Git submodule, you can run the following commands in the root of your repository:\n$ git submodule add <repository-url> shared\n$ git submodule update --init --recursive\n\nThis will add the shared package as a submodule in the shared directory and initialize the submodules in the repository. Other team members can then clone the repository and run git submodule update --init --recursive to clone the submodule and initialize it in their local copy of the repository.\nOnce the shared package is added as a submodule or published to a registry, other team members can use it in their project by adding it to their dependencies in their package.json and running `npm\n"
] | [
1
] | [] | [] | [
"node.js",
"npm",
"npm_link"
] | stackoverflow_0074671233_node.js_npm_npm_link.txt |
Q:
Convert Multiline String (utmpdump results) into JSON
All,
This is my first time submitting a stack overflow question, so thanks in advance for taking the time to read/consider my question. I'm currently using the 'utmpdump' utility to dump linux authentication log results each hour from a bash script, which is done using the syntax shown below:
dateLastHour=$(date +"%a %b %d %H:" -d '1 hour ago')
dateNow=$(date +"%a %b %d %H:")
utmpdump /var/log/wtmp* | awk "/$dateLastHour/,/$dateNow/"
What I'm now trying to accomplish, and the subject of this question, is how I can take these results and delimit them by newline for each authentication log entry, before converting each authentication event into its own JSON file to be exported to an external syslog collector for additional analysis and long-term storage.
As an example, here's some of the test results I've been using:
[7] [08579] [ts/0] [egecko] [pts/0 ] [10.0.2.6 ] [1.1.1.1 ] [Fri Nov 04 23:40:29 2022 EDT]
[8] [08579] [ ] [ ] [pts/0 ] [ ] [0.0.0.0 ] [Fri Nov 04 23:55:16 2022 EDT]
[2] [00000] [~~ ] [reboot ] [~ ] [3.10.0-1160.80.1.el7.x86_64] [0.0.0.0 ] [Sat Dec 03 12:28:05 2022 EST]
[5] [00811] [tty1] [ ] [tty1 ] [ ] [0.0.0.0 ] [Sat Dec 03 12:28:12 2022 EST]
[6] [00811] [tty1] [LOGIN ] [tty1 ] [ ] [0.0.0.0 ] [Sat Dec 03 12:28:12 2022 EST]
[1] [00051] [~~ ] [runlevel] [~ ] [3.10.0-1160.80.1.el7.x86_64] [0.0.0.0 ] [Sat Dec 03 12:28:58 2022 EST]
[7] [02118] [ts/0] [egecko] [pts/0 ] [1.1.1.1 ] [1.1.1.1 ] [Sat Dec 03 12:51:22 2022 EST]
Any assistance or pointers here is greatly appreciated!
I've been using the following sed command to trim out unnecessary whitespace, and I know that what I probably should do is use IFS to split the results string into new lines before using brackets as the delimiter:
utmpResults=$(echo "$utmpResults" | sed 's/ */ /g')
IFS="\n" read -a array <<< "$utmpResults"
echo $array
But when I echo $array it only returns the first line...?
A:
With the help of jq (sed for json), it's an easy task:
#!/bin/bash
jq -R -c '
select(length > 0) | # remove empty lines
[match("\\[(.*?)\\]"; "g").captures[].string # find content within square brackets
| sub("^\\s+";"") | sub("\\s+$";"")] # trim content
| { # convert to json object
"type" : .[0],
"pid" : .[1],
"terminal_name_suffix" : .[2],
"user" : .[3],
"tty" : .[4],
"remote_hostname" : .[5],
"remote_host" : .[6],
"datetime" : .[7],
"timestamp" : (.[7] | strptime("%a %b %d %T %Y %Z") | mktime)
}' input.txt
Output
{"type":"7","pid":"08579","terminal_name_suffix":"ts/0","user":"egecko","tty":"pts/0","remote_hostname":"10.0.2.6","remote_host":"1.1.1.1","datetime":"Fri Nov 04 23:40:29 2022 EDT","timestamp":1667605229}
{"type":"8","pid":"08579","terminal_name_suffix":"","user":"","tty":"pts/0","remote_hostname":"","remote_host":"0.0.0.0","datetime":"Fri Nov 04 23:55:16 2022 EDT","timestamp":1667606116}
{"type":"2","pid":"00000","terminal_name_suffix":"~~","user":"reboot","tty":"~","remote_hostname":"3.10.0-1160.80.1.el7.x86_64","remote_host":"0.0.0.0","datetime":"Sat Dec 03 12:28:05 2022 EST","timestamp":1670070485}
{"type":"5","pid":"00811","terminal_name_suffix":"tty1","user":"","tty":"tty1","remote_hostname":"","remote_host":"0.0.0.0","datetime":"Sat Dec 03 12:28:12 2022 EST","timestamp":1670070492}
{"type":"6","pid":"00811","terminal_name_suffix":"tty1","user":"LOGIN","tty":"tty1","remote_hostname":"","remote_host":"0.0.0.0","datetime":"Sat Dec 03 12:28:12 2022 EST","timestamp":1670070492}
{"type":"1","pid":"00051","terminal_name_suffix":"~~","user":"runlevel","tty":"~","remote_hostname":"3.10.0-1160.80.1.el7.x86_64","remote_host":"0.0.0.0","datetime":"Sat Dec 03 12:28:58 2022 EST","timestamp":1670070538}
{"type":"7","pid":"02118","terminal_name_suffix":"ts/0","user":"egecko","tty":"pts/0","remote_hostname":"1.1.1.1","remote_host":"1.1.1.1","datetime":"Sat Dec 03 12:51:22 2022 EST","timestamp":1670071882}
Without the option -c you can create formatted output.
To save each line in a file, you can do it like this in bash.
I have chosen the timestamp as the file name.
INPUT_AS_JSON_LINES=$(
jq -R -c '
select(length > 0) | # remove empty lines
[match("\\[(.*?)\\]"; "g").captures[].string # find content within square brackets
| sub("^\\s+";"") | sub("\\s+$";"")] # trim content
| { # convert to json object
"type" : .[0],
"pid" : .[1],
"terminal_name_suffix" : .[2],
"user" : .[3],
"tty" : .[4],
"remote_hostname" : .[5],
"remote_host" : .[6],
"datetime" : .[7],
"timestamp" : (.[7] | strptime("%a %b %d %T %Y %Z") | mktime)
}' input.txt
)
while read line
do
FILENAME="$(jq '.timestamp' <<< "$line").json"
CONTENT=$(jq <<< "$line") # format json
echo "writing file '$FILENAME'"
echo "$CONTENT" > "$FILENAME"
done <<< "$INPUT_AS_JSON_LINES"
Output
writing file '1667605229.json'
writing file '1667606116.json'
writing file '1670070485.json'
writing file '1670070492.json'
writing file '1670070492.json'
writing file '1670070538.json'
writing file '1670071882.json'
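To feed the hourly utmpdump output from the question straight into this filter instead of reading input.txt, the jq program can be saved to a file and loaded with -f (the filename to_json.jq is just a placeholder):
dateLastHour=$(date +"%a %b %d %H:" -d '1 hour ago')
dateNow=$(date +"%a %b %d %H:")

utmpdump /var/log/wtmp* | awk "/$dateLastHour/,/$dateNow/" | jq -R -c -f to_json.jq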
| Convert Multiline String (utmpdump results) into JSON | All,
This is my first time submitting a stack overflow question, so thanks in advance for taking the time to read/consider my question. I'm currently using the 'utmpdump' utility to dump linux authentication log results each hour from a bash script, which is done using the syntax shown below:
dateLastHour=$(date +"%a %b %d %H:" -d '1 hour ago')
dateNow=$(date +"%a %b %d %H:")
utmpdump /var/log/wtmp* | awk "/$dateLastHour/,/$dateNow/"
What I'm now trying to accomplish and the subject of this question is how can I take these results and delimited them by new line for each authentication log, before converting each authentication event into it's own JSON file to be exported to an external syslog collector for additional analysis and long term storage?
As an example, here's some of the test results I've been using:
[7] [08579] [ts/0] [egecko] [pts/0 ] [10.0.2.6 ] [1.1.1.1 ] [Fri Nov 04 23:40:29 2022 EDT]
[8] [08579] [ ] [ ] [pts/0 ] [ ] [0.0.0.0 ] [Fri Nov 04 23:55:16 2022 EDT]
[2] [00000] [~~ ] [reboot ] [~ ] [3.10.0-1160.80.1.el7.x86_64] [0.0.0.0 ] [Sat Dec 03 12:28:05 2022 EST]
[5] [00811] [tty1] [ ] [tty1 ] [ ] [0.0.0.0 ] [Sat Dec 03 12:28:12 2022 EST]
[6] [00811] [tty1] [LOGIN ] [tty1 ] [ ] [0.0.0.0 ] [Sat Dec 03 12:28:12 2022 EST]
[1] [00051] [~~ ] [runlevel] [~ ] [3.10.0-1160.80.1.el7.x86_64] [0.0.0.0 ] [Sat Dec 03 12:28:58 2022 EST]
[7] [02118] [ts/0] [egecko] [pts/0 ] [1.1.1.1 ] [1.1.1.1 ] [Sat Dec 03 12:51:22 2022 EST]
Any assistance or pointers here is greatly appreciated!
I've been using the following SED commands to trim out unnessecary whitespace, and I know that what I probably should do is using IDF to split the results string into new lines before using brackets as the delimeter:
utmpResults=$(echo "$utmpResults" | sed 's/ */ /g')
IFS="\n" read -a array <<< "$utmpResults"
echo $array
But when I echo $array it only returns the first line...?
| [
"With the help of jq (sed for json), it's an easy task:\n#!/bin/bash\n\njq -R -c '\n select(length > 0) | # remove empty lines\n [match(\"\\\\[(.*?)\\\\]\"; \"g\").captures[].string # find content within square brackets\n | sub(\"^\\\\s+\";\"\") | sub(\"\\\\s+$\";\"\")] # trim content\n | { # convert to json object\n \"type\" : .[0],\n \"pid\" : .[1],\n \"terminal_name_suffix\" : .[2],\n \"user\" : .[3],\n \"tty\" : .[4],\n \"remote_hostname\" : .[5],\n \"remote_host\" : .[6],\n \"datetime\" : .[7],\n \"timestamp\" : (.[7] | strptime(\"%a %b %d %T %Y %Z\") | mktime)\n }' input.txt\n\nOutput\n{\"type\":\"7\",\"pid\":\"08579\",\"terminal_name_suffix\":\"ts/0\",\"user\":\"egecko\",\"tty\":\"pts/0\",\"remote_hostname\":\"10.0.2.6\",\"remote_host\":\"1.1.1.1\",\"datetime\":\"Fri Nov 04 23:40:29 2022 EDT\",\"timestamp\":1667605229}\n{\"type\":\"8\",\"pid\":\"08579\",\"terminal_name_suffix\":\"\",\"user\":\"\",\"tty\":\"pts/0\",\"remote_hostname\":\"\",\"remote_host\":\"0.0.0.0\",\"datetime\":\"Fri Nov 04 23:55:16 2022 EDT\",\"timestamp\":1667606116}\n{\"type\":\"2\",\"pid\":\"00000\",\"terminal_name_suffix\":\"~~\",\"user\":\"reboot\",\"tty\":\"~\",\"remote_hostname\":\"3.10.0-1160.80.1.el7.x86_64\",\"remote_host\":\"0.0.0.0\",\"datetime\":\"Sat Dec 03 12:28:05 2022 EST\",\"timestamp\":1670070485}\n{\"type\":\"5\",\"pid\":\"00811\",\"terminal_name_suffix\":\"tty1\",\"user\":\"\",\"tty\":\"tty1\",\"remote_hostname\":\"\",\"remote_host\":\"0.0.0.0\",\"datetime\":\"Sat Dec 03 12:28:12 2022 EST\",\"timestamp\":1670070492}\n{\"type\":\"6\",\"pid\":\"00811\",\"terminal_name_suffix\":\"tty1\",\"user\":\"LOGIN\",\"tty\":\"tty1\",\"remote_hostname\":\"\",\"remote_host\":\"0.0.0.0\",\"datetime\":\"Sat Dec 03 12:28:12 2022 EST\",\"timestamp\":1670070492}\n{\"type\":\"1\",\"pid\":\"00051\",\"terminal_name_suffix\":\"~~\",\"user\":\"runlevel\",\"tty\":\"~\",\"remote_hostname\":\"3.10.0-1160.80.1.el7.x86_64\",\"remote_host\":\"0.0.0.0\",\"datetime\":\"Sat Dec 03 12:28:58 2022 EST\",\"timestamp\":1670070538}\n{\"type\":\"7\",\"pid\":\"02118\",\"terminal_name_suffix\":\"ts/0\",\"user\":\"egecko\",\"tty\":\"pts/0\",\"remote_hostname\":\"1.1.1.1\",\"remote_host\":\"1.1.1.1\",\"datetime\":\"Sat Dec 03 12:51:22 2022 EST\",\"timestamp\":1670071882}\n\nWithout the option -c you can create formatted output.\n\nTo save each line in a file, you can do it like this in bash.\nI have chosen the timestamp as the file name.\nINPUT_AS_JSON_LINES=$(\n jq -R -c '\n select(length > 0) | # remove empty lines\n [match(\"\\\\[(.*?)\\\\]\"; \"g\").captures[].string # find content within square brackets\n | sub(\"^\\\\s+\";\"\") | sub(\"\\\\s+$\";\"\")] # trim content\n | { # convert to json object\n \"type\" : .[0],\n \"pid\" : .[1],\n \"terminal_name_suffix\" : .[2],\n \"user\" : .[3],\n \"tty\" : .[4],\n \"remote_hostname\" : .[5],\n \"remote_host\" : .[6],\n \"datetime\" : .[7],\n \"timestamp\" : (.[7] | strptime(\"%a %b %d %T %Y %Z\") | mktime)\n }' input.txt\n )\n\nwhile read line\ndo\n FILENAME=\"$(jq '.timestamp' <<< \"$line\").json\"\n CONTENT=$(jq <<< \"$line\") # format json\n echo \"writing file '$FILENAME'\"\n echo \"$CONTENT\" > \"$FILENAME\"\ndone <<< \"$INPUT_AS_JSON_LINES\"\n\nOutput\nwriting file '1667605229.json'\nwriting file '1667606116.json'\nwriting file '1670070485.json'\nwriting file '1670070492.json'\nwriting file '1670070492.json'\nwriting file '1670070538.json'\nwriting file '1670071882.json'\n\n"
] | [
1
] | [] | [] | [
"json",
"sed"
] | stackoverflow_0074670503_json_sed.txt |
Q:
NGROK, Angular + Springboot cors weird problem
Ilustration
I'm having this problem,
As the picture illustrates.
I have two addresses in Ngrok (Free), one pointing to localhost:4200 (angular)
And another pointing to localhost:8080 (Springboot).
So far so good.
I put the front pointing to the Ngrok(Back) address to make the requests. POST works, but GET is not working.
It is giving a CORS error. I've done everything and I still can't get it to work.
When I access the backend address through ngrok, it works.
request
On the first request it goes ok.
But when you update the front it gives the error.
@Component
public class CorsFilter extends OncePerRequestFilter {
@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException {
response.setHeader("Access-Control-Allow-Origin", "*");
response.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
response.setHeader("Access-Control-Max-Age", "7200");
response.setHeader("Access-Control-Allow-Headers", "Origin, Authorization, Content-Type, xsrf-token, X-Requested-With, Accept, X-Auth-Token");
response.addHeader("Access-Control-Expose-Headers", "xsrf-token");
if ("OPTIONS".equals(request.getMethod())) {
response.setStatus(HttpServletResponse.SC_OK);
} else {
filterChain.doFilter(request, response);
}
}
}
A:
Spring Boot has a security mechanism which lets you define your CORS (Cross-Origin Resource Sharing) rules.
Clients like Postman bypass this mechanism (CORS is enforced by the browser, not by the server alone), which explains requests working normally when testing with Postman.
I strongly recommend having a look at the Baeldung website, which is great for Spring development.
https://www.baeldung.com/spring-cors
There are multiple ways to set up CORS rules in Spring, and I don't really know how your project is set up, but you will probably find the right answer to your issue there.
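As an illustration, one of the approaches covered there is a global configuration through WebMvcConfigurer — a minimal sketch, where the allowed origin is a placeholder for your ngrok front-end URL:
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class WebConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        // Allow the Angular front end (served through ngrok) to call this API
        registry.addMapping("/**")
                .allowedOrigins("https://your-frontend.ngrok.io") // placeholder origin
                .allowedMethods("GET", "POST", "PUT", "DELETE", "OPTIONS")
                .allowedHeaders("*");
    }
}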
| NGROK, Angular + Springboot cors weird problem | Ilustration
I'm having this problem,
As the picture illustrates.
I have two addresses in Ngrok (Free), one pointing to localhost:4200 (angular)
And another pointing to localhost:8080 (Springboot).
So far so good.
I put the front pointing to the Ngrok(Back) address to make the requests. POST works, but GET is not working.
It is giving CORS error. I've done everything and I still can't do it.
When I access the backend address through ngrok, it works.
request
On the first request it goes ok.
But when you update the front it gives the error.
@Component
public class CorsFilter extends OncePerRequestFilter {
@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException {
response.setHeader("Access-Control-Allow-Origin", "*");
response.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
response.setHeader("Access-Control-Max-Age", "7200");
response.setHeader("Access-Control-Allow-Headers", "Origin, Authorization, Content-Type, xsrf-token, X-Requested-With, Accept, X-Auth-Token");
response.addHeader("Access-Control-Expose-Headers", "xsrf-token");
if ("OPTIONS".equals(request.getMethod())) {
response.setStatus(HttpServletResponse.SC_OK);
} else {
filterChain.doFilter(request, response);
}
}
}
| [
"Spring boot has some security mechanism which let you define a your CORS (Cross Origin Resource Sharing) security rules\nPostman bypasses this mechanism (by using different user agent, and other elements I won't be able to explain), which explains your request working normally when testing with postman.\nI strongly recommend you having a look at the Baeldung website, which is great for spring development.\nhttps://www.baeldung.com/spring-cors\nThere are multiple ways to set up CORS rules in spring, and I don't really know how your project is set up, but you will probably find the right answer to your issue there.\n"
] | [
1
] | [] | [] | [
"angular",
"ngrok",
"spring_boot"
] | stackoverflow_0074671043_angular_ngrok_spring_boot.txt |
Q:
mapping JSON data within arrays
I'm working on a project with React. Before I start, I'm a complete beginner and looked through the internet for a solution but didn't get any further.
My project is about a YouTube Channel where questions are being answered in the videos from people.
I have a webpage where you have multiple divs which represent Video 1, Video 2, Video 3... When clicking on one div/video, all questions from that YouTube video are listed, and when clicking on a question, a video player shows up and plays the YouTube video from a given time.
All my questions and their timestamp links are stored in one JSON file.
For example:
[
{
"id": 1,
"title": "Video 1",
"video_leght": "00:50:00",
"date": "20.05.2010",
"questions": [
{
"id": 1,
"question": "Question 1 ",
"url": "Link"
},
{
"id": 2,
"question": "Question 2",
"url": "Link"
},
{
"id": 3,
"question": "Question 3",
"url": "Link"
}
]
},
{
"id": 2,
"title": "Video 2",
"video_leght": "01:00:00",
"date": "14.07.2016",
"questions":[
{
"id": 1,
"question": "Question 1 ",
"url": "Link"
},
{
"id": 2,
"question": "Question 2",
"url": "Link"
},
{
"id": 3,
"question": "Question 3",
"url": "Link"
}
]
}
]
With the map function I was able to view the different videos (video 1, video 2...), but when I click on one of the videos, I can't figure out how to show only the questions from the selected video, for example only the questions from Video 2.
This is how I used to show Video 1, Video 2, Video 3... (I imported DataList from the JSON file location)
<div>
{DataList.map((ListItem, index) => {
return (
<div key={index}>
<h3>{ListItem.title}</h3>
</div>
);
})}
</div>
Please bear in mind that I am a complete beginner, and simply telling me to read some page how something works will not help me much. Most of the time I learn by looking at the given code and understanding it how it works or explaining it to me in your own words.
I have a .js File called "Pitanja.js" and here are all Videos shown (Video 1, Video 2...):
import React from "react";
import style from "./Pitanja.module.css";
import DataList from "../data/video_list.json";
import { useNavigate, Outlet} from "react-router-dom";
function Pitanja() {
const navigate = useNavigate();
return (
<div className={style.mainCard}>
{DataList.map((ListItem, index) => {
return (
<div
onClick={() => {
navigate(`/pitanja/${ListItem.id}`);
}}
key={index}
className={style.Card}
>
<h3 className={style.Title}>{ListItem.title}</h3>
<h3 className={style.video_leght}>{ListItem.video_leght}</h3>
<h4 className={style.Date}>{ListItem.date}</h4>
</div>
);
})}
<Outlet/>
</div>
);
}
export default Pitanja;
When clicking one Video, then the webpage routes to a Card.js file where the questions should be shown from the selected Video:
import React, { useState } from "react";
import DataList from "../data/video_list.json";
import style from "./Card.module.css";
import ReactPlayer from "react-player";
import Pitanja from "../Pages/Pitanja";
function Card() {
const [playUrl, setPlayUrl] = useState("");
const [isPlaying, setIsPlaying] = useState(true);
return (
<div className={style.ViewContent}>
<div className={style.mainCard}>
{DataList.map((ListItem, index) => {
return (
<div
onClick={() => setPlayUrl(ListItem.url)}
key={index}
className={style.Card}
>
<h3 className={style.question}>{ListItem.Pitanja}</h3>
</div>
);
})}
</div>
<div className={style.VideoPlayer}>
<ReactPlayer url={playUrl} controls={true} playing={isPlaying} />
</div>
</div>
);
}
export default Card;
My App.js contains the following code:
import "./App.css";
import React from "react";
import Home from "./Pages/Home";
import Pitanja from "./Pages/Pitanja";
import Card from "./components/Card";
import { BrowserRouter as Router, Route, Routes } from "react-router-dom";
function App() {
return (
<Router>
<div className="App">
<Nav />
<Routes>
<Route path="/" element={<Home />} />
<Route path="pitanja" element={<Pitanja />}>
</Route>
<Route path="pitanja/:id" element={<Card />} />
</Routes>
</div>
</Router>
);
}
export default App;
And navigation bar called Nav.js:
import React from "react";
import { Link } from "react-router-dom";
import style from "./Nav.module.css";
function Nav() {
const NavStyle = {
color: "white",
};
return (
<nav>
<h3>logo</h3>
<ul className={style.nav_links}>
<Link style={NavStyle} to={"/"}>
<li>Pocetna</li>
</Link>
<Link style={NavStyle} to={"/pitanja"}>
<li>Pitanja</li>
</Link>
</ul>
<input className="SearchBox"></input>
</nav>
);
}
export default Nav;
I didn't do much regarding search function and other stuff...
This is the first page with the videos:
This is the second page with all the questions of that Video:
A:
Hello! React is also new to me, but if I have understood you correctly, you have a problem with displaying the questions and url for a specific video. So I think your problem is that you have a nested array inside your JSON and you cannot access it the way you are trying.
In Card.js component you are doing this:
{DataList.map((ListItem, index) => {
return (
<div
onClick={() => setPlayUrl(ListItem.url)}
key={index}
className={style.Card}
>
<h3 className={style.question}>{ListItem.Pitanja}</h3>
</div>
);
})}
You cannot access ListItem.url this way, because it is inside the nested questions array.
Maybe you can do something like this:
export default function App() {
return (
<div className="App">
{videos.map((video, index) => {
return (
<div key={video.id}>
<div>{video.title}</div>
<div>{video.video_leght}</div>
{video.questions.map((q) => {
return (
<>
<div key={q.id}>
<div>{q.url}</div>
<div>{q.question}</div>
<button onClick={() => console.log(q.url)}>click</button>
</div>
</>
);
})}
</div>
);
})}
</div>
);
}
Please, if I am dead wrong about this, can someone else help me understand? :D
Thank you.
******** Updated Codesandbox after your comment****
Codesandbox
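Building on that, here is a rough sketch of how the Card route could show only the selected video's questions — assuming react-router-dom's useParams is used to read the :id segment defined in App.js (styles and class names are left out for brevity):
import React, { useState } from "react";
import { useParams } from "react-router-dom";
import ReactPlayer from "react-player";
import DataList from "../data/video_list.json";

function Card() {
  const { id } = useParams(); // ":id" from the route "pitanja/:id"
  const [playUrl, setPlayUrl] = useState("");

  // useParams returns strings, while the JSON ids are numbers
  const video = DataList.find((v) => v.id === Number(id));

  if (!video) return <p>Video not found</p>;

  return (
    <div>
      {video.questions.map((q) => (
        <div key={q.id} onClick={() => setPlayUrl(q.url)}>
          <h3>{q.question}</h3>
        </div>
      ))}
      <ReactPlayer url={playUrl} controls playing />
    </div>
  );
}

export default Card;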
| mapping JSON data within arrays | I'm working on a project with React. Before I start, I'm a complete beginner and looked through the internet for a solution but didn't get any further.
My project is about a YouTube Channel where questions are being answered in the videos from people.
I have a webpage where you have multiple divs which represent Video 1, Video 2, Video 3... and when clicking on one div/video then all questions are listed from that YouTube-Video and when clicking on a question then a Video-Player shows and plays the YouTube video from at a given time.
All my questions and their timestamp links are stored in one JSON file.
For example:
[
{
"id": 1,
"title": "Video 1",
"video_leght": "00:50:00",
"date": "20.05.2010",
"questions": [
{
"id": 1,
"question": "Question 1 ",
"url": "Link"
},
{
"id": 2,
"question": "Question 2",
"url": "Link"
},
{
"id": 3,
"question": "Question 3",
"url": "Link"
}
]
},
{
"id": 2,
"title": "Video 2",
"video_leght": "01:00:00",
"date": "14.07.2016",
"questions":[
{
"id": 1,
"question": "Question 1 ",
"url": "Link"
},
{
"id": 2,
"question": "Question 2",
"url": "Link"
},
{
"id": 3,
"question": "Question 3",
"url": "Link"
}
]
}
]
With the map function I was able to view the different videos (video 1, video 2...) but when I click on one of the videos, I can't figure it out to show only the questions from the selected video for example to show only the questions from Video 2.
This is how I used to show Video 1, Video 2, Video 3... (I imported DataList from the JSON file location)
<div>
{DataList.map((ListItem, index) => {
return (
<div key={index}>
<h3>{ListItem.title)}</h3>
</div>
);
})}
</div>
Please bear in mind that I am a complete beginner, and simply telling me to read some page how something works will not help me much. Most of the time I learn by looking at the given code and understanding it how it works or explaining it to me in your own words.
I have a .js File called "Pitanja.js" and here are all Videos shown (Video 1, Video 2...):
import React from "react";
import style from "./Pitanja.module.css";
import DataList from "../data/video_list.json";
import { useNavigate, Outlet} from "react-router-dom";
function Pitanja() {
const navigate = useNavigate();
return (
<div className={style.mainCard}>
{DataList.map((ListItem, index) => {
return (
<div
onClick={() => {
navigate(`/pitanja/${ListItem.id}`);
}}
key={index}
className={style.Card}
>
<h3 className={style.Title}>{ListItem.title}</h3>
<h3 className={style.video_leght}>{ListItem.video_leght}</h3>
<h4 className={style.Date}>{ListItem.date}</h4>
</div>
);
})}
<Outlet/>
</div>
);
}
export default Pitanja;
When clicking one Video, then the webpage routes to a Card.js file where the questions should be shown from the selected Video:
import React, { useState } from "react";
import DataList from "../data/video_list.json";
import style from "./Card.module.css";
import ReactPlayer from "react-player";
import Pitanja from "../Pages/Pitanja";
function Card() {
const [playUrl, setPlayUrl] = useState("");
const [isPlaying, setIsPlaying] = useState(true);
return (
<div className={style.ViewContent}>
<div className={style.mainCard}>
{DataList.map((ListItem, index) => {
return (
<div
onClick={() => setPlayUrl(ListItem.url)}
key={index}
className={style.Card}
>
<h3 className={style.question}>{ListItem.Pitanja}</h3>
</div>
);
})}
</div>
<div className={style.VideoPlayer}>
<ReactPlayer url={playUrl} controls={true} playing={isPlaying} />
</div>
</div>
);
}
export default Card;
My App.js contains the following code:
import "./App.css";
import React from "react";
import Home from "./Pages/Home";
import Pitanja from "./Pages/Pitanja";
import Card from "./components/Card";
import { BrowserRouter as Router, Route, Routes } from "react-router-dom";
function App() {
return (
<Router>
<div className="App">
<Nav />
<Routes>
<Route path="/" element={<Home />} />
<Route path="pitanja" element={<Pitanja />}>
</Route>
<Route path="pitanja/:id" element={<Card />} />
</Routes>
</div>
</Router>
);
}
export default App;
And navigation bar called Nav.js:
import React from "react";
import { Link } from "react-router-dom";
import style from "./Nav.module.css";
function Nav() {
const NavStyle = {
color: "white",
};
return (
<nav>
<h3>logo</h3>
<ul className={style.nav_links}>
<Link style={NavStyle} to={"/"}>
<li>Pocetna</li>
</Link>
<Link style={NavStyle} to={"/pitanja"}>
<li>Pitanja</li>
</Link>
</ul>
<input className="SearchBox"></input>
</nav>
);
}
export default Nav;
I didn't do much regarding search function and other stuff...
This is the first page with the videos:
This is the second page with all the questions of that Video:
| [
"Hello React is also new to me but if i have understood you good you have problem with displaying questions and url for specific video. So i think your problem is that you have nested Array inside you JSON and you can not access it the way you are trying.\nIn Card.js component you are doing this:\n{DataList.map((ListItem, index) => {\n return (\n <div\n onClick={() => setPlayUrl(ListItem.url)}\n key={index}\n className={style.Card}\n >\n <h3 className={style.question}>{ListItem.Pitanja}</h3>\n </div>\n );\n })}\n\nYou can not access ListItem.url as it is inside another Array.\nMaybe you can do something like this:\nexport default function App() {\n return (\n <div className=\"App\">\n {videos.map((video, index) => {\n return (\n <div key={video.id}>\n <div>{video.title}</div>\n <div>{video.video_leght}</div>\n {video.questions.map((q) => {\n return (\n <>\n <div key={q.id}>\n <div>{q.url}</div>\n <div>{q.question}</div>\n <button onClick={() => console.log(q.url)}>click</button>\n </div>\n </>\n );\n })}\n </div>\n );\n })}\n </div>\n );\n} \n\nPlease if i am dead wrong about this can someone else help me understand? :D\nThank you.\n******** Updated Codesandbox after your comment****\nCodesandbox\n"
] | [
1
] | [] | [] | [
"json",
"reactjs"
] | stackoverflow_0074649418_json_reactjs.txt |
Q:
Showing a popup that depends on the authentication status
I'm trying to implement a page where some profiles are listed on card-like components with their names, ids and pictures. On this page unregistered users are not allowed to see the details of a profile, so I have to show them a popup with a message like "If you want to see details you have to log in." I'm keeping the authentication status in a variable for now, just like "auth = true" or "auth = false".
If auth is false then the popup will be shown to the user
If auth is true then nothing happens
I'm using Modal from the Material UI for popup, is slightly changed.
import * as React from 'react';
import Backdrop from '@mui/material/Backdrop';
import Box from '@mui/material/Box';
import Modal from '@mui/material/Modal';
import Fade from '@mui/material/Fade';
import Button from '@mui/material/Button';
import Typography from '@mui/material/Typography';
import "./popup.css";
export default function TransitionsModal(auth) {
const [open,
setOpen] = React.useState(auth);
const handleOpen = () => setOpen(true);
const handleClose = () => setOpen(false);
return (
<div>
<Modal
aria-labelledby="transition-modal-title"
aria-describedby="transition-modal-description"
open={open}
onClose={handleClose}
closeAfterTransition
BackdropComponent={Backdrop}
BackdropProps={{
timeout: 500
}}>
<Fade in={open}>
<div className="popup-container" id='blur'>
<h1>Profil detaylarını görebilmek için lütfen giriş yapınız.</h1 >
<div className='popup-buttons'>
<a className='giris-yap-wrapper' href="/signIn">
<input
type="submit"
value="Giriş Yap"
className="popup-input-login"
onclick="togglePopup()"/>
</a>
<input type="submit" value="Vazgeç" className="popup-input" onClick={handleClose}/>
</div>
</div>
</Fade>
</Modal>
</div>
);
}
and here is the code for profile card
import React from 'react';
import {useState} from 'react';
import './card.css';
import HeartButton from "./HeartButton";
import Modal from './Pop';
const Card = ({ profile, auth }) => {
const [isShown, setIsShown] = useState(auth);
const handleClick = event => {
setIsShown(auth);
alert(isShown+ " "+ auth);
}
return (
<div className='influencer-profile-card' onClick={handleClick}>
{!isShown && <Modal auth={ !auth } />}
<div>
<HeartButton/>
</div>
<div className="profile_picture">
<img src={profile.picture !== 'N/A' ? profile.picture : 'https://via.placeholder.com/400'} alt={profile.name} />
</div>
<div className="info">
<h3>{profile.name}</h3>
<h4>{profile.ID + " • " + profile.category}</h4>
</div>
</div>
)
}
export default Card
The above code only shows the popup on the user's first visit. How can I fix this?
A:
I may have looked at this wrong.
You can pass the auth variable as a prop to the Card component, and use this prop to determine whether or not to show the modal. This way, the modal will be shown if auth is false, and hidden if auth is true.
In the Card component, update the Modal component to use the auth prop that you're passing in:
const Card = ({ profile, auth }) => {
// You can remove the useState hook and isShown variable,
// as you won't need them anymore
// const [isShown, setIsShown] = useState(auth);
// You can also remove this handleClick function,
// as you won't need it anymore
// const handleClick = event => {
// setIsShown(auth);
// alert(isShown+ " "+ auth);
// }
return (
<div className='influencer-profile-card'>
{/* Use the auth prop here to determine whether to show the modal or not */}
{!auth && <Modal auth={ !auth } />}
<div>
<HeartButton/>
</div>
<div className="profile_picture">
<img src={profile.picture !== 'N/A' ? profile.picture : 'https://via.placeholder.com/400'} alt={profile.name} />
</div>
<div className="info">
<h3>{profile.name}</h3>
<h4>{profile.ID + " • " + profile.category}</h4>
</div>
</div>
)
}
export default Card
Then, when you use the Card component, make sure to pass the auth variable as a prop:
<Card profile={profile} auth={auth} />
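If the popup also needs to reappear on every later click (not only on the first render), another option is to let the Card own an explicit "show popup" state and reset it on each click, instead of seeding state from auth once. A rough sketch only — it assumes the modal component is adapted to take open and onClose props, which is not how the original TransitionsModal is written:
const Card = ({ profile, auth }) => {
  // false until an unauthenticated user clicks the card
  const [showLoginPopup, setShowLoginPopup] = React.useState(false);

  const handleClick = () => {
    // decide on every click, not just with the initial state value
    if (!auth) setShowLoginPopup(true);
  };

  return (
    <div className='influencer-profile-card' onClick={handleClick}>
      {showLoginPopup && (
        <Modal open={showLoginPopup} onClose={() => setShowLoginPopup(false)} />
      )}
      {/* ...picture, name, etc. as before... */}
    </div>
  );
}
Inside the modal component you would then read open and onClose from props (function TransitionsModal({ open, onClose }) { ... }) rather than calling useState(auth), which only uses auth as the initial value and never reacts to later clicks.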
| Showing a popup that depends on the authentication status | I'm trying the implement a page that some profiles are listed on card like components with their names, ids and pictures. In this page unregistered users are not allowed to see details of the profile, so I have to show them a popup to inform them a message like "If you want to see details you have to log in." I'm keeping authentication status in a variable for now just like "auth = true" or "auth = false".
If auth is false then the popup will be showed to user
If auth is true then nothing happens
I'm using Modal from the Material UI for popup, is slightly changed.
import * as React from 'react';
import Backdrop from '@mui/material/Backdrop';
import Box from '@mui/material/Box';
import Modal from '@mui/material/Modal';
import Fade from '@mui/material/Fade';
import Button from '@mui/material/Button';
import Typography from '@mui/material/Typography';
import "./popup.css";
export default function TransitionsModal(auth) {
const [open,
setOpen] = React.useState(auth);
const handleOpen = () => setOpen(true);
const handleClose = () => setOpen(false);
return (
<div>
<Modal
aria-labelledby="transition-modal-title"
aria-describedby="transition-modal-description"
open={open}
onClose={handleClose}
closeAfterTransition
BackdropComponent={Backdrop}
BackdropProps={{
timeout: 500
}}>
<Fade in={open}>
<div className="popup-container" id='blur'>
<h1>Profil detaylarını görebilmek için lütfen giriş yapınız.</h1 >
<div className='popup-buttons'>
<a className='giris-yap-wrapper' href="/signIn">
<input
type="submit"
value="Giriş Yap"
className="popup-input-login"
onclick="togglePopup()"/>
</a>
<input type="submit" value="Vazgeç" className="popup-input" onClick={handleClose}/>
</div>
</div>
</Fade>
</Modal>
</div>
);
}
and here is the code for profile card
import React from 'react';
import {useState} from 'react';
import './card.css';
import HeartButton from "./HeartButton";
import Modal from './Pop';
const Card = ({ profile, auth }) => {
const [isShown, setIsShown] = useState(auth);
const handleClick = event => {
setIsShown(auth);
alert(isShown+ " "+ auth);
}
return (
<div className='influencer-profile-card' onClick={handleClick}>
{!isShown && <Modal auth={ !auth } />}
<div>
<HeartButton/>
</div>
<div className="profile_picture">
<img src={profile.picture !== 'N/A' ? profile.picture : 'https://via.placeholder.com/400'} alt={profile.name} />
</div>
<div className="info">
<h3>{profile.name}</h3>
<h4>{profile.ID + " • " + profile.category}</h4>
</div>
</div>
)
}
export default Card
The above code is only shows the popup at user's first visit. How can I fix this?
| [
"I may have looked at this wrong.\nYou can pass the auth variable as a prop to the Card component, and use this prop to determine whether or not to show the modal. This way, the modal will be shown if auth is false, and hidden if auth is true.\nIn the Card component, update the Modal component to use the auth prop that you're passing in:\nconst Card = ({ profile, auth }) => {\n // You can remove the useState hook and isShown variable,\n // as you won't need them anymore\n // const [isShown, setIsShown] = useState(auth);\n\n // You can also remove this handleClick function,\n // as you won't need it anymore\n // const handleClick = event => {\n // setIsShown(auth);\n // alert(isShown+ \" \"+ auth);\n // }\n\n return (\n <div className='influencer-profile-card' onClick={handleClick}>\n {/* Use the auth prop here to determine whether to show the modal or not */}\n {!auth && <Modal auth={ !auth } />}\n <div>\n <HeartButton/>\n </div>\n\n <div className=\"profile_picture\">\n <img src={profile.picture !== 'N/A' ? profile.picture : 'https://via.placeholder.com/400'} alt={profile.name} />\n </div>\n\n <div className=\"info\">\n <h3>{profile.name}</h3>\n <h4>{profile.ID + \" • \" + profile.category}</h4>\n </div>\n\n </div>\n )\n}\n\nexport default Card\n\nThen, when you use the Card component, make sure to pass the auth variable as a prop:\n<Card profile={profile} auth={auth} />\n\n"
] | [
0
] | [] | [] | [
"javascript",
"popup",
"react_hooks",
"reactjs"
] | stackoverflow_0074671152_javascript_popup_react_hooks_reactjs.txt |
Q:
Getting Time range between non intersecting ranges
I have the following timelines :
7 a.m. --------------------- 12 p.m.        2 p.m. .................. 10 p.m.
          10-------11                                 3------5
            closed                                     closed
the output should be the non-intersecting time ranges:
7-10 a.m., 11 a.m.-12 p.m., 2-3 p.m., 5-10 p.m.
I tried the minus and subtract methods on Ranges but it didn't work
A tricky part could be the following case
7 a.m. --------------------- 12 p.m.        2 p.m. .................. 10 p.m.
          10----------------------------------------5
                            closed
the output should be the non-intersecting time ranges:
7-10 a.m., 5-10 p.m.
Any Idea for kotlin implementation?
I tried the minus and subtract methods on Ranges but it didn't work
A:
Here's a simple approach to find the non-intersecting time ranges:
Create a list of all the time ranges (for example, in the first example, the list would be [7-12 a.m., 2-10 a.m.])
Sort the list by the start time of each range (in the first example, the list would be [7-12 a.m., 2-10 a.m.])
Loop through the list of ranges and compare each range with the next range to see if they intersect. If they do, merge the two ranges into one range.
In the first example, the first range (7-12 a.m.) would be compared with the second range (2-10 a.m.), and since they intersect, they would be merged into one range (7-10 a.m.).
Continue looping through the list and merging ranges until no more intersecting ranges are found. In the first example, after merging the first two ranges, the resulting list would be [7-10 a.m., 11-12 a.m., 2-5 p.m.].
Sample implementation:
fun findNonIntersectingRanges(ranges: List<Pair<Int, Int>>): List<Pair<Int, Int>> {
// Create a list of ranges
var nonIntersectingRanges = ranges.toMutableList()
// Sort the list by the start time of each range
nonIntersectingRanges.sortBy { it.first }
// Loop through the list of ranges and compare each range with the next range to see if they intersect
for (i in 0 until nonIntersectingRanges.size - 1) {
val currentRange = nonIntersectingRanges[i]
val nextRange = nonIntersectingRanges[i + 1]
// If the current range and the next range intersect, merge them into one range
if (currentRange.second >= nextRange.first) {
val mergedRange = currentRange.first to nextRange.second
nonIntersectingRanges[i] = mergedRange
nonIntersectingRanges.removeAt(i + 1)
}
}
return nonIntersectingRanges
}
And call it like this:
val ranges = listOf(7 to 12, 2 to 10)
val nonIntersectingRanges = findNonIntersectingRanges(ranges)
In the first example, the resulting list of non-intersecting ranges would be [7-10 a.m., 11-12 a.m., 2-5 p.m.]. In the second example, the resulting list would be [7-10 a.m., 5-10 p.m.].
A:
Sounds like a pretty common case and I suspect there are some existing algorithms for it, but nothing comes to mind off the top of my head.
My idea is to first transform both lists of ranges into a single list of opening/closing "events", ordered by time. The start of an opening range increases the "openess" by +1 while its end decreases it (-1). Start of a closing range also decreases "openess" while its end increases it. Then we iterate the events in the time order, keeping the information on what is the current "openess" level. Whenever the "openess" level is 1, that means we are in the middle of an opening range, but not inside a closing range, so we are entirely open.
Assuming both lists of ranges are initially properly ordered, as in your example, I believe it should be doable in linear time and even without this intermediary list of events. However, such implementation would be pretty complicated to cover all possible states, so I decided to go with a simpler solution which is I believe O(n * log(n)). Also, this implementation requires that opening ranges do not overlap with each other, the same for closing ranges:
fun main() {
// your first example
println(listOf(Range(7, 12), Range(14, 22)) - listOf(Range(10, 11), Range(15, 17)))
// second example
println(listOf(Range(7, 12), Range(14, 22)) - listOf(Range(10, 17)))
// two closed ranges "touch" each other
println(listOf(Range(8, 16)) - listOf(Range(10, 11), Range(11, 13)))
// both open and close range starts at the same time
println(listOf(Range(8, 16)) - listOf(Range(8, 12)))
}
data class Range(val start: Int, val end: Int)
operator fun List<Range>.minus(other: List<Range>): List<Range> {
// key is the time, value is the change of "openess" at this time
val events = sortedMapOf<Int, Int>()
forEach { (start, end) ->
events.merge(start, 1, Int::plus)
events.merge(end, -1, Int::plus)
}
other.forEach { (start, end) ->
events.merge(start, -1, Int::plus)
events.merge(end, 1, Int::plus)
}
val result = mutableListOf<Range>()
var currOpeness = 0
var currStart = 0
for ((time, change) in events) {
// we were open and now closing
if (currOpeness == 1 && change < 0) {
result += Range(currStart, time)
}
currOpeness += change
// we were closed and now opening
if (currOpeness == 1 && change > 0) {
currStart = time
}
}
return result
}
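For reference, tracing the algorithm above by hand (my own walk-through, not output taken from the original post), the four println calls should produce roughly:
[Range(start=7, end=10), Range(start=11, end=12), Range(start=14, end=15), Range(start=17, end=22)]
[Range(start=7, end=10), Range(start=17, end=22)]
[Range(start=8, end=10), Range(start=13, end=16)]
[Range(start=12, end=16)]
which matches the expected "7-10 a.m., 11 a.m.-12 p.m., 2-3 p.m., 5-10 p.m." and "7-10 a.m., 5-10 p.m." answers when 14-22 is read as 2-10 p.m.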
| Getting Time range between non intersecting ranges | I have the following timelines :
7 a.m --------------------- 12 a.m. 2 am .................. 10 a.m
10-------11 3------5
closed closed
the output should be the non-intersecting time ranges:
7-10 a.m, 11 -12 a.m, 2-3 p.m, 5-10 p.m
I tried to minus and subtract method for Ranges but didn't work
A tricky part could be the following case
7 a.m --------------------- 12 a.m. 2 am .................. 10 a.m
10----------------------------------------5
closed
the output should be the non-intersecting time ranges:
7-10 a.m, 5-10 p.m
Any Idea for kotlin implementation?
I tried to minus and subtract method for Ranges but didn't work
| [
"Here's a simple approach to find the non-intersecting time ranges:\nCreate a list of all the time ranges (for example, in the first example, the list would be [7-12 a.m., 2-10 a.m.])\nSort the list by the start time of each range (in the first example, the list would be [7-12 a.m., 2-10 a.m.])\nLoop through the list of ranges and compare each range with the next range to see if they intersect. If they do, merge the two ranges into one range.\nIn the first example, the first range (7-12 a.m.) would be compared with the second range (2-10 a.m.), and since they intersect, they would be merged into one range (7-10 a.m.).\nContinue looping through the list and merging ranges until no more intersecting ranges are found. In the first example, after merging the first two ranges, the resulting list would be [7-10 a.m., 11-12 a.m., 2-5 p.m.].\nSample implementation:\nfun findNonIntersectingRanges(ranges: List<Pair<Int, Int>>): List<Pair<Int, Int>> {\n // Create a list of ranges\n var nonIntersectingRanges = ranges.toMutableList()\n \n // Sort the list by the start time of each range\n nonIntersectingRanges.sortBy { it.first }\n \n // Loop through the list of ranges and compare each range with the next range to see if they intersect\n for (i in 0 until nonIntersectingRanges.size - 1) {\n val currentRange = nonIntersectingRanges[i]\n val nextRange = nonIntersectingRanges[i + 1]\n \n // If the current range and the next range intersect, merge them into one range\n if (currentRange.second >= nextRange.first) {\n val mergedRange = currentRange.first to nextRange.second\n nonIntersectingRanges[i] = mergedRange\n nonIntersectingRanges.removeAt(i + 1)\n }\n }\n \n return nonIntersectingRanges\n}\n\nAnd call it like this:\nval ranges = listOf(7 to 12, 2 to 10)\nval nonIntersectingRanges = findNonIntersectingRanges(ranges)\n\nIn the first example, the resulting list of non-intersecting ranges would be [7-10 a.m., 11-12 a.m., 2-5 p.m.]. In the second example, the resulting list would be [7-10 a.m., 5-10 p.m.].\n",
"Sounds like a pretty common case and I suspect there are some existing algorithms for it, but nothing comes out of top of my head.\nMy idea is to first transform both lists of ranges into a single list of opening/closing \"events\", ordered by time. The start of an opening range increases the \"openess\" by +1 while its end decreases it (-1). Start of a closing range also decreases \"openess\" while its end increases it. Then we iterate the events in the time order, keeping the information on what is the current \"openess\" level. Whenever the \"openess\" level is 1, that means we are in the middle of an opening range, but not inside a closing range, so we are entirely open.\nAssuming both lists of ranges are initially properly ordered, as in your example, I believe it should be doable in linear time and even without this intermediary list of events. However, such implementation would be pretty complicated to cover all possible states, so I decided to go with a simpler solution which is I believe O(n * log(n)). Also, this implementation requires that opening ranges do not overlap with each other, the same for closing ranges:\nfun main() {\n // your first example\n println(listOf(Range(7, 12), Range(14, 22)) - listOf(Range(10, 11), Range(15, 17)))\n // second example\n println(listOf(Range(7, 12), Range(14, 22)) - listOf(Range(10, 17)))\n\n // two close rangs \"touch\" each other\n println(listOf(Range(8, 16)) - listOf(Range(10, 11), Range(11, 13)))\n // both open and close range starts at the same time\n println(listOf(Range(8, 16)) - listOf(Range(8, 12)))\n}\n\ndata class Range(val start: Int, val end: Int)\n\noperator fun List<Range>.minus(other: List<Range>): List<Range> {\n // key is the time, value is the change of \"openess\" at this time\n val events = sortedMapOf<Int, Int>()\n forEach { (start, end) ->\n events.merge(start, 1, Int::plus)\n events.merge(end, -1, Int::plus)\n }\n other.forEach { (start, end) ->\n events.merge(start, -1, Int::plus)\n events.merge(end, 1, Int::plus)\n }\n\n val result = mutableListOf<Range>()\n var currOpeness = 0\n var currStart = 0\n for ((time, change) in events) {\n // we were open and now closing\n if (currOpeness == 1 && change < 0) {\n result += Range(currStart, time)\n }\n currOpeness += change\n // we were closed and now opening\n if (currOpeness == 1 && change > 0) {\n currStart = time\n }\n }\n\n return result\n}\n\n"
] | [
0,
0
] | [] | [] | [
"kotlin"
] | stackoverflow_0074670791_kotlin.txt |
Q:
the datasource for gridview Gridview1 didn't have any properties or attributes from which to generate columns. Ensure that your datasource has content
This is the error I'm getting. The data is being retrieved through WebAPI, but due to a gridview setting, it is not displaying the retrieved data.
protected void Page_Load(object sender, EventArgs e)
{
try
{
var webRequest = (HttpWebRequest)WebRequest.Create("https://localhost:44342/api/author");
var webResponse = (HttpWebResponse)webRequest.GetResponse();
if ((webResponse.StatusCode == HttpStatusCode.OK) && (webResponse.ContentLength > 0))
{
var reader = new StreamReader(webResponse.GetResponseStream());
string s = reader.ReadToEnd();
var arr = JsonConvert.DeserializeObject<JArray>(s);
//Console.WriteLine(arr);
GridView1.DataSource = arr;
GridView1.DataBind();
}
else
{
MessageBox.Show(string.Format("Status code == {0}", webResponse.StatusCode));
}
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
webform.aspx file gridview code
<asp:GridView ID="GridView1" Class="mygridview" runat="server" OnSelectedIndexChanged="GridView1_SelectedIndexChanged" BorderStyle="Solid" CellPadding="10" >
<Columns>
<asp:BoundField DataField="author_id" HeaderText="AuthorID" />
<asp:BoundField DataField="author_name" HeaderText="firstName" />
</Columns>
</asp:GridView>
Controller code:
public IHttpActionResult getauthor()
{
librariaEntities lb = new librariaEntities();
var results = lb.Authors.ToList();
return Ok(results);
}
A:
Okay, this looks pretty simple.
The GridView control needs to bind to a list of objects. You're doing this with the JArray. So far, so good.
But the GridView doesn't know how to get properties called "author_id" and "author_name" from the objects in the JArray. There are no such strongly-typed properties in those objects.
Can you deserialize your return value from the web API as a list (or array) of Author objects?
Incidentally, you're working with ASP.Net WebForms, not MVC.
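A minimal sketch of that idea — the Author DTO below is an assumption based on the BoundField names, so adjust the property names and types to whatever the API really returns:
// Hypothetical DTO whose property names match the JSON fields / DataField values
public class Author
{
    public int author_id { get; set; }
    public string author_name { get; set; }
}

// ...then in Page_Load, bind a typed list instead of a JArray
// (requires using System.Collections.Generic;):
var authors = JsonConvert.DeserializeObject<List<Author>>(s);
GridView1.DataSource = authors;
GridView1.DataBind();
With a strongly-typed list, the GridView can resolve DataField="author_id" and DataField="author_name" via reflection, which is exactly what the "didn't have any properties or attributes" error is complaining about.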
| the datasource for gridview Gridview1 didn't have any properties or attributes from which to generate columns. Ensure that your datasource has content | This is the error I'm getting. The data is being retrieved through WebAPI, but due to a gridview setting, it is not displaying the retrieved data.
protected void Page_Load(object sender, EventArgs e)
{
try
{
var webRequest = (HttpWebRequest)WebRequest.Create("https://localhost:44342/api/author");
var webResponse = (HttpWebResponse)webRequest.GetResponse();
if ((webResponse.StatusCode == HttpStatusCode.OK) && (webResponse.ContentLength > 0))
{
var reader = new StreamReader(webResponse.GetResponseStream());
string s = reader.ReadToEnd();
var arr = JsonConvert.DeserializeObject<JArray>(s);
//Console.WriteLine(arr);
GridView1.DataSource = arr;
GridView1.DataBind();
}
else
{
MessageBox.Show(string.Format("Status code == {0}", webResponse.StatusCode));
}
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
webform.aspx file gridview code
<asp:GridView ID="GridView1" Class="mygridview" runat="server" OnSelectedIndexChanged="GridView1_SelectedIndexChanged" BorderStyle="Solid" CellPadding="10" >
<Columns>
<asp:BoundField DataField="author_id" HeaderText="AuthorID" />
<asp:BoundField DataField="author_name" HeaderText="firstName" />
</Columns>
</asp:GridView>
Controller code:
public IHttpActionResult getauthor()
{
librariaEntities lb = new librariaEntities();
var results = lb.Authors.ToList();
return Ok(results);
}
| [
"Okay, this looks pretty simple.\nThe GridView control needs to bind to a list of objects. You're doing this with the JArray. So far, so good.\nBut the GridView doesn't know how to get properties called \"author_id\" and \"author_name\" from the objects in the JArray. There are no such strongly-typed properties in those objects.\nCan you deserialize your return value from the web API as a list (or array) of Author objects?\nIncidentally, you're working with ASP.Net WebForms, not MVC.\n"
] | [
0
] | [] | [] | [
"asp.net",
"asp.net_core",
"asp.net_web_api",
"gridview"
] | stackoverflow_0074671212_asp.net_asp.net_core_asp.net_web_api_gridview.txt |
Q:
How to use data collection in nested loop
I can collect all of the input data, but I just can't seem to do anything with it. I would like to print all of the data, or add and subtract the numbers and perform calculations. I am not sure how to work with the nested data.
import java.util.Scanner;
public class Names {
public static void main(String[] args) {
Scanner input = new Scanner(System.in);
System.out.println("How many students do you want to enter?");
String[] names = new String[2];
for (int stnumber = 0; stnumber < 2; stnumber++) {
System.out.println("Enter the first student " + (stnumber + 1));
names[stnumber] = input.next();
String[] quiz = new String[2];
for (int qznumber = 0; qznumber < 2; qznumber++) {
System.out.println("Enter quiz mark " + (qznumber + 1));
quiz[qznumber] = input.next();
}
String[] midterm = new String[1];
for (int mtnumber = 0; mtnumber < 1; mtnumber++) {
System.out.println("Enter midterm mark " + (mtnumber + 1));
midterm[mtnumber] = input.next();
}
String[] myfinal = new String[1];
for (int fnnumber = 0; fnnumber < 1; fnnumber++) {
System.out.println("Enter final mark " + (fnnumber + 1));
myfinal[fnnumber] = input.next();
}
}
input.close();
System.out.println("The students marks are");
for (int stnumber = 0; stnumber < 2; stnumber++) {
System.out.println(names[stnumber]);
}
}
}
A:
You should define a class to represent a Student:
public class Student {
String name;
String[] quizzes;
String[] midterms;
String[] finals;
}
Then, you will need a constructor to create a new instance of a Student; here is an example:
public Student(String name) {
this.name = name;
this.quizzes = new String[2];
this.midterms = new String[2];
this.finals = new String[1];
}
Now you can create a new student in your main method like this:
Student newStudent = new Student("put the name here");
And store the quizzes, midterms, etc in that instance of the Student:
newStudent.midterms[0] = "midterm 1 grade";
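To actually print everything or do the calculations the question asks about, the marks (stored here as String) have to be parsed into numbers first. A rough sketch, assuming the Student class above with those field names and that the user types numeric values:
// collect the students in an array, then loop over it afterwards
Student[] students = new Student[2];
// ... fill students[i], students[i].quizzes, etc. from the Scanner inside the loops ...

for (Student s : students) {
    double quizTotal = 0;
    for (String mark : s.quizzes) {
        quizTotal += Double.parseDouble(mark);   // convert the stored text to a number
    }
    System.out.println(s.name + " quiz average: " + (quizTotal / s.quizzes.length));
    System.out.println("  midterm: " + s.midterms[0] + ", final: " + s.finals[0]);
}
Declaring the mark fields as double[] instead of String[] would avoid the parsing step entirely.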
| How to use data collection in nested loop | I can collect all of the input data, but I just can't seem to do anything with it. I would like to print all of data or add or subtract the numbers, perform calculations. I am not sure how to work with nested data.
import java.util.Scanner;
public class Names {
public static void main(String[] args) {
Scanner input = new Scanner(System.in);
System.out.println("How many students do you want to enter?");
String[] names = new String[2];
for (int stnumber = 0; stnumber < 2; stnumber++) {
System.out.println("Enter the first student " + (stnumber + 1));
names[stnumber] = input.next();
String[] quiz = new String[2];
for (int qznumber = 0; qznumber < 2; qznumber++) {
System.out.println("Enter quiz mark " + (qznumber + 1));
quiz[qznumber] = input.next();
}
String[] midterm = new String[1];
for (int mtnumber = 0; mtnumber < 1; mtnumber++) {
System.out.println("Enter midterm mark " + (mtnumber + 1));
midterm[mtnumber] = input.next();
}
String[] myfinal = new String[1];
for (int fnnumber = 0; fnnumber < 1; fnnumber++) {
System.out.println("Enter final mark " + (fnnumber + 1));
myfinal[fnnumber] = input.next();
}
}
input.close();
System.out.println("The students marks are");
for (int stnumber = 0; stnumber < 2; stnumber++) {
System.out.println(names[stnumber]);
}
}
}
| [
"You should define a class to represent a Student:\npublic class Student {\n String name;\n String[] quizzes;\n String[] midterms;\n String[] finals;\n}\n\nThen, you will need a constructor to declare a new instance of a Student, here is an exmaple:\npublic Student(String name) {\n this.name = name;\n this.quizzes = new String[2];\n this.midterms = new String[2];\n this.finals = new String[1];\n} \n\nNow you can create a new student in your main method like this:\nStudent newStudent = new Student(\"put the name here\");\n\nAnd store the quizzes, midterms, etc in that instance of the Student:\nnewStudent.midterms[0] = \"midterm 1 grade\";\n\n"
] | [
0
] | [
"public class Student {\nString name;\nString[] quizzes;\nString[] midterms;\nString[] finals;\n}\n\n\npublic Student(String name) {\nthis.name = name;\nthis.quizzes = new String[2];\nthis.midterms = new String[2];\nthis.finals = new String[1];\n\n"
] | [
-2
] | [
"java",
"loops",
"nested"
] | stackoverflow_0074663469_java_loops_nested.txt |