| Column | Type | Min | Max |
|---|---|---|---|
| Id | int64 | 1.68k | 75.6M |
| PostTypeId | int64 | 1 | 2 |
| AcceptedAnswerId | int64 | 1.7k | 75.6M |
| ParentId | int64 | 1.68k | 75.6M |
| Score | int64 | -60 | 3.16k |
| ViewCount | int64 | 8 | 2.68M |
| Body | stringlengths | 1 | 41.1k |
| Title | stringlengths | 14 | 150 |
| ContentLicense | stringclasses | 3 values | |
| FavoriteCount | int64 | 0 | 1 |
| CreationDate | stringlengths | 23 | 23 |
| LastActivityDate | stringlengths | 23 | 23 |
| LastEditDate | stringlengths | 23 | 23 |
| LastEditorUserId | int64 | -1 | 21.3M |
| OwnerUserId | int64 | 1 | 21.3M |
| Tags | sequence | | |
75,361,927
2
null
75,360,894
0
null
You'd need to manage the user data in state and then modify the state each time a message is updated. Splitting your code into components can also help you more easily reason about the JSX. Here's a code snippet demo: ``` <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" /> <div id="root"></div><script src="https://cdn.jsdelivr.net/npm/[email protected]/umd/react.development.js"></script><script src="https://cdn.jsdelivr.net/npm/[email protected]/umd/react-dom.development.js"></script><script src="https://cdn.jsdelivr.net/npm/@babel/[email protected]/babel.min.js"></script> <script type="text/babel" data-type="module" data-presets="env,react"> // This Stack Overflow snippet demo uses UMD modules const {StrictMode, useCallback, useState} = React; const getInitialUserData = () => [{"id":1,"name":"Gowtham","followers":[{"id":11,"name":"Anna"},{"id":12,"name":"Theo"}]},{"id":2,"name":"Billy","followers":[{"id":11,"name":"Oliver"},{"id":12,"name":"Emma"}]}]; function Follower ({follower, updateFollowerMessage}) { return ( <div className="d-flex"> <label className="col-form-label">{follower.name}</label> <input type="text" name={follower.name} className="form-control-sm" onChange={ev => updateFollowerMessage(follower, ev.currentTarget.value)} value={follower.message ?? ""} /> </div> ); } function App () { const [users, setUsers] = useState(getInitialUserData); const [activeUserIndex, setActiveUserIndex] = useState(-1); const updateFollowerMessage = useCallback(( follower, message, ) => setUsers( users => users.map((u, i) => { if (i !== activeUserIndex) return u; const followers = u.followers.map( f => f === follower ? 
{...f, message} : f ); return {...u, followers}; }) ), [activeUserIndex, setUsers]); const user = users[activeUserIndex]; return ( <div className="App mt-5"> <h1>Message users followers</h1> <select className="form-select" onChange={ev => setActiveUserIndex(Number(ev.currentTarget.value))} > <option value={-1}>Select</option> { users.map((user, index) => ( <option key={user.id} value={index}>{user.name}</option> )) } </select> { user ? ( <div className=""> { user.followers.map(follower => ( <Follower key={follower.id} follower={follower} updateFollowerMessage={updateFollowerMessage} /> )) } </div> ) : null } </div> ); } const reactRoot = ReactDOM.createRoot(document.getElementById("root")); reactRoot.render( <StrictMode> <App /> </StrictMode> ); </script> ```
null
CC BY-SA 4.0
null
2023-02-06T13:29:09.953
2023-02-06T13:29:09.953
null
null
438,273
null
75,362,033
2
null
73,659,818
0
null
Based on the format used by sites like UberEats and Facebook, you should use the format defined in Apple's Universal Links documentation. It should look like this:

```
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "your.bundle.id",
        "paths": [ "your_desired_path/*" ]
      }
    ]
  }
}
```

The `appID` key should contain your bundle ID, and the `paths` key should specify the desired path of your Universal Link. The `apps` key should be empty. If it's not working on a real device, check that you have enabled Associated Domains in Xcode, and that you have the correct Bundle ID set in your Apple Developer account. It's also important to check that the `apple-app-site-association` file is hosted on a secure server (HTTPS) and is accessible to your device.
null
CC BY-SA 4.0
null
2023-02-06T13:38:56.493
2023-02-06T13:38:56.493
null
null
3,727,909
null
75,362,084
2
null
75,350,536
0
null
This is new in Android Studio: plugin declarations are no longer in the project-level `build.gradle` file. In the Gradle project view, open `settings.gradle`; all plugins are defined there.
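For reference, a minimal sketch of what the newer-style `settings.gradle` typically looks like (the project and module names here are placeholders, not taken from the question):

```groovy
// settings.gradle -- plugin repositories (and, optionally, plugin versions)
// are now resolved here instead of in the project-level build.gradle.
pluginManagement {
    repositories {
        google()
        mavenCentral()
        gradlePluginPortal()
    }
}
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
    }
}
rootProject.name = "MyApplication"  // placeholder
include ':app'                      // placeholder module
```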
null
CC BY-SA 4.0
null
2023-02-06T13:42:40.050
2023-02-06T13:42:40.050
null
null
19,493,508
null
75,362,090
2
null
72,030,997
1
null
You can achieve the desired result if the date is a column of the original frame; it is sufficient to put it in the index when creating the pivot table:

```
dp = df.pivot_table(index=['Date', 'client'...etc], ...)
```
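As a minimal runnable sketch of this idea (the frame and column names below are invented for illustration, not taken from the question):

```python
import pandas as pd

# Toy frame where the date is an ordinary column.
df = pd.DataFrame({
    "Date": ["2023-01-01", "2023-01-01", "2023-01-02"],
    "client": ["A", "B", "A"],
    "amount": [10, 20, 30],
})

# Putting "Date" first in `index` keeps it available in the pivoted result.
dp = df.pivot_table(index=["Date", "client"], values="amount", aggfunc="sum")
print(dp)
```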
null
CC BY-SA 4.0
null
2023-02-06T13:43:07.360
2023-02-08T21:46:06.243
2023-02-08T21:46:06.243
12,439,683
20,557,177
null
75,362,186
2
null
75,360,978
0
null
> As far as I understood the concept of NDF and roughness, a high alpha value would mean that the (micro)surface is very rough, and a low value smooth. So, if I want to render a smooth metallic surface such as the body of a car, I would set my alpha to a low value such as 0.1. By doing so, the result of my D(h) is so low that the object can't even be seen. Am I missing something or did I not fully understand the value of alpha?

It's true that the numerator of the equation goes to zero, but the denominator does too, and it does so more rapidly. Taking `n = h` as an example, `dot(n, h)` will be one, and if alpha is 0.1:

```
0.1^2 / (3.141593 * (1 + (0.1^2 - 1))^2)
```

If you plug that into your calculator you will get ~31.83. So, as you can see, the whole equation doesn't go to zero. Actually, if you calculate the limit of the equation as alpha goes to zero, the equation goes to infinity. Which makes sense, because when roughness is zero, all the normals are concentrated in a single direction.
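The arithmetic above can be checked numerically with a small sketch of the GGX/Trowbridge-Reitz distribution (the function name `ggx_d` is mine, not from the question):

```python
import math

def ggx_d(n_dot_h, alpha):
    """Trowbridge-Reitz (GGX) normal distribution function D(h)."""
    a2 = alpha * alpha
    denom = math.pi * (n_dot_h * n_dot_h * (a2 - 1.0) + 1.0) ** 2
    return a2 / denom

# At n = h (dot(n, h) == 1) with alpha = 0.1, D(h) is large, not near zero:
print(ggx_d(1.0, 0.1))  # ~31.83
```

Decreasing alpha further only makes the peak taller and narrower, which matches the limit-to-infinity argument.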
null
CC BY-SA 4.0
null
2023-02-06T13:52:29.433
2023-02-06T13:52:29.433
null
null
1,754,322
null
75,362,270
2
null
75,355,886
0
null
You shouldn't expect to be able to run a real hardware BIOS image on QEMU. Generally speaking, BIOS binaries are tightly tied to the specific hardware that they are running on (eg which specific motherboard chipset is used on the motherboards they were built for). Even if the BIOS is intended to work with the i440fx or q35 chipsets that QEMU emulates, it may also try to exercise hardware features which QEMU doesn't emulate (because no higher-than-BIOS-level code needs to touch them), or touch motherboard-specific hardware. It's likely that the BIOS has crashed before it was able to get to the point of enabling the display. The intended BIOS for QEMU is one which is aware of QEMU, such as SeaBIOS (which is the default).
null
CC BY-SA 4.0
null
2023-02-06T13:58:28.353
2023-02-06T13:58:28.353
null
null
4,499,941
null
75,362,322
2
null
75,332,537
0
null
By default, `MappingName` is a plain string property, not a bindable property (dependency property). In your scenario you are trying to use it like a dependency property (bindable property) with the converter; that's why the column does not populate properly. Instead, set the underlying property name in `MappingName`, and if you want to change the header text, define the binding with the converter on the `HeaderText` property of the `GridColumn`, as in the snippet below:

```
<syncfusion:GridTextColumn MappingName="Country"
                           HeaderText="{Binding Path=ColumnsText, Converter={StaticResource translationConverter}}"
                           IsHidden="{Binding UserExpertValues, Mode=TwoWay, Source={StaticResource viewModel}}" />
```

The column definitions of the DataGrid are not part of the SfDataGrid visual tree, so you cannot bind the SfDataGrid `DataContext` to a `GridColumn`. However, you can overcome this problem by defining the ViewModel (DataContext) inside Resources and binding it to the GridColumns using a StaticResource binding, as in the following code example.
```
<Window.Resources>
    <local:TranslationConverter x:Key="translationConverter"/>
    <local:ViewModel x:Key="viewModel" />
</Window.Resources>

<Grid DataContext="{StaticResource viewModel}">
    <Grid.ColumnDefinitions>
        <ColumnDefinition/>
        <ColumnDefinition Width="200"/>
    </Grid.ColumnDefinitions>
    <syncfusion:SfDataGrid x:Name="sfDataGrid"
                           AllowEditing="True"
                           ItemsSource="{Binding Orders}"
                           AutoGenerateColumns="False">
        <syncfusion:SfDataGrid.Columns>
            <syncfusion:GridTextColumn MappingName="OrderID" HeaderText="Order ID" />
            <syncfusion:GridTextColumn MappingName="CustomerID" HeaderText="Customer ID" />
            <syncfusion:GridTextColumn MappingName="CustomerName" HeaderText="Customer Name" />
            <syncfusion:GridTextColumn MappingName="Country"
                                       HeaderText="{Binding Path=ColumnsText, Converter={StaticResource translationConverter}}"
                                       IsHidden="{Binding UserExpertValues, Mode=TwoWay, Source={StaticResource viewModel}}" />
            <syncfusion:GridTextColumn MappingName="UnitPrice" HeaderText="Unit Price" />
        </syncfusion:SfDataGrid.Columns>
    </syncfusion:SfDataGrid>
</Grid>
```

[Sample demo Link](https://www.syncfusion.com/downloads/support/directtrac/general/ze/Sample188170293)
null
CC BY-SA 4.0
null
2023-02-06T14:02:14.520
2023-02-06T14:02:14.520
null
null
13,338,179
null
75,362,508
2
null
75,362,235
0
null
I think the JSON body isn't being serialized correctly. Try

```
body: JSON.stringify({ 'username': username, 'password': password })
```

Also, `mode: 'no-cors'` will force the content type to text. Either take it out or set `mode: 'cors'`; see [no-cors breaks content-type header](https://stackoverflow.com/questions/42372834/request-header-not-set-as-expected-when-using-no-cors-mode-with-fetch-api)
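Putting both fixes together, the request options might look like this (a sketch; the credentials and the `/login` URL are assumed for illustration, not taken from the question):

```javascript
// Hypothetical credentials for illustration.
const username = "alice";
const password = "s3cret";

// CORS mode plus a JSON-serialized body and an explicit JSON content type.
const options = {
  method: "POST",
  mode: "cors",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ username, password }),
};

// fetch("/login", options).then(res => res.json()).then(console.log);
console.log(options.body);
```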
null
CC BY-SA 4.0
null
2023-02-06T14:17:30.707
2023-02-06T16:42:55.367
2023-02-06T16:42:55.367
8,041,003
8,041,003
null
75,362,589
2
null
74,432,453
0
null
Use this to install Odoo's Python dependencies:

```
pip install -r odoo/requirements.txt
```
null
CC BY-SA 4.0
null
2023-02-06T14:24:51.083
2023-02-06T14:24:51.083
null
null
10,155,163
null
75,362,584
2
null
75,362,235
0
null
`@PostMapping` (and the other mapping annotations) can be restricted to specific input/output content types:

```
@PostMapping(path = "/login", consumes = "application/json", produces = "application/json")
public ResponseEntity<String> login(@RequestBody LoginRequest request) {
    // ...
    return ResponseEntity.ok("logged in");
}
```

Other references to understand Spring's `@PostMapping`: [(Baeldung) @RequestMapping Consumes and Produces](https://www.baeldung.com/spring-requestmapping) and [(Stack Overflow) What is produce and consume in @RequestMapping](https://stackoverflow.com/questions/33591574/what-is-produce-and-consume-in-request-mapping).

For the JavaScript part, the problem is with `mode: 'no-cors'`; see [this JavaScript no-cors answer](https://stackoverflow.com/a/55837819/9085392). A way to work around it is changing the Java side to accept plain text and deserialize it yourself: [JSON to Java Object](https://www.baeldung.com/jackson-object-mapper-tutorial)

```
@PostMapping(path = "/login", consumes = "text/plain", produces = "application/json")
public ResponseEntity<String> login(@RequestBody String request) throws JsonProcessingException {
    ObjectMapper objectMapper = new ObjectMapper();
    LoginRequest loginRequest = objectMapper.readValue(request, LoginRequest.class);
    // ...
    return ResponseEntity.ok("logged in");
}
```
null
CC BY-SA 4.0
null
2023-02-06T14:24:20.620
2023-02-06T18:31:17.250
2023-02-06T18:31:17.250
9,085,392
9,085,392
null
75,362,675
2
null
71,575,037
1
null
In your Next.js application, check your `pages/api/preview.js` file. The handler there needs to look like this before the redirect will work:

```
export default function handler(req, res) {
  res.setPreviewData({})
  res.redirect(req.query.redirect)
}
```

If you instead have something like the following, your code won't work:

```
res.end('Preview mode enabled')
```
null
CC BY-SA 4.0
null
2023-02-06T14:33:16.897
2023-02-06T14:33:16.897
null
null
2,757,347
null
75,362,737
2
null
75,249,977
0
null
It might be a temporary server issue with Microsoft OAuth that has been ongoing for several days now. There are many reports from people with different email clients describing the same issue. For example: [https://support.emclient.com/index.php?/Knowledgebase/Article/View/256/7/cannot-send-emails-for-outlookcom-accounts---authentication-aborted](https://support.emclient.com/index.php?/Knowledgebase/Article/View/256/7/cannot-send-emails-for-outlookcom-accounts---authentication-aborted)
null
CC BY-SA 4.0
null
2023-02-06T14:38:04.080
2023-02-06T14:38:04.080
null
null
3,114,498
null
75,362,850
2
null
75,362,289
-1
null
Have you checked whether you have an internet connection in your VM? How do you connect to the VM: via SSH, public IP, or the UI? I would first try to `curl` GitHub and see if that goes through. It seems like your VM does not have internet access, and you need to configure a cloud router to fix it.

/Ehsan
null
CC BY-SA 4.0
null
2023-02-06T14:48:21.090
2023-02-06T14:48:21.090
null
null
3,188,424
null
75,362,848
2
null
29,907,536
0
null
I use

```
function rotate( object, deg, axis ) { // axis is a THREE.Vector3
  var q = new THREE.Quaternion();
  q.setFromAxisAngle( axis, THREE.MathUtils.degToRad( deg ) ); // we need to use radians
  q.normalize();
  object.quaternion.multiply( q );
}
```

So to rotate on the Z axis we would call it like

```
rotate( myMesh, 90, new THREE.Vector3( 0, 0, 1 ) );
```

Or, if you want, you can use slerp and increase a progress value that goes from 0 to 1.

```
function rotateSlerp( object, deg, axis, progress ) {
  var q = new THREE.Quaternion();
  q.setFromAxisAngle( axis, THREE.MathUtils.degToRad( deg ) );
  q.normalize();
  object.quaternion.slerp( q, progress );
}
```

To use it, you would call

```
let progress = 0;
function loop() {
  progress += 0.05;
  rotateSlerp( myMesh, 90, new THREE.Vector3( 0, 0, 1 ), progress );
  requestAnimationFrame( loop );
}
```
null
CC BY-SA 4.0
null
2023-02-06T14:48:15.243
2023-02-06T14:48:15.243
null
null
14,528,531
null
75,362,857
2
null
73,501,020
1
null
I had the same issue after loading an existing project. I was able to resolve it by installing the .NET version the project was targeting (in my case, .NET 5).

Reason for the error: since previous .NET versions are out of support, Visual Studio won't install them during installation.

Remedy: install the SDK yourself after installing Visual Studio.
null
CC BY-SA 4.0
null
2023-02-06T14:48:59.890
2023-02-06T14:48:59.890
null
null
7,311,043
null
75,363,159
2
null
75,362,788
0
null
To use your own functions within sparkSQL, you need to wrap them inside of a UDF (user defined function). ``` val df = spark.range(1) .withColumn("x", array(lit(1), lit(2), lit(3))) // defining the user defined functions from the scala functions. val magnitude_udf = udf(magnitude _) val dot_product_udf = udf(dotProduct(_,_)) df .withColumn("magnitude", magnitude_udf('x)) .withColumn("dot_product", dot_product_udf('x, 'x)) .show ``` ``` +---+---------+------------------+-----------+ | id| x| magnitude|dot_product| +---+---------+------------------+-----------+ | 0|[1, 2, 3]|3.7416573867739413| 14| +---+---------+------------------+-----------+ ```
null
CC BY-SA 4.0
null
2023-02-06T15:15:02.100
2023-02-06T15:15:02.100
null
null
8,893,686
null
75,363,279
2
null
75,362,852
0
null
That function is from an old version of scikit-learn. You can try `pip install scikit-plot` and then:

```
# Import scikit-plot
import scikitplot as skplt
import matplotlib.pyplot as plt

skplt.metrics.plot_precision_recall(y, y_pred)
plt.show()
```

Documentation: [https://scikit-plot.readthedocs.io/en/stable/metrics.html#scikitplot.metrics.plot_precision_recall](https://scikit-plot.readthedocs.io/en/stable/metrics.html#scikitplot.metrics.plot_precision_recall)

Or you can use `precision_recall_curve` in the current version of sklearn as mentioned by [Dr. Snoopy](https://stackoverflow.com/users/349130/dr-snoopy):

```
from sklearn.metrics import precision_recall_curve
```
null
CC BY-SA 4.0
null
2023-02-06T15:25:25.010
2023-02-06T15:37:58.443
2023-02-06T15:37:58.443
17,749,677
17,749,677
null
75,363,288
2
null
23,044,218
0
null
I've used a solution with `CustomTabBar`, but I ended up getting a lot of exceptions which didn't tell me anything. Every time I was getting a different error, or sometimes no error at all. For example one of the errors in a random place: ``` Thread 1: EXC_BAD_ACCESS (code=1, address=0x18) ``` Or: ``` malloc: Incorrect checksum for freed object 0x14ce4c790: probably modified after being freed. Corrupt value: 0xb000000000000001 malloc: *** set a breakpoint in malloc_error_break to debug ``` I searched for a solution and on Apple's dev-forum user advised to use [Standard Memory Debugging Tools](https://developer.apple.com/forums/thread/92102). instrument didn't help, but [Address Sanitizer](https://developer.apple.com/documentation/xcode/diagnosing-memory-thread-and-crash-issues-early) helped to identify the problem at once! ``` SUMMARY: AddressSanitizer: heap-buffer-overflow MyTabBarController.swift in MyTabBarController.CustomTabBar.hasBanner.setter thread #1: tid = 0x6801d, 0x00000001089bb250 libclang_rt.asan_iossim_dynamic.dylib`__asan::AsanDie(), queue = 'com.apple.main-thread', stop reason = Heap buffer overflow { "access_size": 1, "access_type": 1, "address": 4918104816, "description": "heap-buffer-overflow", "instrumentation_class": "AddressSanitizer", "pc": 4382625148, "stop_type": ``` The problem was that I used an instance variable in CustomTabBar. For some reason it was causing crashes. I've switched the var to a static and it solved the problem! ``` class MyTabBarController: UITabBarController { override func viewDidLoad() { // We have to put all init logic here because: // `UITabBarController` calls `loadView()` inside `super.init()` method, // which causes the call to `viewDidLoad()`. // So the `viewDidLoad()` method will be called before `init()` has finished its job. object_setClass(tabBar, CustomTabBar.self) CustomTabBar.hasBanner = InAppPurchaseManager.shared.activeSubscription == nil super.viewDidLoad() // ... } // ... 
} extension MyTabBarController { class CustomTabBar: UITabBar { // We have to use a static var `hasBanner` // because an instance var causes a Heap Buffer Overflow. static var hasBanner: Bool = true // <------------------- THE SOLUTION override func sizeThatFits(_ size: CGSize) -> CGSize { var sizeThatFits = super.sizeThatFits(size) sizeThatFits.height = Constants.Layout.tabBarHeight + (Self.hasBanner ? 44 : 0) return sizeThatFits } } } ```
null
CC BY-SA 4.0
null
2023-02-06T15:26:21.020
2023-02-06T15:32:19.070
2023-02-06T15:32:19.070
1,967,771
1,967,771
null
75,363,634
2
null
22,373,546
0
null
This is a really old question and I hope you found a solution. I stumbled upon it while searching for help on a similar (but not exactly the same) error. I found this note which doesn't apply to my system/error, but might apply to yours, so I'll leave it here for reference: [2318244](https://launchpad.support.sap.com/#/notes/2318244) - Shortdump occurs in /IDXGC/PDOCMON01 when click some Process Step No. to display step additional data Regards, tao
null
CC BY-SA 4.0
null
2023-02-06T15:54:58.323
2023-02-06T15:54:58.323
null
null
21,159,134
null
75,363,843
2
null
75,363,700
0
null
I figured it out. Posting for others for future reference. ``` crewSizeWAvg = CALCULATE ( AVERAGEX ( VALUES ( 'Workmax Time - Open and Batched'[inTimeDateRawDateOnly] ), CALCULATE ( DISTINCTCOUNT ( 'Workmax Time - Open and Batched'[employee_id] ) ) )) ```
null
CC BY-SA 4.0
null
2023-02-06T16:12:42.740
2023-02-06T16:12:42.740
null
null
17,262,054
null
75,363,917
2
null
67,332,490
0
null
Kill all Java services; on Linux run `killall -9 java`. Then re-run ZooKeeper and the Kafka server. It works.
null
CC BY-SA 4.0
null
2023-02-06T16:18:47.227
2023-02-06T16:18:47.227
null
null
7,098,524
null
75,363,931
2
null
75,361,761
1
null
Pyodide is not meant to be installed with Pip - you should be including it via a script tag to the appropriate CDN link, as [indicated in the Pyodide docs](https://pyodide.org/en/stable/usage/quickstart.html).
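For illustration, loading Pyodide from the CDN looks roughly like this (the version number in the URL is a placeholder; check the Pyodide docs for the current release):

```html
<!-- Load Pyodide from the jsDelivr CDN; replace v0.XX.X with a current release. -->
<script src="https://cdn.jsdelivr.net/pyodide/v0.XX.X/full/pyodide.js"></script>
<script>
  async function main() {
    // loadPyodide() is provided by the script above.
    const pyodide = await loadPyodide();
    console.log(pyodide.runPython("1 + 2"));
  }
  main();
</script>
```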
null
CC BY-SA 4.0
null
2023-02-06T16:20:17.387
2023-02-06T16:20:17.387
null
null
19,718,391
null
75,364,162
2
null
75,362,996
1
null
What causes the legend box (which is too big for the plot dimension) to be positioned there, is probably some quite clever patchwork code, and is related to `guide_area` (therefore my question title edit). The below is a slightly unsatisfactory, but effective hack to modify the position. It's a bit of a trial and error. Simply give a negative margin to the legend box to the right and it will "drag" the box accordingly. I've removed all the `legend.position = "none"` from your plots as this is not necessary with `guides = "collect"` ``` library(ggplot2) library(patchwork) p1 <- ggplot(iris) + geom_point(aes(Sepal.Length, Sepal.Width, color = Species, size = Petal.Length)) p2 <- ggplot(iris) + geom_point(aes(Sepal.Length, Sepal.Width, color = Species, size = Petal.Length)) p3 <- ggplot(iris) + geom_point(aes(Sepal.Length, Sepal.Width, color = Species, size = Petal.Length)) p4 <- ggplot(iris) + geom_point(aes(Sepal.Length, Sepal.Width, color = Species, size = Petal.Length)) p1 + p2 + p3 + p4 + guide_area()+ plot_layout(ncol=3, guides = "collect", widths=c(6,1,1), heights=c(6,1)) & theme(legend.direction = "vertical", legend.box = "horizontal", legend.box.margin = margin(r = -1, unit = "in")) ``` [](https://i.stack.imgur.com/SVaDx.png)
null
CC BY-SA 4.0
null
2023-02-06T16:41:25.407
2023-02-06T16:49:20.090
2023-02-06T16:49:20.090
7,941,188
7,941,188
null
75,364,231
2
null
75,362,996
2
null
There are two issues with your code. First using `+` to glue your plots together and setting `ncol=3` will place the `guide_area` in the second column of the second row. To center the legend I would suggest to use the `design` argument to specify the layout of the plot. Second, while the plot panels will adjust to the space set via the `height` and width arguments and the size of your plotting device, the legend will not, i.e. if the legend will not fit into the space given it will overlap with the surrounding panels. To fix that I would suggest to increase the widths of the second and third columns and the height of the second row. But as I said this also depends on the size of the plotting device. Using some fake example plot based on `mtcars`(see below) let's first reproduce your issue: ``` library(ggplot2) library(patchwork) list( dots, g_box_tmax, g_box_t0, tmax_box, guide_area() ) |> wrap_plots() + plot_layout(guides = "collect", widths = c(6, 1, 1), heights = c(6, 1), ncol = 3) & theme(legend.direction = "vertical", legend.box = "horizontal") ``` [](https://i.stack.imgur.com/Ty5On.png) However, specifying the layout via the `design` argument and increasing the height of the second row as well as the widths of the second and third columns works fine and centers the legend in the guide area: ``` design <- " ABC DEE " list( dots, g_box_tmax, g_box_t0, tmax_box, guide_area() ) |> wrap_plots() + plot_layout(guides = "collect", widths = c(6, 1.5, 1.5), heights = c(6, 1.5), design = design) & theme(legend.direction = "vertical", legend.box = "horizontal") ``` [](https://i.stack.imgur.com/CYgnJ.png) ``` dots <- ggplot(mtcars, aes(mpg, hp, color = factor(cyl), size = qsec)) + geom_point() + theme_bw() + theme( axis.title.x = element_blank(), panel.grid.minor.y = element_blank() ) g_box_tmax <- g_box_t0 <- ggplot(mtcars, aes(factor(cyl), hp, fill = factor(cyl))) + geom_boxplot() + theme_bw() + theme( axis.text.x = element_blank(), axis.ticks.x = element_blank(), 
panel.grid.minor.y = element_blank(), axis.text.y = element_blank(), axis.ticks.y = element_blank(), axis.title.x = element_blank(), legend.position = "none" ) tmax_box <- ggplot(mtcars, aes(mpg, factor(cyl), fill = factor(cyl))) + geom_boxplot() + theme_bw() + theme( axis.text.x = element_blank(), axis.ticks.x = element_blank(), axis.text.y = element_blank(), axis.ticks.y = element_blank(), axis.title.y = element_blank(), legend.position = "none" ) ```
null
CC BY-SA 4.0
null
2023-02-06T16:47:59.210
2023-02-06T16:47:59.210
null
null
12,993,861
null
75,364,473
2
null
19,548,384
0
null
If you do the derivation yourself according to the figure, where the surface normal points in the opposite direction (the dot product of the incident ray and the normal is negative), I think it is safe to say the former equations are correct. For the latter ones, it seems that the normal is flipped to the opposite side of the surface, yet we calculate all the cosine terms with respect to the new normal vector. In this situation (notice that cos theta_i is now negative with respect to the downward-pointing normal, so we can substitute it by -cos(pi - theta_i)), we actually get an equivalent formula in which the only difference is one more negative sign for the normal vector. So I think the contradiction is caused by the direction of the normal vector and the definition of the incident angle.
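Spelled out (a small aside in LaTeX; here theta_i is the incident angle measured from the original, upward normal n), the substitution is just the supplementary-angle identity applied to the flipped normal:

```latex
% Flipping the normal negates the cosine of the incident angle:
\omega_i \cdot (-\mathbf{n}) \;=\; -(\omega_i \cdot \mathbf{n}) \;=\; -\cos\theta_i \;=\; \cos(\pi - \theta_i)
```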
null
CC BY-SA 4.0
null
2023-02-06T17:11:42.547
2023-02-06T17:11:42.547
null
null
16,957,060
null
75,364,828
2
null
75,338,981
1
null
Let's start by addressing your database issues. Storing dates and times as strings is a bad idea as they use more space and cannot be handled as efficiently as native `DATE` / `TIME` types. A date as string '01-12-2022' stored in a `VARCHAR` uses 11 Bytes, whereas if you convert it to `DATE` it is only 3 Bytes. Similarly for your time data - 8 Bytes as `VARCHAR` or 3 Bytes as `TIME`. Even better would be to combine the two together as `DATETIME` requiring only 5 Bytes, but I shall leave that for you to ponder. ``` -- Update dates from dd-mm-yyyy (note 4 digit year) to yyyy-mm-dd UPDATE `voltage` SET `date` = STR_TO_DATE(`date`, '%d-%m-%Y'); -- If your existing dates have 2 digit year then use UPDATE `voltage` SET `date` = STR_TO_DATE(`date`, '%d-%m-%y'); -- update the column types ALTER TABLE `voltage` MODIFY COLUMN `date` DATE NOT NULL, MODIFY COLUMN `time` TIME NOT NULL; ``` You should also make sure you have a composite index on (`date`, `time`). To avoid this answer getting too long, I am not going to include the full content of the `index.html` template file but I have made the following changes - 1. <div id="purchase_order"> to <div id="voltages"> 2. Added <div id="chart"></div> before <div id="voltages"> 3. Added <thead> around the header row and tbody around the rest of the table rows 4. Added <script src="https://cdn.plot.ly/plotly-latest.min.js"></script> after the 2 jQuery scripts 5. 
Renamed From as from in various places Then the inline script - ``` <script> const chartLayout = { hovermode: 'closest', xaxis: { type: 'date', dtick: 10800000, hoverformat: '%H:%M:%S', tickformat: '%H:00\n%d %b', rangebreaks: [{ pattern: 'hour' }] } }; Plotly.react('chart', [{ x: [/* leaving these for you to figure out */], y: [], line: { simplify: false } }], chartLayout); $(document).ready(function(){ $.datepicker.setDefaults({ dateFormat: 'yy-mm-dd' }); $(function(){ $("#from").datepicker(); $("#to").datepicker(); }); $('#range').click(function(){ var from = $('#from').val(); var to = $('#to').val(); if(from != '' && to != '') { $.ajax({ url:"/range", method:"GET", data:{from:from, to:to}, success:function(data) { let x = [], y = [], rows = ''; for (const row of data) { x[x.length] = `${row.date} ${row.time}`; y[y.length] = row.voltage; rows += `<tr><td>${row.date}</td><td>${row.time}</td><td>${row.voltage}</td><td>${row.ignition}</td></tr>`; } // update table content $('#voltages > table > tbody').html(rows); // update chart Plotly.react('chart', [{ x: x, y: y, line: { simplify: false } }], chartLayout); } }); } else { alert("Please Select the Date"); } }); }); </script> ``` And this is the modified `/range` route - ``` @app.route('/range') def range(): cur = mysql.connection.cursor(MySQLdb.cursors.DictCursor) fromDate = request.args.get('from') toDate = request.args.get('to') query = """ SELECT CAST(`date` AS CHAR) AS `date`, CAST(`time` AS CHAR) AS `time`, `voltage`, `ignition` FROM voltage WHERE date BETWEEN '{}' AND '{}' ORDER BY date, time """.format(fromDate, toDate) cur.execute(query) voltages = cur.fetchall() return jsonify(voltages) ``` The `date` and `time` have been cast to `CHAR`s in the `SELECT` as `json.dumps()` (used by jsonify) does not like handling them as their native types. You should switch to using parameterized prepared statements to mitigate the current [SQLi vulnerabilities](https://realpython.com/prevent-python-sql-injection/).
null
CC BY-SA 4.0
null
2023-02-06T17:48:37.537
2023-02-07T18:05:32.700
2023-02-07T18:05:32.700
1,191,247
1,191,247
null
75,364,870
2
null
75,301,271
0
null
Found the issue and resolution:

1. Raised a Microsoft case to see the AppGW logs at the platform level
2. Microsoft verified the logs and identified that the AppGW was not able to communicate with the Key Vault to read the SSL certificate (we use Key Vault to store the SSL cert for TLS encryption)
3. Found out that subnet-to-subnet communication was blocked, hence the AppGW was unable to communicate with the KV in the other subnet

Resolution: allowed subnet-to-subnet communication between the subnets where the AppGW and KV are present.

Conclusion: Microsoft should surface better logging information (error details) in the AppGW resource deployment and/or resource activity logs.
null
CC BY-SA 4.0
null
2023-02-06T17:52:57.000
2023-02-06T17:52:57.000
null
null
21,112,921
null
75,365,876
2
null
75,360,978
1
null
The root problem is that simple point lights often don't suffice for full PBR rendering. Consider the following two renderings of a smooth metallic sphere: [](https://i.stack.imgur.com/ipkKL.png) This is the top-left sphere from a [glTF sample model](https://github.com/KhronosGroup/glTF-Sample-Models/tree/master/2.0/MetalRoughSpheres) rendered in [Babylon Sandbox](https://sandbox.babylonjs.com/). On the left side, the sphere is placed in a dark environment against a gray background, and a single point light illuminates the scene. The light is quite bright, but because the sphere is so smooth, and because the "point" nature of the light gives it essentially no radius, the reflection of this light is barely a few pixels, regardless of how bright it may be. The remainder of the sphere has the low D(h) values you mentioned, and is almost black. On the right side, the same sphere again in the same rendering engine, but this time the engine is using its default environment, which comes from an HDR image. In the case of smooth metal, the resulting render is mostly a mirror reflection of the environment, but rougher and non-metallic surfaces can also have their appearance greatly influenced by colors and intensities in the surrounding environment. With a good quality environment, there's often no need to add point lights at all, and indeed there are no point lights in the right image. In general, PBR, and particularly metallic PBR, looks best with a full HDRI environment, not just point lights. For some sample code and shaders showing some of this math in action, the [Khronos glTF Sample Viewer](https://github.com/KhronosGroup/glTF-Sample-Viewer) might be a good place to start. [Disclaimer, I'm a contributor.]
null
CC BY-SA 4.0
null
2023-02-06T19:45:23.300
2023-02-06T19:45:23.300
null
null
836,708
null
75,366,155
2
null
14,266,333
0
null
Okay, so you have a series `X`, and you use the builtin [stats::acf](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/acf) function to compute the autocorrelation function values. To have a concrete example:

```
X <- c(seq(20,10,-1),seq(1,20))
X_ACF <- acf(X) # by default the same as `acf(X, ci.type="white")`
```

You'll get a plot with confidence intervals at a constant value, `acf(X, ci.type="white")` (for the default white-noise null hypothesis), or a nonconstant value, `acf(X, ci.type="ma")` (for a moving-average assumption); see [plot.acf](https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/plot.acf).

[](https://i.stack.imgur.com/bELN5.png)

However, counterintuitively, the data for the confidence intervals in those plots are not included in the object returned by `acf()`. But you can still get them yourself pretty easily. To answer your question directly (inspired by @csgillespie's suggestion):

```
get_clim <- function(x, ci=0.95, ci.type="white"){
  #' Gets confidence limit data from acf object `x`
  if (!ci.type %in% c("white", "ma")) stop('`ci.type` must be "white" or "ma"')
  if (class(x) != "acf") stop('pass in object of class "acf"')
  clim0 <- qnorm((1 + ci)/2) / sqrt(x$n.used)
  if (ci.type == "ma") {
    clim <- clim0 * sqrt(cumsum(c(1, 2 * x$acf[-1]^2)))
    return(clim[-length(clim)])
  } else {
    return(clim0)
  }
}
```

Use it like

```
get_clim(X_ACF, ci.type = "white") # returns a single ci limit value (ci is plus or minus this value)
```

> ```
> [1] 0.3520199
> ```

```
get_clim(X_ACF, ci.type = "ma") # returns a list of values, one per value of X_ACF$acf
```

> ```
> [1] 0.3520199 0.5589558 0.6672833 0.7277000 0.7583282 0.7702831 0.7724234 0.7726377 0.7778812 0.7935320
> [11] 0.8225467 0.8650100 0.9061862 0.9443976
> ```

---

Now, to show that this worked, and since it may be useful, here's a function which makes [ggplot2](https://ggplot2.tidyverse.org/index.html) plots corresponding to the default base R plots above.
``` library(ggplot2) theme_set(theme_minimal()) ggplot_acf <- function( x, ci=0.95, ci.type="white", ci.col = "blue"){ #' Replicates plot.acf() but using ggplot by default instead of base R plot #' `x` must be an object of class "acf" such as that outputted by `acf()` #' `ci.type` must be "white" or "ma" if (!ci.type %in% c("white", "ma")) stop('`ci.type` must be "white" or "ma"') if (class(x) != "acf") stop('pass in object of class "acf"') with.ci <- ci > 0 && x$type != "covariance" with.ci.ma <- with.ci && ci.type == "ma" && x$type == "correlation" if(with.ci.ma && x$lag[1L, 1L, 1L] != 0L) { warning("can use ci.type=\"ma\" only if first lag is 0") with.ci.ma <- FALSE } clim <- get_clim(x, ci=ci, ci.type=ci.type) df <- data.frame(lag = x$lag, acf=x$acf) p <- ggplot(df, aes(x=lag)) + geom_linerange(aes(ymax=acf, ymin=0)) + labs(y="ACF", x="Lag") if (with.ci) { if (ci.type == "white") { p <- p + geom_hline(yintercept = 0-clim, lty = 2, col = ci.col) + geom_hline(yintercept = 0+clim, lty = 2, col = ci.col) } else if (with.ci.ma && ci.type == "ma") { # ci.type="ma" not allowed for pacf dfclim <- df[-1,] dfclim$clim <- clim p <- p + geom_line(data = dfclim, aes(y = 0-clim), lty = 2, col = ci.col) + geom_line(data = dfclim, aes(y = 0+clim), lty = 2, col = ci.col) } } return(p) } ``` To check that this is working, let's plot the resulting ggplot objects next to their corresponding base R plots made by `plot.acf`. ``` library(patchwork) p11 <- ggplot_acf(X_ACF, ci.type="white") + labs(subtitle="ggplot version") p12 <- wrap_elements(panel=~plot(X_ACF, ci.type="white")) + labs(subtitle="base R version") old_par <- par(mar = c(0,0,0,0), bg = NA) (p11+p12) par(old_par) p21 <- ggplot_acf(X_ACF, ci.type="ma") + labs(subtitle="ggplot version") p22 <- wrap_elements(panel=~plot(X_ACF, ci.type="ma")) + labs(subtitle="base R version") old_par <- par(mar = c(0,0,0,0), bg = NA) (p21+p22) par(old_par) ``` [](https://i.stack.imgur.com/hlYME.png) [](https://i.stack.imgur.com/fIgsl.png)
null
CC BY-SA 4.0
null
2023-02-06T20:18:25.473
2023-02-07T04:36:48.213
2023-02-07T04:36:48.213
1,676,393
1,676,393
null
75,366,366
2
null
61,576,670
0
null
I followed the introductory advice from @MwamiTovi but I still did not have an option to create a server as he noted. However, I was able to get my databases to appear by clicking menu option Object -> Register -> Server and type in the information (hostname/address, port) from my associated psql setup. This was using PG Admin 4.19 on macOS Big Sur.
null
CC BY-SA 4.0
null
2023-02-06T20:41:43.487
2023-02-06T20:41:43.487
null
null
4,368,068
null
75,366,382
2
null
11,058,659
0
null
Try something like: ``` plot # ... \ keyentry w l lw 1 lc 2 t "Title" # ... ``` And remove the old keys.
null
CC BY-SA 4.0
null
2023-02-06T20:43:14.880
2023-02-06T20:43:14.880
null
null
17,949,232
null
75,366,388
2
null
62,834,872
0
null
I came across a similar issue using the elastic beanstalk python-3.8 platform and Django 3.2. It's worth noting that the 502 error from nginx will occur with a few different deployment errors (mine were occurring during the `manage.py migrate` command). Despite fixing the issues suggested in other comments, the 502 error persisted for me. Ultimately, it was because I was running into some version issues with SQLite and the Amazon Linux 2 platform. I switched to postgres and the issue was fixed.
null
CC BY-SA 4.0
null
2023-02-06T20:43:56.310
2023-02-06T20:43:56.310
null
null
5,058,266
null
75,366,542
2
null
75,364,860
0
null
As per the snapshot: ![comments](https://i.stack.imgur.com/KSarU.png) the ancestor `<div>` of the `<ul>` isn't a descendant of `h2[text()="Comments"]`. Hence you see the [exception of no element found](https://stackoverflow.com/a/47995294/7429447) --- ## Solution To print the texts you have to induce [WebDriverWait](https://stackoverflow.com/a/59130336/7429447) for [visibility_of_all_elements_located()](https://stackoverflow.com/a/64770041/7429447) and you can use the following [locator strategy](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver): ``` print([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//div/h2[contains(., 'Comments')]//following-sibling::div[2]/ul//li")))]) ``` Note: You have to add the following imports: ``` from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC ```
null
CC BY-SA 4.0
null
2023-02-06T21:01:40.703
2023-02-06T21:07:35.440
2023-02-06T21:07:35.440
7,429,447
7,429,447
null
75,366,655
2
null
75,366,469
0
null
Did you follow the installation instructions? [https://docs.expo.dev/build/setup/](https://docs.expo.dev/build/setup/) Especially `eas build:configure` If you installed it with yarn, you should run it like `yarn eas update`, because it's not global.
null
CC BY-SA 4.0
null
2023-02-06T21:15:43.340
2023-02-06T21:15:43.340
null
null
15,627,885
null
75,366,740
2
null
75,364,742
0
null
Attribute values like `popover_otrppv916b` are dynamically generated and are bound to change sooner or later. They may change the next time you access the application afresh, or even at the next application startup, so they can't be used in locators. --- ## Solution To click on the element with the text Last month you need to induce [WebDriverWait](https://stackoverflow.com/a/59130336/7429447) for the [element_to_be_clickable()](https://stackoverflow.com/a/54194511/7429447) and you can use either of the following [locator strategies](https://stackoverflow.com/a/48056120/7429447): - Using XPATH and `normalize-space()`:``` WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//ul[@class='group-list']//li//span[normalize-space()='Last month']"))).click() ``` - Using XPATH and `contains()`:``` WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//ul[@class='group-list']//li//span[contains(., 'Last month')]"))).click() ``` - Note: You have to add the following imports:``` from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC ```
null
CC BY-SA 4.0
null
2023-02-06T21:25:03.447
2023-02-06T21:25:03.447
null
null
7,429,447
null
75,366,967
2
null
75,366,576
0
null
Two things are at play here. The first is the target framework setting for the SSIS Script Task/Component. You can change that up/down as the item requires. [](https://i.stack.imgur.com/gJZxW.png) That's a development setting, by the way. When you deploy to the server, you would need to ensure that the server itself has an equivalent runtime on it. [](https://i.stack.imgur.com/m1Dz0.png) Now, things get "weird" because within the 4.0 runtime framework, which we both have, there are different compiler versions. From a Command Prompt, if you run `C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe`, you'll see which version is there. For example, I see > Microsoft (R) Visual C# Compiler version 4.8.4084.0 for C# 5 [How do I find the .NET version?](https://stackoverflow.com/questions/1565434/how-do-i-find-the-net-version)
null
CC BY-SA 4.0
null
2023-02-06T21:54:30.447
2023-02-06T21:54:30.447
null
null
181,965
null
75,367,281
2
null
75,362,788
0
null
One way is to wrap your functions within UDFs. Yet UDFs are known to be suboptimal most of the time. You could therefore rewrite your functions with Spark primitives. To ease the reuse of the expressions you write, you can write functions that take `Column` objects as parameters. ``` import org.apache.spark.sql.Column import org.apache.spark.sql.functions._ def magnitude(x : Column) = { aggregate(transform(x, c => c * c), lit(0), _ + _) } def dotProduct(x : Column, y : Column) = { val products = transform(arrays_zip(x, y), s => s(x.toString) * s(y.toString)) aggregate(products, lit(0), _ + _) } def cosineSimilarity(x : Column, y : Column) = { dotProduct(x, y) / (magnitude(x) * magnitude(y)) } ``` Let's test this: ``` val df = spark.range(1).select( array(lit(1), lit(2), lit(3)) as "x", array(lit(1), lit(3), lit(5)) as "y" ) df.select( 'x, 'y, magnitude('x) as "magnitude_x", dotProduct('x, 'y) as "dot_prod_x_y", cosineSimilarity('x, 'y) as "cosine_x_y" ).show() ``` which yields: ``` +---------+---------+-----------+------------+--------------------+ | x| y|magnitude_x|dot_prod_x_y| cosine_x_y| +---------+---------+-----------+------------+--------------------+ |[1, 2, 3]|[1, 3, 5]| 14| 22|0.044897959183673466| +---------+---------+-----------+------------+--------------------+ ```
null
CC BY-SA 4.0
null
2023-02-06T22:37:32.827
2023-02-07T08:19:26.293
2023-02-07T08:19:26.293
8,893,686
8,893,686
null
75,367,437
2
null
75,329,190
0
null
Thank you cowplot! I have made the second ggplot into a grob using `as_grob`, specifying the dimensions and coordinates in `draw_grob`. ``` library(cowplot) ggdraw(p) + draw_grob(as_grob(p1), x = 0.55, y = 0.02, width = 0.288, height = 1) ```
null
CC BY-SA 4.0
null
2023-02-06T23:03:18.650
2023-02-06T23:03:18.650
null
null
21,135,828
null
75,367,658
2
null
75,366,383
1
null
Use [scipy.spatial.KDTree](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.html). Once you have built the KDTree on the points of the Baraffe track, you can use the different methods of the KDTree instance to compute all the quantities that interest you. Here, for simplicity, I have just shown how to use the `query` method to build a 1-to-1 correspondence between each sample point and its nearest neighbor on the track. [](https://i.stack.imgur.com/coB9D.png) ``` import numpy as np import matplotlib.pyplot as plt from scipy.spatial import KDTree np.random.seed(20230307) x = np.linspace(0, 10, 51) y = np.sin(x)*0.7 x, y = +x*0.6+y*0.8, -0.8*x+0.6*y xp = np.linspace(1, 9, 21) yp = -1+np.random.rand(21)*0.4 xp, yp = +xp*0.6+yp*0.8, -0.8*xp+0.6*yp kdt = KDTree(np.vstack((x, y)).T) # the array that is indexed must be N×2 distances, indices = kdt.query(np.vstack((xp, yp)).T, k=1) fig, ax = plt.subplots() ax.set_aspect(1) ax.plot(x, y, color='k', lw=0.8) ax.scatter(xp, yp, color='r') for x0, y0, i in zip(xp, yp, indices): plt.plot((x0, x[i]), (y0, y[i]), color='g', lw=0.5) plt.show() ```
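Independent of the plotting code above, here is a minimal sketch of just the `query` call, with made-up points standing in for the Baraffe track:

```python
import numpy as np
from scipy.spatial import KDTree

# Toy stand-in for the track: three points along the x-axis
track = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
kdt = KDTree(track)

# With k=1, query returns, for each query point, the distance to and
# the index of its nearest neighbor on the track
distances, indices = kdt.query([[0.9, 0.1], [2.2, 0.0]], k=1)
print(indices.tolist())  # → [1, 2]
```

The returned `indices` can then be used to pull the matching track rows, exactly as in the drawing loop above.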
null
CC BY-SA 4.0
null
2023-02-06T23:41:56.667
2023-02-07T00:11:03.243
2023-02-07T00:11:03.243
2,749,397
2,749,397
null
75,367,804
2
null
75,363,750
0
null
That should be the `RootScrollViewer`. You can see it using the [Live Visual Tree](https://learn.microsoft.com/en-us/visualstudio/xaml-tools/inspect-xaml-properties-while-debugging?view=vs-2022). There's not much info about it, generally speaking you don't need to know much about it, but at least now you know where that `ScrollViewer` comes from.
null
CC BY-SA 4.0
null
2023-02-07T00:14:52.620
2023-02-07T00:14:52.620
null
null
2,411,960
null
75,367,863
2
null
71,687,776
2
null
I made a cli tool that lets you read properties from the private `MediaRemote` framework. [https://github.com/kirtan-shah/nowplaying-cli](https://github.com/kirtan-shah/nowplaying-cli) Since it uses private APIs, it may break with future macOS updates but is currently working on Ventura 13.1. Here is an example that will retrieve the song name: [](https://i.stack.imgur.com/xPs3H.png) [](https://i.stack.imgur.com/H4OZw.png)
null
CC BY-SA 4.0
null
2023-02-07T00:26:25.353
2023-02-07T00:26:25.353
null
null
4,077,203
null
75,367,994
2
null
75,367,905
0
null
You have `width={"100%"} height={"100%"}`, and the text is making the container bigger, so it's logical that the 100% resolves to a bigger size when the container grows with the text. If you don't want that, change the width and height to fixed values instead of percentages, so the element doesn't follow changes in the container's size.
null
CC BY-SA 4.0
null
2023-02-07T00:54:54.807
2023-02-07T00:54:54.807
null
null
14,818,875
null
75,368,241
2
null
75,367,751
2
null
I think you are looking for something like (in Bash) ``` [[ $(git merge-base X Y) = $(git rev-parse Y) ]] && echo yes || echo no ``` `git merge-base X Y` finds the best common ancestor and prints its full sha1 value. As to what a best common ancestor is, the [doc](https://www.git-scm.com/docs/git-merge-base#_description) says that > One common ancestor is better than another common ancestor if the latter is an ancestor of the former. A common ancestor that does not have any better common ancestor is a best common ancestor. In most cases, the best common ancestor is the nearest common commit of the 2 branches away from both heads. `git rev-parse Y` prints the sha1 value of Y's head. If the best common ancestor is the same as Y's head, it means that `every commit in branch X has Y as an ancestor (or is equal to Y or is one of Y's ancestors)`. In other words, the set of Y's commits is a subset of the set of X's commits. But in practice, Y in the remote repository could be being updated by others and the local Y could get updated unintentionally. The test would say no even if X really was a descendant of Y during the period after X was created and before Y was updated.
null
CC BY-SA 4.0
null
2023-02-07T01:46:00.990
2023-02-07T02:54:53.293
2023-02-07T02:54:53.293
6,330,106
6,330,106
null
75,368,320
2
null
75,279,701
0
null
First of all, update all of the NuGet packages in your project to the latest versions. Then close your VS, delete the 'bin' and 'obj' folders, rebuild and run your project, and check whether it works as normal.
null
CC BY-SA 4.0
null
2023-02-07T02:03:31.157
2023-02-07T02:03:31.157
null
null
19,818,926
null
75,368,810
2
null
75,355,612
0
null
I have discovered the error. In my AppliersControllers, at the [HttpGet("{id}")] section, I used GetAll instead of Get. This made it so that I was loading the entire list instead of just the data for the given Id.
null
CC BY-SA 4.0
null
2023-02-07T03:52:01.053
2023-02-07T03:52:01.053
null
null
20,963,345
null
75,368,861
2
null
75,368,621
0
null
You can use django's [check_password](https://docs.djangoproject.com/en/4.1/topics/auth/customizing/#django.contrib.auth.models.AbstractBaseUser.check_password) method to do this. Note that it must be called on a user instance. Example - ``` def clean_password(self): password = self.cleaned_data.get('password') user = User.objects.get(username=self.cleaned_data.get('username')) if not user.check_password(password): raise forms.ValidationError('Incorrect password') return password ```
null
CC BY-SA 4.0
null
2023-02-07T04:04:05.643
2023-02-07T04:04:05.643
null
null
16,475,089
null
75,369,041
2
null
75,367,751
5
null
One way to check whether all commits in `Y..X` are descendants of `Y` is to check the boundary of that range using `git log --boundary Y..X` or `git rev-list --boundary Y..X`. Starting from this history : ``` $ git log --graph --oneline --all * 036a9f9 (HEAD -> X) create d.txt * cadd199 create c.txt | * 0680934 (Z) Merge commit '22a23fe' into Z |/| | * 22a23fe create b.txt * | 8dec744 (Y) create a.txt |/ * 878ac8b first commit ``` You will get : ``` $ git log --oneline --boundary Y..X 036a9f9 (HEAD -> X) create d.txt cadd199 create c.txt - 8dec744 (Y) create a.txt # <- one single boundary commit, pointing at Y $ git log --oneline --boundary Y..Z 0680934 (Z) Merge commit '22a23fe' into Z 22a23fe create b.txt - 8dec744 (Y) create a.txt # <- two commits on the boundary - 878ac8b first commit # <- ``` A scriptable way to check if you are in this situation is : ``` # 'git rev-list' prints full hashes, boundary commits are prefixed with '-' boundary=$(git rev-list --boundary Y..X | grep -e '^-') want=$(git rev-parse Y) want="-$want" # the boundary should consist of "-<hash of Y>" only: if [ "$boundary" = "$want" ]; then echo "all commits in X are descendants of Y" fi ``` --- The above just checks that all commits come after `Y`. You could also be faced with the following situation : ``` * 036a9f9 (HEAD -> X) create d.txt * 0680934 Merge 'origin/X' into X # <- someone created a merge commit in between |\ | * cadd199 create c.txt * | 22a23fe create b.txt |/ * 8dec744 (Y) create a.txt * 878ac8b first commit ``` and this would also get in the way of a rebase workflow. 
If you also want to rule this out, use the `--merges` option of `git log` or `git rev-list` : ``` # git rev-list also has a --count option, which will output the count # rather than the complete list of commits merges=$(git rev-list --count --merges Y..X) if [ "$merges" -eq 0 ]; then echo "all good, no merges between Y and X" fi ``` --- The [documentation for --boundary](https://git-scm.com/docs/git-log#Documentation/git-log.txt---boundary) does not give a good explanation of what a "boundary commit" is. I would say [this SO answer](https://stackoverflow.com/questions/42437590/what-is-a-git-boundary-commit) has a decent definition: > A boundary commit is the commit that limits a revision range but does not belong to that range. For example the revision range HEAD~3..HEAD consists of 3 commits (HEAD~2, HEAD~1, and HEAD), and the commit HEAD~3 serves as a boundary commit for it.More formally, git processes a revision range by starting at the specified commit and getting at other commits through the parent links. It stops at commits that don't meet the selection criteria (and therefore should be excluded) - those are the boundary commits.
null
CC BY-SA 4.0
null
2023-02-07T04:52:10.817
2023-02-08T05:25:01.913
2023-02-08T05:25:01.913
86,072
86,072
null
75,369,367
2
null
75,368,182
0
null
Importing the packages to be used: ``` import pandas import matplotlib.pyplot as plt import seaborn as sns import numpy as np import matplotlib.patches as mpatches import matplotlib.colors as mcolors ``` First, you need to convert all the NaN values to 0 in your pandas dataframe. ``` df = df.replace(np.nan, 0) ``` You need a colormap to map the values to colors, starting from 0. You can create one using the `matplotlib.colors.LinearSegmentedColormap` class; change the colors as you wish. ``` cmap = mcolors.LinearSegmentedColormap.from_list("", ["white","red","yellow","blue"]) ``` Then, you can draw the heatmap using the `seaborn.heatmap` function, passing `annot=True` with `fmt=''` to display the values in the cells. ``` sns.heatmap(df, cmap=cmap, linewidths=0.5, linecolor='black', annot=True, fmt='') plt.show() ``` Alternatively, if you want to draw the heatmap without the color axis, you can set the `cbar` attribute to `False`. Instead you can add a legend with specific colors for each value to the plot. ``` legend = ['1', '2', '3'] colors = ['red', 'yellow', 'blue'] patches = [mpatches.Patch(color=colors[i], label=legend[i]) for i in range(len(legend))] sns.heatmap(df, cmap=cmap, linewidths=0.5, linecolor='black', annot=True, fmt='', cbar=False) plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) ```
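As a quick, standalone check of the colormap construction (needing only matplotlib, no plotting window): the two ends of the list-based colormap map to the first and last listed colors.

```python
import matplotlib.colors as mcolors

cmap = mcolors.LinearSegmentedColormap.from_list("", ["white", "red", "yellow", "blue"])
low = cmap(0.0)   # RGBA at the bottom of the scale, i.e. white
high = cmap(1.0)  # RGBA at the top of the scale, i.e. blue
print(low, high)
```

This is why mapping the NaNs to 0 first matters: they all land on the "white" end of the scale.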
null
CC BY-SA 4.0
null
2023-02-07T05:52:17.857
2023-02-07T05:52:17.857
null
null
9,815,919
null
75,369,617
2
null
75,367,905
0
null
You can set `margin: 0` for the h3 tags. Heading tags have a default margin, so you have to set it to 0 manually: ``` .text-center h3 { margin: 0; } ```
null
CC BY-SA 4.0
null
2023-02-07T06:32:27.300
2023-02-07T06:32:27.300
null
null
20,069,966
null
75,369,834
2
null
24,838,399
0
null
You can change your selected date color by creating a selector. Create a selector file: your_selector.xml ``` <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_activated="true" android:color="@color/white" /> <item android:color="@color/color_black" /> </selector> ``` Use your selector in the style: ``` <style name="CalenderViewDateCustomText"> <item name="colorControlNormal">@color/white</item> <item name="colorControlActivated">@color/white</item> <item name="colorControlHighlight">@color/white</item> <item name="android:textColor">@drawable/your_selector</item> </style> ``` Use it in the calendar view: ``` <CalendarView android:layout_width="match_parent" android:layout_height="wrap_content" android:theme="@style/CalenderViewCustom" android:dateTextAppearance="@style/CalenderViewDateCustomText" android:weekDayTextAppearance="@style/CalenderViewWeekCustomText" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" /> ``` [calendar example](https://i.stack.imgur.com/b6c7B.png)
null
CC BY-SA 4.0
null
2023-02-07T07:00:37.667
2023-02-07T07:00:37.667
null
null
13,164,822
null
75,369,968
2
null
75,365,756
1
null
You have used `Label` widgets to show the images of the board and chess pieces, but `Label` does not support a transparent background. As your second example shows, you can use `Canvas` to show those transparent images. Below are the modified `boardtoimage()` and `CMI_clicked()` functions to use `Canvas` instead of `Label`: ``` ... def boardtoimage(root, boardstr): #places all pieces onto the window graphically piece_mapping = { 'r': bRook, 'n': bKnight, 'b': bBishop, 'q': bQueen, 'k': bKing, 'p': bPawn, 'R': wRook, 'N': wKnight, 'B': wBishop, 'Q': wQueen, 'K': wKing, 'P': wPawn, } for y in range(8): for x in range(8): piece = boardtoarr(boardstr)[y][x] # get the corresponding piece image image = piece_mapping.get(piece, None) if image: xcoord = topleftpixels(x,y)[0] ycoord = topleftpixels(x,y)[1] # show the piece image canvas.create_image(xcoord, ycoord, image=image, anchor="nw") def CMI_clicked(): #check if the chess menu image was clicked global canvas for widget in root.winfo_children(): #code to clear page widget.destroy() # create the canvas to show those transparent images canvas = tk.Canvas(root, width=BoardImage.width(), height=BoardImage.height(), highlightthickness=0) canvas.pack() # show the chess board image canvas.create_image(0, 0, image=BoardImage, anchor="nw") board.push_san('e2e4') boardtoimage(root,str(board)) root.bind('<Button-1>', squareclicked) ... ``` Result: [](https://i.stack.imgur.com/E8URX.png)
null
CC BY-SA 4.0
null
2023-02-07T07:15:20.153
2023-02-07T07:15:20.153
null
null
5,317,403
null
75,370,248
2
null
75,369,945
2
null
There is probably a better way, but if you first concat the 4 columns with specific unique delimiter to split on later in a custom column, you have a work-around in PQ: [](https://i.stack.imgur.com/UXmP3.png) ``` let Source = Excel.CurrentWorkbook(){[Name="Table2"]}[Content], #"Changed Type" = Table.TransformColumnTypes(Source,{{"ID", type text}, {"Type1", type text}, {"Type1 Val", Int64.Type}, {"Type2", type text}, {"Type2 Val", Int64.Type}}), #"Added Custom" = Table.AddColumn(#"Changed Type", "Custom1", each [Type1]&"|"&Number.ToText([Type1 Val])&"$"&[Type2]&"|"&Number.ToText([Type2 Val])), #"Removed Columns" = Table.RemoveColumns(#"Added Custom",{"Type1", "Type1 Val", "Type2", "Type2 Val"}), #"Split Column by Delimiter" = Table.ExpandListColumn(Table.TransformColumns(#"Removed Columns", {{"Custom1", Splitter.SplitTextByDelimiter("$", QuoteStyle.Csv), let itemType = (type nullable text) meta [Serialized.Text = true] in type {itemType}}}), "Custom1"), #"Changed Type1" = Table.TransformColumnTypes(#"Split Column by Delimiter",{{"Custom1", type text}}), #"Split Column by Delimiter1" = Table.SplitColumn(#"Changed Type1", "Custom1", Splitter.SplitTextByEachDelimiter({"|"}, QuoteStyle.Csv, false), {"Custom1.1", "Custom1.2"}), #"Changed Type2" = Table.TransformColumnTypes(#"Split Column by Delimiter1",{{"Custom1.1", type text}, {"Custom1.2", Int64.Type}}) in #"Changed Type2" ``` --- Just in case you tagged 'Excel-Formula' and you have access to ms365: [](https://i.stack.imgur.com/cAFVI.png) Formula in `H1`: ``` =REDUCE({"ID","Type","Val"},ROW(A2:A5),LAMBDA(X,Y,VSTACK(X,INDEX(A:E,Y,{1,2,3}),INDEX(A:E,Y,{1,4,5})))) ```
null
CC BY-SA 4.0
null
2023-02-07T07:44:23.140
2023-02-07T07:49:40.260
2023-02-07T07:49:40.260
9,758,194
9,758,194
null
75,370,316
2
null
75,369,945
1
null
Or formula: `=SORT(VSTACK(A2:C5,HSTACK(A2:A5,D2:E5)))`
null
CC BY-SA 4.0
null
2023-02-07T07:50:59.853
2023-02-07T07:50:59.853
null
null
12,634,230
null
75,370,553
2
null
75,360,112
0
null
As I said in my comment, I doubt this is possible. If you can live with a different structure that more or less has the same properties regarding visual appearance and editability (does that word exist?), what I do is: - - This takes care of the visual appearance. To insert items inbetween manually later, one can insert a row into the table, but it's necessary to re-number the following items manually.
null
CC BY-SA 4.0
null
2023-02-07T08:17:05.953
2023-02-07T08:17:05.953
null
null
2,814,025
null
75,370,824
2
null
75,370,692
0
null
1. Make sure your cert is in PEM format. 2. Pass the file path to the `cert` parameter (a bare filename means your PEM file is in the current directory). [https://requests.readthedocs.io/en/latest/user/advanced/?highlight=cert#client-side-certificates](https://requests.readthedocs.io/en/latest/user/advanced/?highlight=cert#client-side-certificates)
null
CC BY-SA 4.0
null
2023-02-07T08:42:15.523
2023-02-07T08:42:15.523
null
null
12,105,008
null
75,371,296
2
null
30,884,350
0
null
``` $(document).on("ready pjax:success", function () { alert("Write your code here...."); }); ```
null
CC BY-SA 4.0
null
2023-02-07T09:27:42.327
2023-02-07T09:27:42.327
null
null
7,886,372
null
75,371,374
2
null
19,577,299
0
null
The solution below is an update to @Hesam's [answer](https://stackoverflow.com/a/22758359/10989990), which addresses the following: 1. The final view is distorted on a few screens 2. The preview is laid out at the very top of the screen, not centered, which looks very weird. --- You just have to update `onMeasure` like this; everything else remains the same. ``` @Override protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) { final int width = resolveSize(getSuggestedMinimumWidth(), widthMeasureSpec); final int height = resolveSize(getSuggestedMinimumHeight(), heightMeasureSpec); //centralize preview FrameLayout.LayoutParams surfaceParams = new FrameLayout.LayoutParams(width, (int) height); surfaceParams.gravity = Gravity.CENTER; this.setLayoutParams(surfaceParams); if (mSupportedPreviewSizes != null) { mPreviewSize = getOptimalPreviewSize(mSupportedPreviewSizes, width, height); } if (mPreviewSize != null) { float ratio; if (mPreviewSize.height >= mPreviewSize.width) ratio = (float) mPreviewSize.height / (float) mPreviewSize.width; else ratio = (float) mPreviewSize.width / (float) mPreviewSize.height; setMeasuredDimension(width, (int) (width * ratio)); //fix distortion, based on this answer https://stackoverflow.com/a/30634009/6688493 float camHeight = (int) (width * ratio); float newCamHeight; float newHeightRatio; if (camHeight < height) { newHeightRatio = (float) height / (float) mPreviewSize.height; newCamHeight = (newHeightRatio * camHeight); Log.e(TAG, camHeight + " " + height + " " + mPreviewSize.height + " " + newHeightRatio + " " + newCamHeight); setMeasuredDimension((int) (width * newHeightRatio), (int) newCamHeight); Log.e(TAG, mPreviewSize.width + " | " + mPreviewSize.height + " | ratio - " + ratio + " | H_ratio - " + newHeightRatio + " | A_width - " + (width * newHeightRatio) + " | A_height - " + newCamHeight); } else { newCamHeight = camHeight; setMeasuredDimension(width, (int) newCamHeight); Log.e(TAG, mPreviewSize.width + " | " + 
mPreviewSize.height + " | ratio - " + ratio + " | A_width - " + (width) + " | A_height - " + newCamHeight); } } } ```
null
CC BY-SA 4.0
null
2023-02-07T09:33:15.470
2023-02-07T09:33:15.470
null
null
6,688,493
null
75,371,607
2
null
53,452,674
0
null
I had a similar issue. What I did was delete all the browsing data from the date I started using Google Colab. I don't know why, but this actually worked. I hope this answer helps someone.
null
CC BY-SA 4.0
null
2023-02-07T09:52:21.980
2023-02-07T09:52:21.980
null
null
7,521,298
null
75,372,150
2
null
75,372,143
0
null
Use [to_timedelta](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_timedelta.html) with casting times to strings and add to column `Date`: ``` df['dt'] = df['Date']+pd.to_timedelta(df['Time'].astype(str)) ```
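For instance, with a hypothetical frame holding datetime `Date` values and `datetime.time` values in `Time` (column names assumed from the question):

```python
import datetime
import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime(["2023-02-07", "2023-02-08"]),
    "Time": [datetime.time(10, 30, 0), datetime.time(23, 15, 30)],
})

# time objects stringify as "HH:MM:SS", which to_timedelta parses,
# so adding the resulting timedeltas to Date yields full timestamps
df["dt"] = df["Date"] + pd.to_timedelta(df["Time"].astype(str))
print(df["dt"].tolist())
```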
null
CC BY-SA 4.0
null
2023-02-07T10:36:27.093
2023-02-07T10:36:27.093
null
null
2,901,002
null
75,372,226
2
null
75,369,945
3
null
More generically to stack vertically in powerquery while keeping certain columns ``` let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content], base_columns=1, groupsof=2, //stack them Combo = List.Transform(List.Split(List.Skip(Table.ColumnNames(Source),base_columns),groupsof), each List.FirstN(Table.ColumnNames(Source),base_columns) & _), #"Added Custom" =List.Accumulate(Combo, #table({"Column1"}, {}),(state,current)=> state & Table.Skip(Table.DemoteHeaders(Table.SelectColumns(Source, current)),1)), #"Rename"=Table.RenameColumns(#"Added Custom",List.Zip({Table.ColumnNames(#"Added Custom"),List.FirstN(Table.ColumnNames(Source),base_columns+groupsof)})) in #"Rename" ``` What seems to be fastest method of those I've tested ``` let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content], leading=1, groupsof=2, #"Added Custom" = Table.AddColumn(Source, "Custom", each List.Split( List.RemoveFirstN(Record.ToList( _),leading), groupsof) ), #"Added Custom0" = Table.AddColumn(#"Added Custom", "Custom0", each Text.Combine(List.FirstN(Record.ToList(_),leading),"|")), #"Removed Other Columns" = Table.SelectColumns(#"Added Custom0",{"Custom0", "Custom"}), #"Expanded Custom" = Table.ExpandListColumn( #"Removed Other Columns", "Custom"), #"Extracted Values" = Table.TransformColumns(#"Expanded Custom", {"Custom", each Text.Combine(List.Transform(_, Text.From), "|"), type text}), #"Merged Columns" = Table.CombineColumns(#"Extracted Values",{"Custom0", "Custom"},Combiner.CombineTextByDelimiter("|", QuoteStyle.None),"Custom"), #"Split Column by Delimiter" = Table.SplitColumn(#"Merged Columns", "Custom", Splitter.SplitTextByDelimiter("|", QuoteStyle.Csv), List.FirstN(Table.ColumnNames(Source),leading+groupsof)) in #"Split Column by Delimiter" ```
null
CC BY-SA 4.0
null
2023-02-07T10:43:01.160
2023-02-13T22:20:04.390
2023-02-13T22:20:04.390
9,264,230
9,264,230
null
75,372,309
2
null
69,862,216
-1
null
``` item(span = { GridItemSpan(2) }) { /* item code here */ } ```
null
CC BY-SA 4.0
null
2023-02-07T10:50:46.263
2023-02-07T10:50:46.263
null
null
4,204,340
null
75,372,404
2
null
10,048,060
0
null
2022, macOS, Swift: open the "Get Info" window of Finder: ``` func openGetInfoWnd(for url: URL) { openGetInfoWnd(for: [url]) } func openGetInfoWnd(for urls: [URL]) { let pBoard = NSPasteboard(name: NSPasteboard.Name(rawValue: "pasteBoard_\(UUID().uuidString)")) pBoard.writeObjects(urls as [NSPasteboardWriting]) NSPerformService("Finder/Show Info", pBoard) } ```
null
CC BY-SA 4.0
null
2023-02-07T10:58:13.437
2023-02-07T10:58:13.437
null
null
4,423,545
null
75,372,430
2
null
70,622,649
0
null
In `androidx.navigation:navigation-compose:2.5.3` the `decorFitsSystemWindows` property of `DialogProperties` was added, with a default value of true. This, in combination with `usePlatformDefaultWidth = false` and `Modifier.fillMaxSize()`, should fix your problem. Here is an example: ``` dialog( route = "your route", dialogProperties = DialogProperties( usePlatformDefaultWidth = false, decorFitsSystemWindows = true /* passing this explicitly isn't mandatory; it's shown for clarification purposes */ ) ) { Column(Modifier.fillMaxSize()) { /* your screen here */ } } ```
null
CC BY-SA 4.0
null
2023-02-07T11:01:05.597
2023-02-07T11:01:05.597
null
null
14,741,293
null
75,372,481
2
null
75,361,013
0
null
You only need to wrap your content widget with two scrolling widgets and set one for vertical scroll direction (default) and the other for horizontal scroll direction ``` Container( width: 300, height: 300, child: SingleChildScrollView( //this will scroll horizontal scrollDirection: Axis.horizontal, child: SingleChildScrollView( //this will scroll vertical child: LargeContainer(), //replace large container with your grid ), ), ); ```
null
CC BY-SA 4.0
null
2023-02-07T11:06:17.887
2023-02-07T11:06:17.887
null
null
20,367,275
null
75,372,674
2
null
20,070,333
0
null
Swift version of [Parag Bafna's excellent answer](https://stackoverflow.com/a/20072869/775083), using a `[CChar]` buffer so the raw bytes can be written safely: ``` var deviceName: String { var str = "Unknown Device" var len = 0 sysctlbyname("hw.model", nil, &len, nil, 0) if len > 0 { var buffer = [CChar](repeating: 0, count: len) sysctlbyname("hw.model", &buffer, &len, nil, 0) str = String(cString: buffer) } return str } ```
null
CC BY-SA 4.0
null
2023-02-07T11:23:40.093
2023-02-07T11:23:40.093
null
null
775,083
null
75,373,031
2
null
75,367,891
0
null
You can use this parser's source code and build your own solution on top of it: [https://github.com/SafranCassiopee/php-metar-decoder](https://github.com/SafranCassiopee/php-metar-decoder) — or, the simple way, just read the fields directly: ``` $arr = '$yourDataArrayHere$'; $metarRw = $arr['message']; echo 'Metar message '.$metarRw; ``` ``` $airportName = $arr['airport']['name']; echo 'Airport name '.$airportName; ``` etc.
null
CC BY-SA 4.0
null
2023-02-07T11:58:02.930
2023-02-07T12:11:36.413
2023-02-07T12:11:36.413
20,981,816
20,981,816
null
75,373,125
2
null
12,813,573
0
null
The first step is to have 6 long columnar boxes: [](https://i.stack.imgur.com/yrfzc.png) The second step is to use `position: absolute` and move them all into the middle of your container: [](https://i.stack.imgur.com/3OTa9.png) And now rotate them around the pivot point located at the `bottom center`. Use `:nth-child` to vary rotation angles (note that `:nth-child` is 1-based, so the loop runs from 1 through 6): ``` div { transform-origin: bottom center; @for $n from 1 through 6 { &:nth-child(#{$n}) { rotate: (360deg / 6) * $n; } } } ``` [](https://i.stack.imgur.com/HmU6r.png) Now all you have to do is to locate your images at the far end of every column, and compensate the rotation with an anti-rotation :) Full source: ``` <div class="flower"> <div class="petal">1</div> <div class="petal">2</div> <div class="petal">3</div> <div class="petal">4</div> <div class="petal">5</div> <div class="petal">6</div> </div> ``` ``` .flower { width: 300px; height: 300px; // We need a relative position // so that children can have "position: absolute" position: relative; .petal { // Make sure petals are visible border: 1px solid #999; // Position them all in one point position: absolute; top: 0; left: 50%; display: inline-block; width: 30px; height: 150px; // Rotation transform-origin: bottom center; @for $n from 1 through 6 { &:nth-child(#{$n}) { // Petal rotation $angle: (360deg / 6) * $n; rotate: $angle; // Icon anti-rotation .icon { rotate: -$angle; } } } } } ``` See [CodePen](https://codepen.io/kolypto/pen/LYBvJdb)
null
CC BY-SA 4.0
null
2023-02-07T12:07:48.613
2023-02-07T12:13:51.970
2023-02-07T12:13:51.970
134,904
134,904
null
75,373,142
2
null
54,236,111
0
null
For some reason, the above answer did not work for me. I do not know why. What worked for me is as follows: ``` cax2 = fig.add_axes([<xposition>, <yposition>, <xlength>, <ylength>]) cax21 = cax2.twinx() cax2.set_ylabel('right-label',size=<right_label_size>) cax2.tick_params(labelsize=<right_tick_size>) ''' These did not work for me cbar21.ax.yaxis.set_ticks_position('left') cbar21.ax.yaxis.set_label_position('left') ''' # This worked. cax21.yaxis.tick_left() cax21.yaxis.label_position='left' cax21.set_ylim(<minVal>,<maxVal>,<step>) cax21.set_ylabel("left-label",size=<left_label_size>) cax21.tick_params(labelsize=<left_tick_size>) ``` Hopefully, this helps.
null
CC BY-SA 4.0
null
2023-02-07T12:09:17.673
2023-02-15T15:33:35.860
2023-02-15T15:33:35.860
6,570,411
6,570,411
null
75,373,178
2
null
15,021,006
0
null
Add ``` #import <GoogleMaps/GoogleMaps.h> ``` at the top of AppDelegate.m, then add ``` [GMSServices provideAPIKey:@"IOS_GOOGLE_MAPS_API_KEY"]; ``` inside `didFinishLaunchingWithOptions`. Then add the lines below to your Podfile: ``` rn_maps_path = '../node_modules/react-native-maps' pod 'react-native-google-maps', :path => rn_maps_path ``` This will work.
null
CC BY-SA 4.0
null
2023-02-07T12:11:46.613
2023-02-07T12:11:46.613
null
null
16,109,026
null
75,373,297
2
null
75,335,136
0
null
The error is solved: ``` import React,{useState} from 'react'; import {View, StyleSheet, Text, Button, Dimensions, TouchableOpacity} from 'react-native'; // import {LineChart} from "react-native-chart-kit"; import { LineChart, XAxis, YAxis } from 'react-native-svg-charts'; let {height, width} = Dimensions.get("window"); // (below) make it as wide as the screen const GraphComponent_1 = (props) => { const {pinnedMeasurements, Labelss} = props; const data = [ 50, 10, 40, 95, -4, -24, 85, 91, 35, 53, -53, 24 ] const xLabels = [ 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec' ] const yLabels = [ 'Poor', 'Fair', 'Good', 'Very Good', 'Excellent' ] return ( <View style={{ height: 200, margin: 20, marginTop: 60, flexDirection: "row" }}> <YAxis style={{ marginRight: 10 }} svg={{ fill: "grey", fontSize: 10, }} contentInset={{ top: 20, bottom: 20 }} data={data} min={-60} max={100} numberOfTicks={yLabels.length} formatLabel={(value, index) => yLabels[index]} /> <View style={{flex: 1,}}> <LineChart data={data} style={{ flex: 1 }} svg={{ stroke: "rgb(134, 65, 244)" }} contentInset={{ top: 20, bottom: 20 }} /> <XAxis data={xLabels} contentInset={{ left: 10, right: 10 }} svg={{ fill: "grey", fontSize: 10, }} numberOfTicks={xLabels.length} formatLabel={(value, index) => xLabels[index]} /> </View> </View> ) } const styles = StyleSheet.create({ }); // formatLabel={(value) => `${value}ºC`} export default GraphComponent_1; ```
null
CC BY-SA 4.0
null
2023-02-07T12:22:55.560
2023-02-07T12:23:56.750
2023-02-07T12:23:56.750
21,140,122
21,140,122
null
75,373,327
2
null
75,369,945
3
null
In Power Query, the following is adaptable to any number of type/value column pairs. ``` //credit: Cam Wallace https://www.dingbatdata.com/2018/03/08/non-aggregate-pivot-with-multiple-rows-in-powerquery/ //Rename: fnPivotAll (Source as table, ColToPivot as text, ColForValues as text)=> let PivotColNames = List.Buffer(List.Distinct(Table.Column(Source,ColToPivot))), #"Pivoted Column" = Table.Pivot(Source, PivotColNames, ColToPivot, ColForValues, each _), TableFromRecordOfLists = (rec as record, fieldnames as list) => let PartialRecord = Record.SelectFields(rec,fieldnames), RecordToList = Record.ToList(PartialRecord), Table = Table.FromColumns(RecordToList,fieldnames) in Table, #"Added Custom" = Table.AddColumn(#"Pivoted Column", "Values", each TableFromRecordOfLists(_,PivotColNames)), #"Removed Other Columns" = Table.RemoveColumns(#"Added Custom",PivotColNames), #"Expanded Values" = Table.ExpandTableColumn(#"Removed Other Columns", "Values", PivotColNames) in #"Expanded Values" ``` ``` let Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content], #"Changed Type" = Table.TransformColumnTypes(Source,{ {"ID", type text}, {"Type 1", type text}, {"Type 1 Value", Int64.Type}, {"Type 2", type text}, {"Type 2 Value", Int64.Type}}), #"Unpivoted Other Columns" = Table.UnpivotOtherColumns(#"Changed Type", {"ID"}, "Attribute", "Value"), #"Added Custom" = Table.AddColumn(#"Unpivoted Other Columns", "Custom", each if Text.EndsWith([Attribute],"Value") then "Value" else "Type"), #"Removed Columns" = Table.RemoveColumns(#"Added Custom",{"Attribute"}), #"Added Index" = Table.AddIndexColumn(#"Removed Columns", "Index", 0, 1, Int64.Type), #"Inserted Integer-Division" = Table.AddColumn(#"Added Index", "Integer-Division", each Number.IntegerDivide([Index], 2), Int64.Type), #"Removed Columns1" = Table.RemoveColumns(#"Inserted Integer-Division",{"Index"}), Pivot = fnPivotAll(#"Removed Columns1","Custom","Value"), #"Removed Columns2" = Table.RemoveColumns(Pivot,{"Integer-Division"}), #"Changed Type1" = Table.TransformColumnTypes(#"Removed Columns2",{{"Type", type text}, {"Value", Int64.Type}}) in #"Changed Type1" ``` [](https://i.stack.imgur.com/6TPnk.png)
null
CC BY-SA 4.0
null
2023-02-07T12:25:02.030
2023-02-07T12:25:02.030
null
null
2,872,922
null
75,373,421
2
null
72,833,627
0
null
You should be able to use [Conditional Types](https://www.typescriptlang.org/docs/handbook/2/conditional-types.html) for this: ``` type Possibilities = "a" | "b" | "c" type OnlyPossibilities<P extends string | number | symbol, T> = T extends { [key in P]?: any } ? T : never type Foo = OnlyPossibilities<Possibilities, { a: number }> type Bar = OnlyPossibilities<Possibilities, { d: number }> ``` (Link to the [TS Playground](https://www.typescriptlang.org/play?#code/C4TwDgpgBACg9gZwQSwEbIDbOMiCoC8UARAIbFQA+JqF1xAxsQLABQbokUA8gHYYh4SNJmy4EAHhhQIAD2AReAE3wJgAJ2S8A5lSi8ArgFtUEdXoQgTcDABooAFQB8hRzPmKVUAN5soUAG0AawgQKC1YAF0AfgAuKFJeEDYAXyhot3jeCAA3MzYOcGgAMTg4Vz4BIRR0LBw8KUQa0XqEe19Wf1Is41N1VKcC1k5oACFScyJKwSaROvFG4VqxPHa-KCUek3zWFMHWIA)) Here I am declaring a helper type which accepts both the possibilities, `P`, and the type you want to check, `T`. `Foo` will correctly be of type: ``` type Foo = { a: number } ``` ...but `Bar` will be: ``` type Bar = never ``` thus "not allowing any values". You can also get rid of `P`, I just used it to make the helper type customizable: ``` type Possibilities = "a" | "b" | "c" type OnlyPossibilities<T> = T extends { [key in Possibilities]?: any } ? T : never ``` If you have `any` banned, you should be able to use `unknown` instead.
null
CC BY-SA 4.0
null
2023-02-07T12:32:59.363
2023-02-07T12:32:59.363
null
null
4,759,433
null
75,373,753
2
null
75,365,616
0
null
It seems like the issue was with the database. I created a new project with a new table and it runs flawlessly now. Still, I couldn't figure out what caused Supabase to send different timestamps on different runs for the same data, but creating a completely new table and relinking it with the other system solved it.
null
CC BY-SA 4.0
null
2023-02-07T13:02:12.663
2023-02-07T13:02:12.663
null
null
21,160,083
null
75,373,919
2
null
75,350,840
16
null
I had the same issue using VSCode on macOS. VSCode's 'Jupyter' plugin was broken, leaving VSCode unable to bind to the Python interpreter. Downgrading from `v2023.1.2000312134` to `v2022.11.1003412109` fixed my issue. [](https://i.stack.imgur.com/n5ntL.jpg)
null
CC BY-SA 4.0
null
2023-02-07T13:16:42.307
2023-02-07T13:16:42.307
null
null
4,074,725
null
75,373,931
2
null
75,367,095
0
null
As the error is telling you, your method needs to have the attribute `[PunRPC]`. You have it only on the `RPC_play` method but not on the `RPC_playAnim` the error is referring to! Each method (or member in general) has and requires its own attribute(s). ``` [PunRPC] void RPC_sound() { nukeSound.Play(); } [PunRPC] void RPC_playAnim() { animator.Play(nuke, 0, 0.0f); } ``` What basically happens is that at compile time Photon goes through all types and checks if there are any methods attributed with `[PunRPC]`; if so, it bakes those methods into a dictionary, so later it just passes the corresponding key over the network and can thereby find the method on the receiver side. Btw, in general, to avoid typos I personally prefer not to hard-code the names but rather use e.g. ``` PV.RPC(nameof(RPC_playAnim), RpcTarget.AllBuffered); ``` --- Further, it could also be that this component is not attached to the same object as `GetScript.pView`, which is the other half of the error message. The component using `PhotonRPC` needs to be attached to the same `GameObject` as the `PhotonView`, otherwise the receiving client has no chance to find the corresponding component.
null
CC BY-SA 4.0
null
2023-02-07T13:18:24.270
2023-02-08T07:44:30.213
2023-02-08T07:44:30.213
7,111,561
7,111,561
null
75,373,943
2
null
25,120,736
0
null
The answers in this question may be helpful: [How to fix empty space between a border and a background in button with rounded corners?](https://stackoverflow.com/questions/18581204/how-to-fix-empty-space-between-a-border-and-a-background-in-button-with-rounded) > WPF renders the elements with anti-aliasing by default and this can result in small gaps between shapes. Set the `EdgeMode` to `Aliased` on your `Border`; this should get rid of the small gap: ``` RenderOptions.EdgeMode="Aliased" ```
null
CC BY-SA 4.0
null
2023-02-07T13:19:28.913
2023-02-07T13:19:28.913
null
null
8,604,852
null
75,374,298
2
null
75,374,202
0
null
The `MaterialState` enum provides interactive states that some of the Material widgets can take on when receiving input from the user. The `MaterialStateProperty<T>` class is an interface for classes that resolve to a value of type T based on a widget's interactive "state". For more clarification, please refer to the static methods under the MaterialStateProperty class [here](https://api.flutter.dev/flutter/material/MaterialStateProperty-class.html). If you have a single value for all states (say 10.0 as the double value), you can set the elevation as follows: ``` class RoundIconButton extends StatelessWidget { const RoundIconButton({Key? key}) : super(key: key); @override Widget build(BuildContext context) { return ButtonStyleButton( onPressed: () {}, style: ButtonStyle( elevation: MaterialStateProperty.all(10), ), ); } } ```
null
CC BY-SA 4.0
null
2023-02-07T13:50:49.800
2023-02-07T13:50:49.800
null
null
11,690,853
null
75,374,413
2
null
75,374,281
0
null
If you are using `grob = linesGrob()` without passing any arguments to `linesGrob`, the result will always slope upwards left-to-right, because its default `x` argument is set to `c(0, 1)` and its default `y` argument is set to `c(0, 1)` - in a sense it is just a square picture of a 45 degree line sloping upwards left-to-right. By rescaling its x and y dimensions you can convert it to anything from a horizontal to a vertical line, but it will always be upsloping left-to-right between these two extremes. However, the x and y co-ordinates of a `linesGrob` can be changed to whatever you like, to draw arbitrary line shapes. The grob is drawn on whatever rectangle you set according to `ymin`, `ymax`, `xmin` and `xmax`. The coordinates you set in `linesGrob` represent the space within this rectangle, with `x = 0, y = 0` being the bottom left corner, and `x = 1, y = 1` being the top right corner. Just change the default y values to `c(1, 0)`, and we have a downsloping line that you can then rescale with `annotation_custom`: ``` p1 + theme(plot.margin = unit(c(0,5,0,0), "cm")) + annotation_custom(grob = linesGrob(), xmin =5.2, ymin = 100, xmax = 7, ymax = 200) + annotation_custom(grob = linesGrob(y = c(1, 0)), xmin =7, ymin = 0, xmax = 5.2, ymax = 50) + coord_cartesian(clip = "off") + p2 ``` [](https://i.stack.imgur.com/zlpze.png)
null
CC BY-SA 4.0
null
2023-02-07T13:59:51.603
2023-02-07T14:11:41.753
2023-02-07T14:11:41.753
12,500,315
12,500,315
null
75,374,712
2
null
15,764,242
0
null
After scratching my head over this for a good 15 minutes, I realized: relative paths to images do work, but when you're writing a markdown file directly from the GitHub web app, the images don't show up in the preview. Once you commit the file, the images are visible as expected.
null
CC BY-SA 4.0
null
2023-02-07T14:27:22.027
2023-02-07T14:27:22.027
null
null
9,420,717
null
75,374,735
2
null
75,248,307
1
null
You need to group the clauses. Tick all 3 checkboxes next to `Or`'s for `Iteration Path` (green in the image below), then click the icon next to `And/Or` (blue in the image below) [](https://i.stack.imgur.com/lCtLO.png)
null
CC BY-SA 4.0
null
2023-02-07T14:29:36.880
2023-02-07T14:29:36.880
null
null
2,497,152
null
75,374,746
2
null
75,374,077
-1
null
It would be better if you could share the function responsible for removing the item, but I can guess what the problem might be if you are getting this error when removing the last item in the list. If you are removing an item, you need to be careful about something. First, let's take a look at the removing function: ``` E removeAt(int index) { final E removedItem = _items.removeAt(index); if (removedItem != null) { _animatedGrid!.removeItem( index, (BuildContext context, Animation<double> animation) { return removedItemBuilder(removedItem, context, animation); }, ); } return removedItem; } ``` As you can see, at the start of the function we remove the item we want to remove from the list and store it in a new variable; then we use it here: ``` _animatedGrid!.removeItem( index, (BuildContext context, Animation<double> animation) { return removedItemBuilder(removedItem, context, animation); },); ``` We use the item that we removed from the list because it will still be displayed during the animation; that's why we need the item itself rather than the index. We can't use the index directly in this part because we already removed the item from the list, so if we used it like this: ``` _animatedGrid!.removeItem( index, (BuildContext context, Animation<double> animation) { return removedItemBuilder(_items[index], context, animation); }, ); ``` you would get a RangeError (index): Invalid value: Not in inclusive range, because this item is already removed and so its index is out of range.
null
CC BY-SA 4.0
null
2023-02-07T14:30:27.383
2023-02-07T14:30:27.383
null
null
14,990,975
null
75,374,898
2
null
75,372,538
0
null
The data is coming as an object, so you don't need to iterate over it: ``` <tr> <td>{{searchedInventory.id}}</td> <td>{{searchedInventory.foodName}}</td> <td>{{searchedInventory.foodDescription}}</td> <td>{{searchedInventory.price}}</td> <td>{{searchedInventory.date}}</td> <td>{{searchedInventory.hotelName}}</td> <td>{{searchedInventory.hotelAddress}}</td> <td><button class="btn btn-success">Select</button></td> </tr> ```
null
CC BY-SA 4.0
null
2023-02-07T14:40:30.420
2023-02-07T14:40:30.420
null
null
17,715,540
null
75,374,988
2
null
30,999,290
0
null
Building on the other answers to get a legend styled to match the original request: [](https://i.stack.imgur.com/zfLls.png) ``` ' set legend to have a white background skinparam legendBackgroundColor #FFFFFF ' remove box around legend skinparam legendBorderColor #FFFFFF ' remove the lines between the legend items skinparam legendEntrySeparator #FFFFFF legend right ' the <#FFFFFF,#FFFFFF> sets the background color of the legend to white <#FFFFFF,#FFFFFF>|<#red>| Type A Classes| ' the space between the | and <#blue> is important to make the color column wider |<#blue> | Type B Classes| |<#green>| Type C Classes| endlegend ``` References used, in case someone needs to style further: [1](https://plantuml-documentation.readthedocs.io/en/latest/formatting/all-skin-params.html), [2](https://plantuml.com/creole#51c45b795d5d18a3).
null
CC BY-SA 4.0
null
2023-02-07T14:46:14.900
2023-02-07T14:46:14.900
null
null
4,212,158
null
75,375,391
2
null
75,326,082
0
null
> it does not guard the routes. What is the way to keep the routes? You must always navigate using vue-router if you want vue-router's navigation guards to activate on route changes. This line: ``` window.location.replace('/dashboard') ``` can be replaced with the vue-router [replace](https://router.vuejs.org/guide/essentials/navigation.html#replace-current-location) equivalent, which will activate your navigation guards: ``` router.replace('/dashboard') ```
null
CC BY-SA 4.0
null
2023-02-07T15:18:02.913
2023-02-07T15:18:02.913
null
null
6,225,326
null
75,375,463
2
null
52,407,469
0
null
As always, people post part of the code and link the rest to a site that can die at any time; here is the completed answer with the missing lines: ``` public static System.Drawing.Drawing2D.GraphicsPath Transparent(Image im) { int x; int y; Bitmap bmp = new Bitmap(im); System.Drawing.Drawing2D.GraphicsPath gp = new System.Drawing.Drawing2D.GraphicsPath(); Color mask = bmp.GetPixel(0, 0); for (x = 0; x <= bmp.Width - 1; x++) { for (y = 0; y <= bmp.Height - 1; y++) { if (!bmp.GetPixel(x, y).Equals(mask)) { gp.AddRectangle(new Rectangle(x, y, 1, 1)); } } } bmp.Dispose(); return gp; } ``` use: ``` System.Drawing.Drawing2D.GraphicsPath gp = Resources.Images.Transparent(pictureBox1.Image); pictureBox1.Region = new System.Drawing.Region(gp); ```
null
CC BY-SA 4.0
null
2023-02-07T15:23:43.940
2023-02-07T15:23:43.940
null
null
17,555,538
null
75,375,737
2
null
75,357,665
0
null
The output of `-dPDFINFO` is determined by the file contents, so start with a valid empty file. Using the OP's Windows version 10.00.0 `gswin64c`, `gswin64c -dPDFINFO blank.pdf -o` should look like this (note this is a console copy): ``` GPL Ghostscript 10.0.0 (2022-09-21) Copyright (C) 2022 Artifex Software, Inc. All rights reserved. This software is supplied under the GNU AGPLv3 and comes with NO WARRANTY: see the file COPYING for details. File has 1 page. Producer: GPL Ghostscript 10.00.0 CreationDate: D:20230115003354Z00'00' ModDate: D:20230115003354Z00'00' Processing pages 1 through 1. Page 1 MediaBox: [0 0 595 842] C:\Apps\PDF\GS\gs1000w64\bin> ``` To suppress the copyright banner, use -q [](https://i.stack.imgur.com/yGXwd.png) To save the output in a file, use stderr (stream 2) redirection: `gswin64c -q -dBATCH -dPDFINFO blank.pdf 2>out.txt` [](https://i.stack.imgur.com/BSDLp.png) To filter the output of the text file, use pipe filters [](https://i.stack.imgur.com/nvs0I.png) Does it have spot colours? [](https://i.stack.imgur.com/rAGzl.png) What are they? [](https://i.stack.imgur.com/nRVJa.png) > As long as no open standard for spot colours exists, TCPDF users will have to buy a colour book by one of the colour manufacturers and insert the values and names of spot colours directly So here the names are on an RGB scale ``` - Dark Green is 0,71,57 - Light Yellow is 255,246,142 - Black is 39,36,37 - Red is 166,40,52 - Green is 0,132,75 - Blue is 0,97,157 - Yellow is 255,202,9 ``` But that black is not full black. Is there a better way? Yes, of course: `type example_037.pdf|find /i "/separation"` Now we can see the CMYK spots [](https://i.stack.imgur.com/hxBGK.png) In this simplified case the `CMYK` values after each name are shown, as in the example. Note that the separation may often be encoded inside the PDF data, so you need to decompress the data first to do the search. There are several tools to do the decompression; common cross-platform ones are qpdf (FOSS), mutool (partner to Ghostscript) and PDFtk, amongst others.
null
CC BY-SA 4.0
null
2023-02-07T15:42:54.463
2023-02-13T13:17:44.320
2023-02-13T13:17:44.320
10,802,527
10,802,527
null
75,376,382
2
null
75,374,077
0
null
Removing an item takes `Duration(milliseconds: 300)`, so `setState` tries to rebuild the items in the meantime and causes the issue. To overcome this, I came up with removing the items one by one and then re-inserting them, by creating another two methods on the `ListModel`: ``` class ListModel<E> { ..... void clear() { for (int i = _items.length - 1; i >= 0; i--) { removeAt(i); } } void addAll(List<E> item) { for (int i = 0; i < item.length; i++) { insert(i, item[i]); } } } ``` Now, when you'd like to reset the items: ``` void _insert() async { _list.clear(); // delay so it looks good; removing an item takes kDuration, therefore I am using Future.delayed await Future.delayed(const Duration(milliseconds: 300)); setState(() { _list.addAll(<int>[7, 6, 5]); }); } ```
null
CC BY-SA 4.0
null
2023-02-07T16:35:51.943
2023-02-08T14:38:47.953
2023-02-08T14:38:47.953
10,157,127
10,157,127
null
75,377,147
2
null
75,376,742
0
null
If you want to just get the stuff after the first "-" you can do something like: ``` select stuff(Subscriber_Id, 1, 3, '') AS ContractIDVersion1 , RIGHT(Subscriber_Id, LEN(Subscriber_Id) - 3) AS ContractIDWithRight , REPLACE(Subscriber_id, LEFT(Subscriber_Id, 3), '') AS ContractIDByReplace , CASE WHEN CharIndex('-', Subscriber_id) > 0 THEN SUBSTRING(Subscriber_id, CharIndex('-', Subscriber_id) + 1, LEN(Subscriber_id)) ELSE Subscriber_id END AS ContractIDByCharIndex FROM EDW_ODS.dbo.ODS_LABCORP_LABS WHERE ContractId IS NULL ```
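The CharIndex-based variant above can be sketched in plain Python to check the logic (the sample IDs are hypothetical):

```python
def after_first_dash(s):
    # Mirrors CASE WHEN CharIndex('-', s) > 0 THEN SUBSTRING(...) ELSE s END:
    # return everything after the first '-', or the string unchanged if there is none.
    i = s.find('-')
    return s if i == -1 else s[i + 1:]

print(after_first_dash("AB-12345"))  # 12345
print(after_first_dash("NODASH"))    # NODASH
```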
null
CC BY-SA 4.0
null
2023-02-07T17:49:35.987
2023-02-07T17:49:35.987
null
null
13,061,224
null
75,377,239
2
null
75,376,913
0
null
In iOS 16, a `scrollDisabled` modifier was added to achieve what you need. Here's an example: ``` struct Scroll: View { @State private var scrollDisabled = false var body: some View { VStack { Button("\(scrollDisabled ? "Enable" : "Disable") Scroll") { scrollDisabled.toggle() } ScrollView { VStack { ForEach(1..<50) { i in Rectangle() .fill(.blue) .frame(width: 50, height: 50) .overlay { Text("\(i)") } } }.frame(maxWidth: .infinity) } .scrollDisabled(scrollDisabled) } } } ``` [](https://i.stack.imgur.com/PwJ0s.gif)
null
CC BY-SA 4.0
null
2023-02-07T17:57:54.303
2023-02-07T17:57:54.303
null
null
14,096,169
null
75,377,341
2
null
75,377,287
1
null
Your screenshot is of hover info for a TypeScript type declaration for `String.prototype.split()`, which shows its documentation comment from a lib.d.ts file bundled with TypeScript. You can find the source code for the lib.d.ts files on TypeScript's GitHub repo: [https://github.com/microsoft/TypeScript/tree/main/lib](https://github.com/microsoft/TypeScript/tree/main/lib). You'll see there that they do provide translations for TypeScript's compiler's error messages, but I'm not aware of them providing versions of their lib.d.ts files with translated documentation comments. Much more common on the internet and in the open source world for web libraries is for English to be used as a lingua franca, which might explain why, in my (perhaps limited) experience in web dev, I haven't seen anyone put effort into providing versions of their package files with documentation comments translated to various languages. So unfortunately, it's not something that [changing your display language setting](https://code.visualstudio.com/docs/getstarted/locales#_changing-the-display-language) can affect. This probably extends to more than just web libraries. English is fairly well established as the lingua franca of code, at least in the open source world. Most of this hover documentation, even for other programming languages, is pulled from documentation comments, which are usually in English, and as already stated, most libraries don't provide header files / documentation sources with translated documentation comments. At least, not that I've observed in my limited experience in the programming world. I'd be happy to be pointed to counterexamples, though!
null
CC BY-SA 4.0
null
2023-02-07T18:08:40.760
2023-02-10T05:11:53.090
2023-02-10T05:11:53.090
11,107,541
11,107,541
null
75,377,466
2
null
73,501,020
0
null
In my case it was the ``` app.Run(); ``` line, accidentally deleted from Program.cs.
null
CC BY-SA 4.0
null
2023-02-07T18:22:23.993
2023-02-07T18:22:23.993
null
null
4,165,898
null
75,377,469
2
null
75,377,406
0
null
To disable an extension, go to the extensions view. You can do that under the "View" menu at the top left, or click the icon at the left bar, or use the `View: Show Extensions` command, or Ctrl+Shift+X (on Windows and Linux). Here are pictures of the extension button icon that you can see in the left bar: ![light/extensions.svg](https://raw.githubusercontent.com/microsoft/vscode-icons/main/icons/light/extensions.svg) ![dark/extensions.svg](https://raw.githubusercontent.com/microsoft/vscode-icons/main/icons/dark/extensions.svg) Then scroll until you find the GitHub Copilot extension, then click the gear icon at the bottom right of the extension card. Then, in the menu that pops up, click "Disable" or, if you only want to disable it for the current workspace, "Disable (Workspace)". Here are pictures of the gear button icon that you can see at the bottom right of the extension card: ![light/gear.svg](https://raw.githubusercontent.com/microsoft/vscode-icons/main/icons/light/gear.svg) ![dark/gear.svg](https://raw.githubusercontent.com/microsoft/vscode-icons/main/icons/dark/gear.svg)
null
CC BY-SA 4.0
null
2023-02-07T18:22:44.437
2023-02-07T18:22:44.437
null
null
11,107,541
null
75,377,547
2
null
74,788,344
0
null
I encountered the same problem. Update the builds array in your vercel.json file to the following: ``` "builds": [ { "src": "~/nuxt.config.js", "use": "@nuxtjs/[email protected]", "config": { "serverFiles": [" Your server path here e.g api/* "] } }, { "src": "Your server path here e.g api/*.js", "use": "@vercel/node" } ], ``` Hope this helps.
null
CC BY-SA 4.0
null
2023-02-07T18:31:01.363
2023-02-07T18:31:01.363
null
null
21,167,285
null
75,377,740
2
null
75,376,417
0
null
Use [date_add](https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/functions.html#date_add-org.apache.spark.sql.Column-org.apache.spark.sql.Column-) and cast the result to timestamp. This should work: ``` df1.withColumn("newDateWithTimestamp", F.date_add(F.col("next_apt"), F.col("days")).cast("timestamp")).show() ``` Input [](https://i.stack.imgur.com/cmUTu.png) Output [](https://i.stack.imgur.com/sr4zK.png)
null
CC BY-SA 4.0
null
2023-02-07T18:51:32.273
2023-02-07T18:51:32.273
null
null
2,718,939
null
75,378,803
2
null
75,378,504
0
null
You can use the standard `ExposedDropdownMenuBox` provided by M3. Something like: ``` val options = listOf("Option 1", "Option 2", "Option 3", "Option 4", "Option 5") var expanded by remember { mutableStateOf(false) } var selectedOptionText by remember { mutableStateOf(options[0]) } val shape = if (expanded) RoundedCornerShape(8.dp).copy(bottomEnd = CornerSize(0.dp), bottomStart = CornerSize(0.dp)) else RoundedCornerShape(8.dp) ExposedDropdownMenuBox( expanded = expanded, onExpandedChange = { expanded = !expanded }, ) { TextField( modifier = Modifier.menuAnchor(), textStyle = TextStyle.Default.copy( fontSize = 14.sp, fontWeight= FontWeight.Light), readOnly = true, value = selectedOptionText, onValueChange = {}, label = { Text("Unit of length", fontWeight = FontWeight.Bold, ) }, trailingIcon = { ExposedDropdownMenuDefaults.TrailingIcon(expanded = expanded) }, shape = shape, colors = ExposedDropdownMenuDefaults.textFieldColors( focusedIndicatorColor = Transparent, unfocusedIndicatorColor = Transparent ) ) ExposedDropdownMenu( expanded = expanded, onDismissRequest = { expanded = false }, ) { options.forEach { selectionOption -> DropdownMenuItem( text = { Text(selectionOption) }, onClick = { selectedOptionText = selectionOption expanded = false }, contentPadding = ExposedDropdownMenuDefaults.ItemContentPadding, ) } } } ``` [](https://i.stack.imgur.com/BlgaK.png) [](https://i.stack.imgur.com/Dz03R.png)
null
CC BY-SA 4.0
null
2023-02-07T20:44:39.640
2023-02-07T21:42:52.993
2023-02-07T21:42:52.993
2,016,562
2,016,562
null
75,378,973
2
null
75,378,581
1
null
You can use the [CustomControl ExpressionBinding Tag](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_format.ps1xml?view=powershell-7.3#customcontrol-tag) to get the output you're looking for through a format `ps1xml` file; however, you should note that, since you're interested in displaying objects as if they were strings, there is no way to handle dynamic padding with this method. Your format file should look like this: ``` <Configuration> <ViewDefinitions> <View> <Name>CustomIODisplay</Name> <ViewSelectedBy> <TypeName>System.IO.DirectoryInfo</TypeName> <TypeName>System.IO.FileInfo</TypeName> </ViewSelectedBy> <CustomControl> <CustomEntries> <CustomEntry> <CustomItem> <ExpressionBinding> <ScriptBlock> '{0,-25}{1,-10}{2,-20:dd/MM/yyyy HH:mm:ss}{3}' -f @( $_.Basename $_.Extension $_.LastAccessTime $_.DirectoryName ) </ScriptBlock> </ExpressionBinding> </CustomItem> </CustomEntry> </CustomEntries> </CustomControl> </View> </ViewDefinitions> </Configuration> ``` Then you can `Update-FormatData`: ``` Update-FormatData -PrependPath path\to\my\test.format.ps1xml ``` Then the `Get-ChildItem` output would look more or less like you wanted: ``` PS ..\pwsh> Get-ChildItem C:\Windows\ -File | Select-Object -First 10 bfsvc .exe 15/01/2023 19:21:46 C:\Windows bootstat .dat 07/02/2023 15:00:22 C:\Windows comsetup .log 15/08/2021 02:44:15 C:\Windows Core .xml 07/02/2023 10:24:46 C:\Windows CoreSingleLanguage .xml 05/02/2023 17:12:19 C:\Windows diagerr .xml 05/02/2023 17:12:19 C:\Windows diagwrn .xml 05/02/2023 17:12:19 C:\Windows DirectX .log 15/08/2021 02:44:15 C:\Windows DPINST .LOG 15/03/2022 13:17:14 C:\Windows DtcInstall .log 15/08/2021 02:44:15 C:\Windows ``` A much simpler solution to the problem would be to use [Format-Table -HideTableHeaders](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/format-table?view=powershell-7.3#-hidetableheaders), then the cmdlet can handle dynamic padding for you: ``` Get-ChildItem C:\Windows\ -File | Format-Table Basename, Extension, LastAccessTime, DirectoryName -HideTableHeaders ```
null
CC BY-SA 4.0
null
2023-02-07T21:05:22.537
2023-02-07T21:18:03.860
2023-02-07T21:18:03.860
15,339,544
15,339,544
null
75,379,013
2
null
75,378,999
0
null
You can keep the rows where `RET` has an absolute value greater than or equal to 0.10 with:

```
data = data[abs(data['RET'].astype('float')) >= 0.10]
```
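A minimal, self-contained sketch of how that filter behaves — the sample DataFrame here is hypothetical (in the question, `data` comes from a file and `RET` holds strings, hence the `astype('float')`):

```python
import pandas as pd

# Hypothetical sample; the real `data` is loaded from the question's dataset.
data = pd.DataFrame({'RET': ['0.05', '-0.15', '0.25', '-0.02']})

# Keep only rows whose absolute return is at least 0.10
data = data[abs(data['RET'].astype('float')) >= 0.10]

print(data['RET'].tolist())  # ['-0.15', '0.25']
```

Note that the boolean mask is built from the converted floats, but the kept column still contains the original string values.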
null
CC BY-SA 4.0
null
2023-02-07T21:09:20.157
2023-02-07T21:15:01.377
2023-02-07T21:15:01.377
16,353,662
16,353,662
null
75,379,068
2
null
75,378,999
0
null
You can use [abs()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.abs.html) and [le()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.le.html) or [lt()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.lt.html) to filter for the wanted values.

```
df_drop = data[data['RET'].abs().lt(0.10)]
```

Also consider [between()](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.between.html). Select an appropriate policy for `inclusive`; it can be `"neither"`, `"both"`, `"left"` or `"right"`.

```
df_drop = data[data['RET'].between(-0.10, 0.10, inclusive="neither")]
```

## Example

```
data = pd.DataFrame({
    'RET': [0.011806, -0.122290, 0.274011, 0.039013, -0.05044],
    'other': [1, 2, 3, 4, 5]
})

        RET  other
0  0.011806      1
1 -0.122290      2
2  0.274011      3
3  0.039013      4
4 -0.050440      5
```

Both ways above will lead to

```
        RET  other
0  0.011806      1
3  0.039013      4
4 -0.050440      5
```

All rows with an absolute value greater than 0.10 in `RET` are excluded.
null
CC BY-SA 4.0
null
2023-02-07T21:14:49.487
2023-02-08T07:32:27.893
2023-02-08T07:32:27.893
14,058,726
14,058,726
null
75,379,196
2
null
75,303,741
0
null
The service account you're using for authentication has to have the correct access scope: it must have a [Datastore access scope](https://cloud.google.com/compute/docs/access/service-accounts#associating_a_service_account_to_an_instance) to connect to the Cloud Datastore API. You may refer to the [Cloud Datastore API](https://cloud.google.com/datastore/docs/reference/data/rest) documentation. Then you must properly [authenticate workloads using service accounts](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#console_1), using the console in the VM details page.
null
CC BY-SA 4.0
null
2023-02-07T21:31:46.547
2023-02-07T21:31:46.547
null
null
19,229,284
null
75,379,275
2
null
75,379,153
0
null
When you call the `setState` method, it updates the state behind the scenes, and then schedules a re-render of your component with the state variable (e.g. `arrayOfNotes`) set to the new value. Because you've got your `console.log` inside the function that's calling the `setState` method, you're logging the old value, because it hasn't done the re-render yet. Next render cycle it'll be fine; in fact, you could move your `console.log` into the body of the component and see that it behaves as you're expecting. --- When you're updating a state and the new state value depends on the previous one, I'd also recommend using the function version of `setState` precisely for this reason, e.g. ``` setArrayOfNotes((prev) => [newNoteObject, ...prev]) ``` This is because `prev` will take into account other sets done this render cycle, but the way you're doing it currently won't. Doing it this way will certainly save you from other bugs later on. --- The cycle order is also what's causing your second issue. You're setting the active note, but because the set won't apply until the next re-render, `activeNote` will still be the old value on the line below, when you're expecting it to already have been updated. You can just pass `id` in there instead in this case.
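To make the difference concrete, here's a tiny, hypothetical model of an update queue — not React's actual internals, just plain JavaScript that mimics how queued updates are applied in order — showing why the functional form survives multiple sets in one cycle while the stale-variable form loses updates:

```javascript
// Simplified stand-in for React's update queue (illustrative only).
let queue = [];
const setState = (update) => queue.push(update);
const flush = (state) =>
  queue.reduce((s, u) => (typeof u === 'function' ? u(s) : u), state);

let arrayOfNotes = [];

// Two sets in the same cycle using the stale variable: both captured
// arrayOfNotes as [], so the second set overwrites the first.
setState(['a', ...arrayOfNotes]);
setState(['b', ...arrayOfNotes]);
const staleResult = flush(arrayOfNotes); // ['b'] — 'a' was lost

// Same two sets with the functional form: each updater receives the
// result of the previous one, so nothing is lost.
queue = [];
setState((prev) => ['a', ...prev]);
setState((prev) => ['b', ...prev]);
const fnResult = flush(arrayOfNotes); // ['b', 'a']
```

Real React batches and applies updates the same way conceptually, which is why `setArrayOfNotes((prev) => ...)` is the safer pattern whenever the next state depends on the previous one.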
null
CC BY-SA 4.0
null
2023-02-07T21:40:08.117
2023-02-07T21:40:08.117
null
null
2,691,058
null