content
stringlengths
228
999k
pred_label
stringclasses
1 value
pred_score
float64
0.5
1
自由軟體翻譯的守則 (內容還待整理) 守則一、同一個程式內,一樣的詞彙要翻成相同的用語。能建個參考用術語翻譯庫是最好的。最終極的目標是達到即使不同的程式,提到了同一個詞語 (term) 時,同一個語境下的中文翻譯也會相同以便於讀者理解。 想法:籌組個「自由軟體桌面環境翻譯工作小組」,討論決定名詞的「參考翻譯」。:P 附註:經濟部工業局 96 年度專案計畫 - 自由軟體產業推動計畫之國際合作與國際標準分項計畫 自由軟體英中語彙對照表 守則二、不把握的翻譯,翻譯後要標示為 fuzzy 或是 not approved。等到和其他翻譯者討論完後,或者可以確認時再進行變更。絕對不贊同不清楚原文之意,就直接翻譯並回報的舉動。 守則三、不明瞭的單字要查詢資料輔助。 相關輔助網站:Yahoo!奇摩字典WikipediaMerrian-Webster 英英字典劍橋雙語字典Google 等。 守則四、未有中文翻譯的專有名詞,翻譯時以音意皆譯為最優先考量,再來為意譯,僅音譯則不列入考慮(音譯只用於特殊情況如人名、地名)。 守則五、完整且貼切地表達原文意思。 如果原文如果表達不清晰,中文應該意譯,並且應根據上下文和註釋進行推斷並填補相應的信息;但切勿畫蛇添足,把意思表達完整後就該罷手。 守則六、翻譯的句子中,若有中英文交雜時,為了配合英文字前後留空之閱讀感、節奏感,請在中文與英文、中文與阿拉伯數字、英文與阿拉伯數字之間加入一個半形空格。請注意,英文字作為結尾時,直接加上句號即可,不必再補空白在其後。例如:「Moblin 網路瀏覽器」,以及「歡迎使用 Moblin。」。 守則七、遵守「翻譯 PO 檔注意事項」。 摘要如下: # 標頭部份: * 每個 po 檔一開頭的幾行,大多固定長的像底下的樣子。其中比較要注意的幾個項目為: o PO-Revision-Date:此欄位就請填入您翻譯時的日期時間 o Last-Translator:最後翻譯的人。若您是最後翻的人,就填入您的資料,以便讓人有疑問時可以連絡的上。至於之前翻譯者的資料該如何處理呢?或許比較好的方式,您可以把他的資料放在最前面,並將該行以 # 開頭註解起來。 如在 po 檔的最前面幾行加入這樣的資訊: # Translator: aaa <[email protected]>, bbb <[email protected]> # ccc <[email protected]> # ddd <[email protected]> o Language-Team:若原本已是 Chinese (traditional) 那就不須要變動。若是新的 po 檔,則可以將其改為 Chinese (traditional) ,表示這是由我們繁體中文的翻譯小組所翻譯的。 o Content-Type: text/plain; charset=utf-8 o Content-Transfer-Encoding: 8bit encoding 部份我們中文字都是用 8bit # 翻譯者的姓名及e-mail: * 請填您自己的姓名及e-mail,千萬不要把它給翻成了 "您的姓名"、"您的電子郵件帳號"。 # 快速鍵部份: * 如底下範例。若有看到 "&" 開頭的地方,如 &D,則表示這可能是選單中的某快速鍵。如我們可以按 alt + d 鍵來快速執行等等。此部份的翻譯方式,我們則是在最後面加上 "(&D)" 來表示。 # c-format 部份: * 如底下範例中的 %1、%2 等變數,不一定 %1 就一定在前 %2 在後,可以視翻譯的文法句型做適當的調整。 # HTML TAG 部份: * 如底下範例,若有 HTML TAG,您必須保留其語法部份。 # 淘汰部份: * 有些在舊版本中有的訊息,但在新版本中已經沒有了,因此這部份就沒有用了。會出現在整個 po 檔的最後面部份,都以 "#~" 為開頭。關於這部份,您可以將它刪除掉,或是也可以將其保留,當做日後參考用也行。 # 其他綜合建議: * 標點符號請儘量用全形標點符號,但欄名尾的冒號則用回半形。 * 提示要用 "您" 取替 "你"。 * 編譯:msgfmt -cv xxx.po -o /dev/null * 安裝:msgfmt -cv xxx.po -o /usr/share/locale/zh_TW/LC_MESSAGES/xxx.mo (as root) * 測試: LC_MESSAGES=zh_TW xxx ; LC_MESSAGES=zh_HK xxx ; LC_MESSAGES=zh_TW.Big5 xxx ; 此外,KDE 的 po 檔中會有所謂的「翻譯提示」: #: ui/konsole_mnu.cpp:85 #, c-format msgid "" "_: Screen is a program controlling screens!\n" "Screen at %1" msgstr "" "視窗於 %1" 上面的 _: 符號即是所謂的翻譯提示,是應用程式作者給翻譯者的一些說明。那部份不需要翻譯,只要翻譯 \n 之後的訊息即可。 守則八、參考「自由軟體正體中文化 L10n 工作指引」。 以下摘自 L10n 工作指引: 基本原則 1. 貼切表述原文的意思 2. 中文應該意思清晰且符合中文表達習慣 3. 原文如果表達不清晰,中文應該意譯,並且應根據上下文和註解進行推斷並填補相應的訊息。例如「print error」若直譯為「列印錯誤」可能造成混淆,「列印發生錯誤」較佳 4. 情況 3 不能太多 5. 對相同術語或短語的翻譯,必須前後一致,若翻譯軟體或平臺有「詞彙表」功能請善用之 6. 使用「您」而不是「你」 7. 不要使用機器翻譯的成果來提交,也就是說您可以使用 Google Translate 來幫助您理解內容,但是不能不經考慮就把自動翻譯的結果放在翻譯裡 標點的使用 一般的原則是:除了刪節號和破折號視情況可保留不變外,都應該使用中文(全形)標點符號。英文標點符號後方常常跟隨有一個半形空格,請在翻譯成中文標點符號時將其去除。 1. 英文中的 , 在中文中可能是,或者、 2. 英文中的 . 在中文中應該是,或者。 ,視上下文而定,多數是。 3. 英文中的 “%s” 在圖形介面程式(GUI)中應該翻譯為 「%s」, 而不是 “%s” 或者 \「%s\」,而且後者是不符合換碼序列要求的。即在 GUI 程式中 ‘something’ 以及 “something” 都應該翻譯為 「某事」 4. 英文中的 “command -parameter argument” 或 ‘command -parameter argument’ 等語境為英文的字句,可將引號保持原樣,不一定必要改為中文引號「」,由翻譯者自行判斷 5. 英文中的 : 應該翻譯為:(全形)而不是:(半形), 而作為分隔符(例如時間分隔符)的 : 請保留為英文(半形)的:,因為這個時候不是標點符號 6. 英文中的 ( ) 其括住內容若是中文,請用全形();若 () 所括住的內容是英文,請使用半形小括號 ( )。若半形小括號前後有文字(包含中文、英文)請使用空格隔開;若小括號前後是標點,或是表示捷徑字元的小括號時,前後不應加上空格,換句話說,標點相連之間無須補入空格。 7. 英文中的 … 可保持不變。由於翻譯的時候常常難以分清哪些條目是選單項,哪些條目是一般語句,而後者才能使用中文的刪節號……,所以未明者統一翻譯為…。至於源文語句若本身已使用刪節號…,則照用即可,一般選項內文字之刪節號請使用一個…,若為內文中之刪節號可用兩個……。 8. 英文中的 – 可保持不變,亦可使用全形破折號 ——。 9. 遇到 %q 標記的時候,此標記會以可重複使用的方式輸出文字,故不需另加引號。 10. 遇到說明英文標點的訊息時,除了翻譯為中文外,請在譯文中加入該標點,並在標點前後加上中文 (較建議) 或英文引號。例:「separated by common」即:以半形冒號「:」隔開;或以半形冒號 “:” 隔開 11. 遇到說明選單的訊息時,例如:System > Administration ,可以見到頭文字大寫,這麼做除了模擬選單本身頭文字大寫外,還能在同是小寫的句子中吸引讀者的目光。因此請使用上下引號將選目內容括住,以達到同樣效果:「系統 > 管理」,並請注意標點之間無需空格。 12. 引用軟體之名稱,可以用書名號《》括起,例如請使用《檔案》開啟,這裡的「檔案」是 GNOME nautilus 的在地名稱;若要引用程式中的某功能選項或按鈕,則可用「」括起,例如點按「偏好設定」。 空格的使用 為了符合英文字詞與鄰接字之間加空格之排版慣例,請在中文與英文、中文與阿拉伯數字加入一個半形空格。例如: 源文: Installing driver for %1 譯文: 正在安裝 %1 的驅動程式 源文: Parameter start_num specifies the character at which to start the search. The first character is character number 1. If start_num is omitted, it is assumed to be 1. 譯文: 參數 start_num 指定開始搜索的字元位置。第一個字元序號為 1。如果省略 start_num,預設它為 1。 對於小括號和半形引號,在與中文或英文鄰接處加入空格: 源文: Original idea and author (KDE1) 譯文: 原創發想與作者 (KDE1) 源文: The APM Management subsystem seems to be disabled.\n Try executing \"apm -e 1\" (FreeBSD) and see if \n that helps.\n 譯文: APM 管理子系統似乎被禁用了。\n 試試執行 \"apm -e 1\" (FreeBSD) 並看看\n 是否有用。\n 遇到斜體排版標記時,考量到中文字斜體後之傾斜部分會與後方鄰接字重疊,故斜體字詞後方若有連接文字請在它們之間加入空格以利閱讀;如果後方連接的是標點符號則不必加入空格。 我想強調 這件事;然而,沒有機會說出口。 包含 XML/HTML 標籤的條目,如要在標籤中的內容兩側添加空格,請把空格置於標籤外側,否則空格可能顯示不出來。 這是 <b>HTML</b> 的語法手冊 TODO: 將「翻譯 PO 檔注意事項」與「Ubuntu 簡體中文小組工作指南」整理成一份「自由軟體正體中文翻譯指引」。 a, b, c, and d 依照中文習慣應翻為:「a、b、c、d 等」。 註:本文在 Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported 許可下發布。 留言 這個網誌中的熱門文章 感謝有您——個人自由軟體參與回顧 論 Render 翻譯(算繪/演繹) GNU 專案術語對照表
__label__pos
0.778722
Attribute or ACF? Which Is better? Choosing between Attributes and Advanced Custom Fields (ACF) in WordPress and WooCommerce is a pivotal decision that significantly influences how you structure and present content on your website. Attributes, inherent to WooCommerce, excel in organizing and categorizing e-commerce products, providing structured data for product variations, and enhancing SEO. On the other hand, ACF offers unparalleled versatility, empowering you to create custom fields for various content types beyond e-commerce, with an intuitive, user-friendly interface. The right choice hinges on your project’s objectives: e-commerce-centric or a broader content strategy, with potential for a hybrid approach. This decision shapes the user experience and content management capabilities of your website. What is Attribute? WooCommerce attributes are fundamental product characteristics that help categorize and differentiate products in an online store. Attributes provide valuable information to shoppers and facilitate effective product filtering and searching. Attributes can represent various product traits such as size, color, material, or any custom feature that distinguishes one product from another. They enhance the customer’s shopping experience by allowing them to refine their search and find products that meet their specific preferences. To use WooCommerce attributes, follow these steps: • In your WooCommerce settings, create attribute terms (e.g., “Size” with options like “Small,” “Medium,” and “Large”). • Assign attributes to your products, specifying the relevant options for each product. • Display attributes on product pages to help customers make informed choices. Using WooCommerce attributes efficiently can boost sales by making it easier for customers to find the products they desire. What is ACF? Advanced Custom Fields (ACF) is a popular WordPress plugin that enhances the content management capabilities of your website. It allows you to create and manage custom fields and content types with ease, making it a powerful tool for developers and non-developers alike. Why use ACF: • ACF enables you to tailor WordPress to your specific needs by adding custom fields to posts, pages, or custom post types. • It helps organize and present data in a structured way, improving the user experience and making it easier to manage complex content. • ACF’s intuitive interface makes it accessible for non-technical users to add, edit, and display custom content without coding knowledge. How to use ACF: • Install the ACF plugin, activate it, and choose your preferred license. • Define custom fields, specifying their type (text, image, date, etc.) and where they should appear (e.g., on a post). • Use PHP functions or shortcodes to display the custom fields in your theme templates or content. ACF enhances WordPress’s flexibility and is particularly valuable for creating unique websites, customizing themes, and managing complex content structures. Attribute or ACF? The choice between Attributes and Advanced Custom Fields (ACF) in WordPress and WooCommerce depends on your specific needs and the context of your project. Both have their strengths and are suited for different purposes, so let’s explore when each is better: Attributes: • Best for E-commerce: If you’re running an online store with WooCommerce, attributes are essential. They are specifically designed for product characteristics like size, color, and material. They enable product variations and provide a structured way to organize and filter products. • SEO Benefits: Using attributes for product variations can improve your website’s SEO. Search engines can better understand and index your products, potentially boosting your visibility in search results. • Native Integration: Attributes are native to WooCommerce, which means they are well-integrated into the system, making them easier to set up and manage for product-related content. Advanced Custom Fields (ACF): • Versatility: ACF is incredibly versatile and suitable for a wide range of content types beyond e-commerce. It can be used to add custom fields to posts, pages, custom post types, and even user profiles. • User-Friendly: ACF provides an intuitive interface, making it accessible to users with limited coding knowledge. Creating and managing custom fields is straightforward, and it’s a great choice if you want to empower non-technical users to customize content. • Customization: ACF offers extensive customization options, allowing you to create unique content structures and display custom data precisely where you want it. Choosing the Better Option 1. E-commerce Store: If your primary focus is running an online store with WooCommerce, using attributes is essential. They are designed for product-specific characteristics, variations, and are SEO-friendly. 2. Content Variety: If your project involves more than just e-commerce, and you need custom fields for various content types like blog posts, articles, or custom post types, then ACF is the better choice. It provides the flexibility to customize content beyond products. 3. Hybrid Approach: In some cases, you might benefit from using both. You can utilize attributes for standard product characteristics and employ ACF for additional custom data associated with products or other content types. Conclusion Ultimately, the decision depends on your project’s specific requirements, the type of content you’re dealing with, and your familiarity with each tool. In many cases, both Attributes and ACF can coexist to meet your website’s diverse needs. Leave a Comment Your email address will not be published. Required fields are marked * Scroll to Top
__label__pos
0.904834
summaryrefslogtreecommitdiff path: root/riscos/png.c blob: cd76931c1242a4f5bb2f069326fafe9c735ac707 (plain) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 /* * This file is part of NetSurf, http://netsurf.sourceforge.net/ * Licensed under the GNU General Public License, * http://www.opensource.org/licenses/gpl-license * Copyright 2003 James Bursa <[email protected]> */ #include <assert.h> #include <string.h> #include <stdlib.h> #include "libpng/png.h" #include "oslib/colourtrans.h" #include "oslib/os.h" #include "oslib/osspriteop.h" #include "netsurf/content/content.h" #include "netsurf/riscos/png.h" #include "netsurf/utils/log.h" #include "netsurf/utils/utils.h" /* libpng uses names starting png_, so use nspng_ here to avoid clashes */ /* maps colours to 256 mode colour numbers */ static os_colour_number colour_table[4096]; static void info_callback(png_structp png, png_infop info); static void row_callback(png_structp png, png_bytep new_row, png_uint_32 row_num, int pass); static void end_callback(png_structp png, png_infop info); void nspng_init(void) { /* generate colour lookup table for reducing to 8bpp */ unsigned int red, green, blue; for (red = 0; red != 0x10; red++) for (green = 0; green != 0x10; green++) for (blue = 0; blue != 0x10; blue++) colour_table[red << 8 | green << 4 | blue] = colourtrans_return_colour_number_for_mode( blue << 28 | blue << 24 | green << 20 | green << 16 | red << 12 | red << 8, 21, 0); } void nspng_create(struct content *c) { c->data.png.sprite_area = 0; c->data.png.png = png_create_read_struct(PNG_LIBPNG_VER_STRING, 0, 0, 0); assert(c->data.png.png != 0); c->data.png.info = png_create_info_struct(c->data.png.png); assert(c->data.png.info != 0); if (setjmp(png_jmpbuf(c->data.png.png))) { png_destroy_read_struct(&c->data.png.png, &c->data.png.info, 0); assert(0); } png_set_progressive_read_fn(c->data.png.png, c, info_callback, row_callback, end_callback); } void nspng_process_data(struct content *c, char *data, unsigned long size) { if (setjmp(png_jmpbuf(c->data.png.png))) { png_destroy_read_struct(&c->data.png.png, &c->data.png.info, 0); assert(0); } LOG(("data %p, size %li", data, size)); png_process_data(c->data.png.png, c->data.png.info, data, size); c->size += size; } /** * info_callback -- PNG header has been completely received, prepare to process * image data */ void info_callback(png_structp png, png_infop info) { char *row, **row_pointers; int i, bit_depth, color_type, palette_size, log2bpp, interlace; unsigned int rowbytes, sprite_size; unsigned long width, height; struct content *c = png_get_progressive_ptr(png); os_palette *palette; os_sprite_palette *sprite_palette; osspriteop_area *sprite_area; osspriteop_header *sprite; png_color *png_palette; png_color_16 *png_background; png_color_16 default_background = {0, 0xffff, 0xffff, 0xffff, 0xffff}; /* screen mode image result * any 8bpp or less (palette) 8bpp sprite * 8bpp or less 16 or 24bpp dither to 8bpp * 16 or 24bpp 16 or 24bpp sprite of same depth */ png_get_IHDR(png, info, &width, &height, &bit_depth, &color_type, &interlace, 0, 0); png_get_PLTE(png, info, &png_palette, &palette_size); if (interlace == PNG_INTERLACE_ADAM7) ; /*png_set_interlace_handling(png);*/ if (png_get_bKGD(png, info, &png_background)) png_set_background(png, png_background, PNG_BACKGROUND_GAMMA_FILE, 1, 1.0); else png_set_background(png, &default_background, PNG_BACKGROUND_GAMMA_SCREEN, 0, 1.0); xos_read_mode_variable(os_CURRENT_MODE, os_MODEVAR_LOG2_BPP, &log2bpp, 0); /* make sprite */ sprite_size = sizeof(*sprite_area) + sizeof(*sprite); if (color_type == PNG_COLOR_TYPE_PALETTE) sprite_size += 8 * 256 + height * ((width + 3) & ~3u); else if (log2bpp < 4) sprite_size += height * ((width + 3) & ~3u); else sprite_size += height * ((width + 3) & ~3u) * 4; sprite_area = xcalloc(sprite_size + 1000, 1); sprite_area->size = sprite_size; sprite_area->sprite_count = 1; sprite_area->first = sizeof(*sprite_area); sprite_area->used = sprite_size; sprite = (osspriteop_header *) (sprite_area + 1); sprite->size = sprite_size - sizeof(*sprite_area); strcpy(sprite->name, "png"); sprite->height = height - 1; c->data.png.sprite_area = sprite_area; if (color_type == PNG_COLOR_TYPE_PALETTE) { /* making 256 colour sprite with PNG's palette */ LOG(("palette with %i entries", palette_size)); c->data.png.type = PNG_PALETTE; sprite->width = ((width + 3) & ~3u) / 4 - 1; sprite->left_bit = 0; sprite->right_bit = (8 * (((width - 1) % 4) + 1)) - 1; sprite->mask = sprite->image = sizeof(*sprite) + 8 * 256; sprite->mode = (os_mode) 21; sprite_palette = (os_sprite_palette *) (sprite + 1); for (i = 0; i != palette_size; i++) sprite_palette->entries[i].on = sprite_palette->entries[i].off = png_palette[i].blue << 24 | png_palette[i].green << 16 | png_palette[i].red << 8 | 16; /* make 8bpp */ if (bit_depth < 8) png_set_packing(png); } else /*if (log2bpp < 4)*/ { /* making 256 colour sprite with no palette */ LOG(("dithering down")); c->data.png.type = PNG_DITHER; sprite->width = ((width + 3) & ~3u) / 4 - 1; sprite->left_bit = 0; sprite->right_bit = (8 * (((width - 1) % 4) + 1)) - 1; sprite->mask = sprite->image = sizeof(*sprite); sprite->mode = (os_mode) 21; if (color_type == PNG_COLOR_TYPE_GRAY && bit_depth < 8) png_set_gray_1_2_4_to_8(png); if (color_type == PNG_COLOR_TYPE_GRAY || color_type == PNG_COLOR_TYPE_GRAY_ALPHA) png_set_gray_to_rgb(png); if (bit_depth == 16) png_set_strip_16(png); } /*else {*/ /* convert everything to 24-bit RGB (actually 32-bit) */ /* LOG(("24-bit")); c->data.png.type = PNG_DEEP; if (color_type == PNG_COLOR_TYPE_PALETTE) png_set_palette_to_rgb(png); if (color_type == PNG_COLOR_TYPE_GRAY && bit_depth < 8) png_set_gray_1_2_4_to_8(png); if (color_type == PNG_COLOR_TYPE_GRAY || color_type == PNG_COLOR_TYPE_GRAY_ALPHA) png_set_gray_to_rgb(png); if (bit_depth == 16) png_set_strip_16(png); if (color_type == PNG_COLOR_TYPE_RGB) png_set_filler(png, 0xff, PNG_FILLER_AFTER); }*/ png_read_update_info(png, info); c->data.png.rowbytes = rowbytes = png_get_rowbytes(png, info); c->data.png.interlace = (interlace == PNG_INTERLACE_ADAM7); c->data.png.sprite_image = ((char *) sprite) + sprite->image; c->width = width; c->height = height; LOG(("size %li * %li, bpp %i, rowbytes %lu", width, height, bit_depth, rowbytes)); } static unsigned int interlace_start[8] = {0, 4, 0, 2, 0, 1, 0}; static unsigned int interlace_step[8] = {8, 8, 4, 4, 2, 2, 1}; static unsigned int interlace_row_start[8] = {0, 0, 4, 0, 2, 0, 1}; static unsigned int interlace_row_step[8] = {8, 8, 8, 4, 4, 2, 2}; void row_callback(png_structp png, png_bytep new_row, png_uint_32 row_num, int pass) { struct content *c = png_get_progressive_ptr(png); unsigned long i, j, rowbytes = c->data.png.rowbytes; unsigned int start = 0, step = 1; int red, green, blue, alpha; char *row = c->data.png.sprite_image + row_num * ((c->width + 3) & ~3u); os_colour_number col; /*LOG(("PNG row %li, pass %i, row %p, new_row %p", row_num, pass, row, new_row));*/ if (new_row == 0) return; if (c->data.png.interlace) { start = interlace_start[pass]; step = interlace_step[pass]; row_num = interlace_row_start[pass] + interlace_row_step[pass] * row_num; row = c->data.png.sprite_image + row_num * ((c->width + 3) & ~3u); } if (c->data.png.type == PNG_PALETTE) for (j = 0, i = start; i < rowbytes; i += step) row[i] = new_row[j++]; else if (c->data.png.type == PNG_DITHER) { for (j = 0, i = start; i * 3 < rowbytes; i += step) { red = new_row[j++]; green = new_row[j++]; blue = new_row[j++]; row[i] = colour_table[(red >> 4) << 8 | (green >> 4) << 4 | (blue >> 4)]; } } } void end_callback(png_structp png, png_infop info) { struct content *c = png_get_progressive_ptr(png); LOG(("PNG end")); /*xosspriteop_save_sprite_file(osspriteop_USER_AREA, c->data.png.sprite_area, "png");*/ } int nspng_convert(struct content *c, unsigned int width, unsigned int height) { png_destroy_read_struct(&c->data.png.png, &c->data.png.info, 0); c->title = xcalloc(100, 1); sprintf(c->title, "png image (%ux%u)", c->width, c->height); c->status = CONTENT_STATUS_DONE; return 0; } void nspng_revive(struct content *c, unsigned int width, unsigned int height) { } void nspng_reformat(struct content *c, unsigned int width, unsigned int height) { } void nspng_destroy(struct content *c) { xfree(c->title); xfree(c->data.png.sprite_area); } void nspng_redraw(struct content *c, long x, long y, unsigned long width, unsigned long height, long clip_x0, long clip_y0, long clip_x1, long clip_y1) { int size; osspriteop_trans_tab *table; os_factors factors; xcolourtrans_generate_table_for_sprite(c->data.png.sprite_area, (osspriteop_id) (c->data.png.sprite_area + 1), colourtrans_CURRENT_MODE, colourtrans_CURRENT_PALETTE, 0, colourtrans_GIVEN_SPRITE, 0, 0, &size); table = xcalloc(size, 1); xcolourtrans_generate_table_for_sprite(c->data.png.sprite_area, (osspriteop_id) (c->data.png.sprite_area + 1), colourtrans_CURRENT_MODE, colourtrans_CURRENT_PALETTE, table, colourtrans_GIVEN_SPRITE, 0, 0, 0); factors.xmul = width; factors.ymul = height; factors.xdiv = c->width * 2; factors.ydiv = c->height * 2; xosspriteop_put_sprite_scaled(osspriteop_PTR, c->data.png.sprite_area, (osspriteop_id) (c->data.png.sprite_area + 1), x, y - height, os_ACTION_OVERWRITE, &factors, table); xfree(table); }
__label__pos
0.980077
static class methods in Python? Aahz Maruch aahz at netcom.com Sat Feb 19 02:07:06 CET 2000 In article <slrn8arn3d.9md.neelk at brick.cswv.com>, Neel Krishnaswami <neelk at alum.mit.edu> wrote: > >This is probably a stupid question, but what /is/ a class method? >I've never programmed in C++ (or Java), so an explanation would be >appreciated. Class methods are used to access and modify class variables. For example: class foo: bar = None def __init__(self): pass classdef isdone(): if foo.bar is not None: return "!!!" classdef clear(): foo.bar = None Clearer? -- --- Aahz (Copyright 2000 by aahz at netcom.com) Androgynous poly kinky vanilla queer het <*> http://www.rahul.net/aahz/ Hugs and backrubs -- I break Rule 6 Our society has become so fractured that the pendulum is swinging several different directions at the same time More information about the Python-list mailing list
__label__pos
0.509768
You are here function content_taxonomy_field_get_parent in Content Taxonomy 6.2 Same name and namespace in other branches 1. 6 content_taxonomy.module \content_taxonomy_field_get_parent() Returns the parent term ID for one field can be 0 is no parent is selected or if the php code returns 0 use this function instead of directly accessing $field['parent'] because of possible php code Parameters $field The Content Taxonomy Field: 6 calls to content_taxonomy_field_get_parent() content_taxonomy_allowed_values in ./content_taxonomy.module Called by content_allowed_values to create the $options array for the content_taxonomy_options content_taxonomy_allowed_values_groups in ./content_taxonomy.module Creating Opt Groups for content_taxonomy_options content_taxonomy_autocomplete_form2data in ./content_taxonomy_autocomplete.module Helper function to transpose the values returned by submitting the content_taxonomy_autcomplete to the format to be stored in the field content_taxonomy_autocomplete_load in ./content_taxonomy_autocomplete.module Retrieve a pipe delimited string of autocomplete suggestions content_taxonomy_autocomplete_validate in ./content_taxonomy_autocomplete.module Validation function for the content_taxonomy_autocomplete element ... See full list File ./content_taxonomy.module, line 322 Defines a field type for referencing a taxonomy term. Code function content_taxonomy_field_get_parent($field) { if (!empty($field['parent_php_code'])) { return eval($field['parent_php_code']); } return $field['parent']; }
__label__pos
0.926256
// Tutorial // How To Set Up Sass on your VPS Running on Ubuntu Published on August 14, 2013 Default avatar By dannysipos Developer and author at DigitalOcean. How To Set Up Sass on your VPS Running on Ubuntu Introduction Sass is a CSS preprocessor that lets you create stylesheets in a much more efficient and intelligent manner than using simple flat CSS. It provides a number of dynamic components that will make your code smaller, more reusable and more scalable. Its syntax is fairly easy to understand and rather adds on top of regular CSS than replaces it. In this tutorial, we will see how you can install Sass and get started using it. For this, it assumes you are already running your own VPS with Ubuntu and a web server installed on it if you want to see something in the browser (but not necessary at this level). Please note though that you can install Sass also on other operating systems like Windows and OS X. You can check out this article for getting you up and running with your VPS. Installing Sass In order to install Sass, we’ll need to first have Ruby on the system, so we’ll have to get that installed first. In addition, we’ll have to install rubygems (the package management system for Ruby). Let’s do both of these tasks with the following commands: sudo apt-get update sudo apt-get install ruby-full rubygems Next up, we can use the gem command to install Sass: sudo gem install sass Now that Sass is installed, we can get started. Using Sass Let’s create a stylesheet to play with. Navigate to your web server’s root folder (for Apache it should be /var/www) and create a file called style.scss: cd /var/www nano style.scss Inside this file, paste in the following css rule: .box { padding:20px; background-color:red; } As you can see, this is some basic css. Save the file and exit. Now, we’ll need to tell Sass to translate this file into a regular css format file (ending with the .css extension): sass --watch style.scss:style.css With this command, Sass will generate the .css file and watch over the .scss file for any changes. If they occur, the .css file will get automatically updated. When running this command for the first time, you get this error: [Listen warning]: Missing dependency 'rb-inotify' (version '~> 0.9')! Please run the following to satisfy the dependency: gem install --version '~> 0.9' rb-inotify You can run the following command to satisfy the dependency: gem install --version '~> 0.9' rb-inotify This will do trick. Now, if you are dealing with multiple Sass files, you can run the --watch command and make it compile an entire folder of .scss files: sass --watch stylesheets/sass:stylesheets/css This will make it keep track of all the .scss files in the stylesheets/sass folder, automatically compiling them and turn them into their equivalent in the stylesheets/css folder. Once you run one of these commands though, Sass will be in this "watch mode" until you tell it to stop. You can press Ctrl+C to make it stop watching over the files. After that, changes you make to the .scss file(s) will not be automatically reflected in the .css file(s) until you run again the --watch command. So what’s the deal? All we did was write some css into a file and then have it copied into another. But there is more to Sass and this is why you should use it. So let’s see what else you can do. Nesting Nesting is a great way to avoid having to write the same selector over and over. Say for instance you have 3 selectors that begin with the same thing: ".box ul", ".box li" and ".box li a". Normally, you’d have to create three different rules for these: .box ul { ... } .box li { ... } .box li a { ... } But with Sass, you can nest them like so: .box { padding:20px; background-color:red; ul { margin:10px; } li { float:left; a { color:#eee; } } } As you can see, this way you avoided having to repeat writing the .box part of the selector all 3 times. Additionally, it looks very simple and logical. Now if you use the --watch command to generate the .css equivalent, it will automatically create all 3 of those css blocks for you: .box { padding: 20px; background-color: red; } .box ul { margin: 10px; } .box li { float: left; } .box li a { color: #eee; } In addition, you can nest proprieties using the same logic. For instance, you can write something like this: .box { padding: { top:20px; right:10px; bottom:15px; left:10px; } } This saves you the time of having to write 4 times the word "padding". Variables Another time saving and simply awesome feature of Sass is the use of variables. Similar to programming languages like PHP or javascript, this allows you to declare a variable once and use it later in your code as many times as you want. For instance you can do something like this: $color: #eee; a { color: $color; } Sass will then replace all instances of the $color variable in the entire file with the actual color code you declared once: #eee. Mixins These are probably the most powerful Sass feature and they behave basically like functions. You can reuse entire style declarations and even pass them arguments. Similar to a function, first you declare them. So let’s declare 2 slightly different mixins. @mixin box-size { width:200px; height:200px; padding:10px; margin:0px; } @mixin border($width) { border: $width solid #eee; } As you can see, the first one does not take any arguments. We can make use of it like so: .box { @include box-size; } This will output the following css: .box { width:200px; height:200px; padding:10px; margin:0px; } The second mixin we can use by passing it an argument: .box2 { @include border(1px); } This will use the rule defined in the mixin and pass it the size argument for even bigger flexibility. This will output the following css: .box2 { border: 1px solid #eee; } These are some but not all of the features that make Sass awesome. You can make various computations on a number of possible values and other awesome things. To find out more information and examples of how to use it, you can check out the Sass website. Output style Running the --watch command we saw above will make Sass output the resulting CSS in the .css file in its default way: nested. There are 4 different types of output style you can choose from: • Nested: reflects the structure of the CSS styles and the HTML document they’re styling. • Expanded: with each property and rule taking up one line • Compact: each CSS rule takes up only one line, with every property defined on that line. • Compressed: has no whitespace except that necessary to separate selectors and a newline at the end of the file. You can read more about these different styles here. But an easy way to switch between them is in the --watch command itself by adding a flag at the end. For instance, if we want to use the expanded style, we run the command like this: sass --watch style.scss:style.css --style=expanded Conclusion Sass is very powerful and once you get used to it, you will have a much easier front-end experience. It adds intelligence to the way CSS is thought of and provides tools to make it work more efficient. Article Submitted by: Danny If you’ve enjoyed this tutorial and our broader community, consider checking out our DigitalOcean products which can also help you achieve your development goals. Learn more here About the authors Default avatar Developer and author at DigitalOcean. Still looking for an answer? Was this helpful? 7 Comments This textbox defaults to using Markdown to format your answer. You can type !ref in this text area to quickly search our full set of tutorials, documentation & marketplace offerings and insert the link! Is there an updated way of doing this; for Ubuntu 16.04? I get: $ sudo gem install sass Building native extensions. This could take a while… ERROR: Error installing sass: ERROR: Failed to build gem native extension. current directory: /var/lib/gems/2.3.0/gems/ffi-1.9.25/ext/ffi_c /usr/bin/ruby2.3 -r ./siteconf20180724-2121-11wbtob.rb extconf.rb checking for ffi.h… *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: –with-opt-dir –without-opt-dir –with-opt-include –without-opt-include=${opt-dir}/include –with-opt-lib –without-opt-lib=${opt-dir}/lib –with-make-prog –without-make-prog –srcdir=. –curdir –ruby=/usr/bin/$(RUBY_BASE_NAME)2.3 –with-ffi_c-dir –without-ffi_c-dir –with-ffi_c-include –without-ffi_c-include=${ffi_c-dir}/include –with-ffi_c-lib –without-ffi_c-lib=${ffi_c-dir}/lib –with-libffi-config –without-libffi-config –with-pkg-config –without-pkg-config /usr/lib/ruby/2.3.0/mkmf.rb:456:in try_do': The compiler failed to generate an executable file. (RuntimeError) You have to install development tools first. from /usr/lib/ruby/2.3.0/mkmf.rb:587:in try_cpp’ from /usr/lib/ruby/2.3.0/mkmf.rb:1091:in block in have_header' from /usr/lib/ruby/2.3.0/mkmf.rb:942:in block in checking_for’ from /usr/lib/ruby/2.3.0/mkmf.rb:350:in block (2 levels) in postpone' from /usr/lib/ruby/2.3.0/mkmf.rb:320:in open’ from /usr/lib/ruby/2.3.0/mkmf.rb:350:in block in postpone' from /usr/lib/ruby/2.3.0/mkmf.rb:320:in open’ from /usr/lib/ruby/2.3.0/mkmf.rb:346:in postpone' from /usr/lib/ruby/2.3.0/mkmf.rb:941:in checking_for’ from /usr/lib/ruby/2.3.0/mkmf.rb:1090:in have_header' from extconf.rb:16:in <main>’ To see why this extension failed to compile, please check the mkmf.log which can be found here: /var/lib/gems/2.3.0/extensions/x86_64-linux/2.3.0/ffi-1.9.25/mkmf.log extconf failed, exit code 1 Gem files will remain installed in /var/lib/gems/2.3.0/gems/ffi-1.9.25 for inspection. Results logged to /var/lib/gems/2.3.0/extensions/x86_64-linux/2.3.0/ffi-1.9.25/gem_make.out $ This comment has been deleted When I attempt sudo apt-get install ruby-full rubygems I get the following: Reading package lists… Done Building dependency tree Reading state information… Done Package rubygems is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source However the following packages replace it: ruby E: Package ‘rubygems’ has no installation candidate So I then i use: sudo gem install sass which seems to work fine, but when I try to convert a sass file (sasstest.scss) with sass --watch sasstest.scss:sasstest.css I get: /usr/lib/ruby/1.9.1/rubygems/dependency.rb:247:in to_specs': Could not find sass (>= 0) amongst [bundler-1.7.3, bundler-unload-1.0.2, executable-hooks-1.3.2, ffi-1.9.4, gem-wrappers-1.2.5, rb-inotify-0.9.5, rubygems-bundler-1.4.4, rvm-1.11.3.9] (Gem::LoadError) from /usr/lib/ruby/1.9.1/rubygems/dependency.rb:256:in to_spec’ from /usr/lib/ruby/1.9.1/rubygems.rb:1231:in gem' from /usr/local/bin/sass:22:in <main>’ which differs from the response you list above. Just a note that Digital Ocean has <a href=“https://www.digitalocean.com/community/articles/how-to-install-ruby-on-rails-on-ubuntu-12-04-lts-precise-pangolin-with-rvm”>another help article</a> on how to install Ruby that provides a different method than the one given above. Probably worth a mention that as I went through these steps in March 2014, the installation of rb-inotify specifying the version: gem install --version ‘~> 0.9’ rb-inotify was unsuccessful. However, just running gem install rb-inotify worked fine, and the sass watch command worked fine from then on. @tesh: Not necessarily. For instance, you can have *.css in .gitignore so that the repo only contains .scss files, and have an on-receive hook that recompiles all .scss files. This way your repo stays clean while your webserver is serving up-to-date css files. Is this mostly for those who have a remote VPS as a development environment?
__label__pos
0.502258
[Free] 2018(June) Dumps4cert Cisco 300-320 Dumps with VCE and PDF Download 261-270 Dumps4cert.com : Latest Dumps with PDF and VCE Files 2018 May Cisco Official New Released 300-320 100% Free Download! 100% Pass Guaranteed! Designing Cisco Network Service Architectures Question No: 261 Summary address blocks can be used to support which network application? 1. QoS 2. IPsec tunneling 3. Cisco TrustSec 4. NAT 5. DiffServ Answer: D Explanation: http://www.ciscopress.com/articles/article.asp?p=1763921 Summary address blocks can be used to support several network applications: Separate VLANs for voice and data, and even role-based addressing Bit splitting for route summarization Addressing for virtual private network (VPN) clients Network Address Translation (NAT) Question No: 262 Which statement about IPS and IDS solutions is true? 1. IDS and IPS read traffic only in inline mode. 2. IDS and IPS read traffic only in promiscuous mode. 3. An IDS reads traffic in inline mode, and an IPS reads traffic in promiscuous mode. 4. An IDS reads traffic in promiscuous mode, and an IPS reads traffic in inline mode. Answer: D Question No: 263 Which technology allows multiple instances of a routing table to coexist on the same router simultaneously? 1. VRF 2. Cisco virtual router 3. Instanced virtuer router 4. IS-IS Answer: A Question No: 264 Refer to the exhibit. Dumps4Cert 2018 PDF and VCE Which two features can enable high availability for first-hop Layer 3 redundancy? (Choose two.) 1. VPC 2. IGMP V2 3. VRRP 4. PIM 5. HSRP Answer: C,E Question No: 265 Which three options are recommended practices when configuring VTP? (Choose three.) 1. Set the switch to transparent mode. 2. Set the switch to server mode. 3. Enable VLAN pruning. 4. Disable VLAN pruning. 5. Specify a domain name. 6. Clear the domain name. Answer: A,D,E Explanation: http://www.ciscopress.com/articles/article.asp?p=1315434amp;seqNum=2 Question No: 266 Which option is a benefit of the vPC feature? 1. Cisco FabricPath is not required in the network domain. 2. This feature provides fault domain separation. 3. Nonfabric devices, such as a server or a classic Ethernet switch, can be connected to two fabric switches that are configured with vPC. 4. The control plane and management plane are combined into one logical plane. Answer: C Question No: 267 With which technology can VSS be combined to achieve better performance? 1. MEC 2. NSF 3. BFD 4. UDLD Answer: B Question No: 268 Merging two company network. No subnets overlap, but engineer must limit the networks advertised to new organization. Which feature implements this requirement? 1. interface ACL 2. stub area 3. passive interface 4. route filtering 5. route summary Answer: E Question No: 269 What are the three configuration requirements for implementing Modular QoS on a router? (Choose three.) 1. CoS 2. class map 3. precedence 4. service policy 5. priority 6. policy map Answer: B,D,F Question No: 270 Which option is correct when using Virtual Switching System? 1. Both control planes forward traffic simultaneously 2. Only the active switch forward traffic 3. Both data planes forward traffic simultaneously 4. Only the active switch handle the control plane Answer: C 100% Dumps4cert Free Download! Download Free Demo:300-320 Demo PDF 100% Dumps4cert Pass Guaranteed! 300-320 Dumps Dumps4cert ExamCollection Testking Lowest Price Guarantee Yes No No Up-to-Dated Yes No No Real Questions Yes No No Explanation Yes No No PDF VCE Yes No No Free VCE Simulator Yes No No Instant Download Yes No No
__label__pos
0.999795
Fraction calculator This fraction calculator performs basic and advanced fraction operations, expressions with fractions combined with integers, decimals, and mixed numbers. It also shows detailed step-by-step information about the fraction calculation procedure. The calculator helps in finding value from multiple fractions operations. Solve problems with two, three, or more fractions and numbers in one expression. The result: 4 - 2/5 = 18/5 = 3 3/5 = 3.6 Spelled result in words is eighteen fifths (or three and three fifths). How do we solve fractions step by step? 1. Subtract: 4 - 2/5 = 4/1 - 2/5 = 4 · 5/1 · 5 - 2/5 = 20/5 - 2/5 = 20 - 2/5 = 18/5 It is suitable to adjust both fractions to a common (equal, identical) denominator for adding, subtracting, and comparing fractions. The common denominator you can calculate as the least common multiple of both denominators - LCM(1, 5) = 5. It is enough to find the common denominator (not necessarily the lowest) by multiplying the denominators: 1 × 5 = 5. In the following intermediate step, it cannot further simplify the fraction result by canceling. In other words - four minus two fifths is eighteen fifths. Rules for expressions with fractions: Fractions - use a forward slash to divide the numerator by the denominator, i.e., for five-hundredths, enter 5/100. If you use mixed numbers, leave a space between the whole and fraction parts. Mixed numerals (mixed numbers or fractions) keep one space between the integer and fraction and use a forward slash to input fractions i.e., 1 2/3 . An example of a negative mixed fraction: -5 1/2. Because slash is both sign for fraction line and division, use a colon (:) as the operator of division fractions i.e., 1/2 : 1/3. Decimals (decimal numbers) enter with a decimal point . and they are automatically converted to fractions - i.e. 1.45. Math Symbols SymbolSymbol nameSymbol MeaningExample +plus signaddition 1/2 + 1/3 -minus signsubtraction 1 1/2 - 2/3 *asteriskmultiplication 2/3 * 3/4 ×times signmultiplication 2/3 × 5/6 :division signdivision 1/2 : 3 /division slashdivision 1/3 / 5 :coloncomplex fraction 1/2 : 1/3 ^caretexponentiation / power 1/4^3 ()parenthesescalculate expression inside first-3/5 - (-1/4) The calculator follows well-known rules for the order of operations. The most common mnemonics for remembering this order of operations are: PEMDAS - Parentheses, Exponents, Multiplication, Division, Addition, Subtraction. BEDMAS - Brackets, Exponents, Division, Multiplication, Addition, Subtraction BODMAS - Brackets, Of or Order, Division, Multiplication, Addition, Subtraction. GEMDAS - Grouping Symbols - brackets (){}, Exponents, Multiplication, Division, Addition, Subtraction. MDAS - Multiplication and Division have the same precedence over Addition and Subtraction. The MDAS rule is the order of operations part of the PEMDAS rule. Be careful; always do multiplication and division before addition and subtraction. Some operators (+ and -) and (* and /) have the same priority and must be evaluated from left to right. Fractions in word problems: more math problems »
__label__pos
0.995389
[Tutor] Is it thread safe to collect data from threads where run has finished? Wesley Brooks wesbrooks at gmail.com Sat Nov 1 13:04:33 CET 2008 Dear Users, I've got a few tasks that block for a while and cause my wxPython interface to lock up while they process. I'm thinking about migrating these to threads which I kick off when I want the task done. In the run bit of the thread the main work will be done, it will store the information as part of the object and when done post an event to the user interface for it to collect of the information and dispose of the thread. So there'll be a part of a wx event that looks something like: *self.loadThread = FileLoadThread(fileName, doneEvent) self.loadThread.start() * The FileLoadThread object would look like: *class FileLoadThread(threading.Thread): def __init__(self, mainGUI, fName, doneEvent): self.mainGUI = mainGUI self.fName = fName self.event = doneEvent threading.Thread.__init__(self) def run(self): self.dataObject = self.LoadFile(fName) wx.PostEvent(mainGUI, doneEvent)* ...where doneEvent is a custom event that signals to the user interface that it can collect the dataObject by doing the following: *self.dataObject = self.loadThread.dataObject* *del self.loadThread* Is this the best way to do this or should I just attach the dataObject to the event? Is the use of wx.PostEvent thread safe? Thanks in advance of any advice, Wesley Brooks -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://mail.python.org/pipermail/tutor/attachments/20081101/31986ced/attachment.htm> More information about the Tutor mailing list
__label__pos
0.530468
¾¤Î¥Ð¡¼¥¸¥ç¥ó¤Îʸ½ñ ¡§ 16 | 15 | 14 | 13 | 12 | 11 | 10 | 9.6 | 9.5 | 9.4 | 9.3 | 9.2 | 9.1 | 9.0 | 8.4 | 8.3 | 8.2 | 8.1 | 8.0 | 7.4 | 7.3 | 7.2 33.2. ¥Ç¡¼¥¿¥Ù¡¼¥¹Àܳ¤Î´ÉÍý ¤³¤ÎÀá¤Ç¤Ï¡¢¥Ç¡¼¥¿¥Ù¡¼¥¹Àܳ¤Î³«»Ï¡¢½ªÎ»¡¢¤ª¤è¤ÓÀÚ¤êÂØ¤¨Êý¤Ë¤Ä¤¤¤Æ²òÀ⤷¤Þ¤¹¡£ 33.2.1. ¥Ç¡¼¥¿¥Ù¡¼¥¹¥µ¡¼¥Ð¤Ø¤ÎÀܳ °Ê²¼¤ÎSQLʸ¤ò»ÈÍѤ·¤Æ¡¢¥Ç¡¼¥¿¥Ù¡¼¥¹¤ØÀܳ¤·¤Þ¤¹¡£ EXEC SQL CONNECT TO target [AS connection-name] [USER user-name]; target¤Ï°Ê²¼¤ÎÊýË¡¤Ç»ØÄꤵ¤ì¤Þ¤¹¡£ ÀܳÂоݤò¥ê¥Æ¥é¥ë¡Ê¤Ä¤Þ¤ê¡¢ÊÑ¿ô¤ò»²¾È¤·¤Ê¤¤·Á¡Ë¤Ç»ØÄꤷ¡¢¤½¤ÎÃͤò°úÍÑÉä¤Ç¤¯¤¯¤é¤Ê¤«¤Ã¤¿¾ì¹ç¡¢Âçʸ»ú¾®Ê¸»ú¤Î¶èÊ̤˴ؤ·¤ÆÄ̾ï¤ÎSQL¤Îµ¬Â§¤¬Å¬ÍѤµ¤ì¤Þ¤¹¡£ ¤Þ¤¿¡¢¤³¤Î¾ì¹ç¡¢É¬Íפ˱þ¤¸¤Æ¸Ä¡¹¤Î¥Ñ¥é¥á¡¼¥¿¤òÆó½Å°úÍÑÉä¤ÇÊÌ¡¹¤Ë¤¯¤¯¤ë¤³¤È¤â¤Ç¤­¤Þ¤¹¡£ ¼ÂºÝ¤Ë¤Ï¡¢¤ª¤½¤é¤¯¡Êñ°ì°úÍÑÉä¤Ç¤¯¤¯¤é¤ì¤¿¡Ëʸ»úÎó¥ê¥Æ¥é¥ë¤â¤·¤¯¤ÏÊÑ¿ô¤Î»²¾È¤ò»ÈÍѤ·¤¿Êý¤¬¥¨¥é¡¼¤ò¤è¤êËɻߤ¹¤ë¤³¤È¤¬¤Ç¤­¤Þ¤¹¡£ DEFAULTÀܳÂоݤϡ¢¥Ç¥Õ¥©¥ë¥È¥Ç¡¼¥¿¥Ù¡¼¥¹¡¢¥Ç¥Õ¥©¥ë¥È¤Î¥æ¡¼¥¶Ì¾¤ÇÀܳ¤ò½é´ü²½¤·¤Þ¤¹¡£ ¤³¤Î¾ì¹ç¤Ï¡¢¥æ¡¼¥¶Ì¾¤ÈÀܳ̾¤òʬ¤±¤Æ»ØÄꤹ¤ë¤³¤È¤¬¤Ç¤­¤Þ¤»¤ó¡£ ¥æ¡¼¥¶Ì¾¤ò»ØÄꤹ¤ë¤Ë¤Ï¡¢Ê̤ÎÊýË¡¤â¤¢¤ê¤Þ¤¹¡£ ¤³¤ì¤Þ¤ÇƱÍÍ¡¢username¤Èpassword¤Ï¡¢SQL¼±Ê̻ҡ¢SQLʸ»úÎó¥ê¥Æ¥é¥ë¡¢Ê¸»ú·¿ÊÑ¿ô¤Ø¤Î»²¾È¤ò¼è¤ë¤³¤È¤¬¤Ç¤­¤Þ¤¹¡£ 1¤Ä¤Î¥×¥í¥°¥é¥àÆâ¤ÇÊ£¿ô¤ÎÀܳ¤ò½èÍý¤¹¤ë¾ì¹ç¤Ë¤Ï¡¢connection-name¤ò»ÈÍѤ·¤Þ¤¹¡£ ¥×¥í¥°¥é¥à¤Ç1¤Ä¤·¤«Àܳ¤ò»È¤ï¤Ê¤¤¾ì¹ç¤Ï¾Êά¤·¤Æ¹½¤¤¤Þ¤»¤ó¡£ ºÇ¤âºÇ¶á¤Ë³«¤«¤ì¤¿Àܳ¤¬¸½ºß¤ÎÀܳ¤Ë¤Ê¤ê¡¢SQLʸ¤ò¼Â¹Ô¤·¤è¤¦¤È¤¹¤ë»þ¤Ë¥Ç¥Õ¥©¥ë¥È¤Ç¤³¤ÎÀܳ¤¬»ÈÍѤµ¤ì¤Þ¤¹¡ÊËܾϤθå¤ÇÀâÌÀ¤·¤Þ¤¹¡Ë¡£ °Ê²¼¤ËCONNECTʸ¤Ë¤Ä¤¤¤Æ¡¢¿ôÎã¤ò¼¨¤·¤Þ¤¹¡£ EXEC SQL CONNECT TO [email protected]; EXEC SQL CONNECT TO unix:postgresql://sql.mydomain.com/mydb AS myconnection USER john; EXEC SQL BEGIN DECLARE SECTION; const char *target = "[email protected]"; const char *user = "john"; const char *passwd = "secret"; EXEC SQL END DECLARE SECTION; ... EXEC SQL CONNECT TO :target USER :user USING :passwd; /* or EXEC SQL CONNECT TO :target USER :user/:passwd; */ ºÇ¸å¤Î·Á¼°¤Ç¤Ï¡¢Ê¸»úÊÑ¿ô»²¾È¤È¤·¤Æ¾å¤ò»²¾È¤¹¤ëÊÑ¿ô¤ò»ÈÍѤ·¤Æ¤¤¤Þ¤¹¡£ ¸å¤ÎÀá¤Ç¡¢ÀÜÆ¬¼­¤Ë¥³¥í¥ó¤ò»ý¤Ä¾ì¹ç¤ÎSQLʸÆâ¤Ç¤ÎCÊÑ¿ô¤Î»ÈÍÑÊýË¡¤Ë¤Ä¤¤¤ÆÀâÌÀ¤·¤Þ¤¹¡£ ÀܳÂоݤνñ¼°¤Ïɸ½àSQL¤Ç¤Ïµ¬Äꤵ¤ì¤Æ¤¤¤Ê¤¤¤³¤È¤ËÃí°Õ¤·¤Æ¤¯¤À¤µ¤¤¡£ ¤½¤Î¤¿¤á¡¢°Ü¿¢²Äǽ¤Ê¥¢¥×¥ê¥±¡¼¥·¥ç¥ó¤ò³«È¯¤·¤¿¤¤¤Î¤Ç¤¢¤ì¤Ð¡¢¾å¤ÎÎã¤ÎºÇ¸å¤ÎÊýË¡¤ò´ð¤Ë¤·¤Æ¡¢ÀܳÂоÝʸ»úÎó¤ò¤É¤³¤«¤Ë¥«¥×¥»¥ë²½¤·¤Æ¤¯¤À¤µ¤¤¡£ 33.2.2. Àܳ¤ÎÁªÂò Á°Àá¤Ç¼¨¤·¤¿SQLʸ¤Ï¸½ºß¤ÎÀܳ¡¢¤Ä¤Þ¤ê¡¢ºÇ¤âºÇ¶á¤Ë³«¤¤¤¿Àܳ¾å¤Ç¼Â¹Ô¤µ¤ì¤Þ¤¹¡£ Ê£¿ô¤ÎÀܳ¤ò´ÉÍý¤¹¤ëɬÍפ¬¤¢¤ë¥¢¥×¥ê¥±¡¼¥·¥ç¥ó¤Ç¤Ï¡¢¤³¤ì¤ò½èÍý¤¹¤ë2¤Ä¤ÎÊýË¡¤¬¤¢¤ê¤Þ¤¹¡£ 1¤ÄÌܤÎÁªÂò»è¤Ï¡¢³ÆSQLʸ¤ÇÌÀ¼¨Åª¤ËÀܳ¤òÁªÂò¤¹¤ë¤³¤È¤Ç¤¹¡£ °Ê²¼¤ËÎã¤ò¼¨¤·¤Þ¤¹¡£ EXEC SQL AT connection-name SELECT ...; ¥¢¥×¥ê¥±¡¼¥·¥ç¥ó¤¬Ê£¿ô¤ÎÀܳ¤òÉÔÆÃÄê¤Ê½çÈ֤ǻÈÍѤ¹¤ëɬÍפ¬¤¢¤ë¾ì¹ç¡¢¤³¤ÎÁªÂò»è¤ÏÆÃ¤ËŬ¤·¤Æ¤¤¤Þ¤¹¡£ ¥¢¥×¥ê¥±¡¼¥·¥ç¥ó¤Î¼Â¹Ô¤ËÊ£¿ô¥¹¥ì¥Ã¥É¤ò»ÈÍѤ¹¤ë¾ì¹ç¡¢¥¹¥ì¥Ã¥É´Ö¤ÇÀܳ¤òƱ»þ¤Ë¶¦Í­¤Ç¤­¤Þ¤»¤ó¡£ Àܳ¤Ø¤Î¥¢¥¯¥»¥¹¤ò¡Ê¥ß¥å¡¼¥Æ¥¯¥¹¤ò»ÈÍѤ·¤Æ¡ËÌÀ¼¨Åª¤ËÀ©¸æ¤¹¤ë¤«¡¢¤Þ¤¿¤Ï³Æ¥¹¥ì¥Ã¥ÉÍѤÎÀܳ¤ò»ÈÍѤ¹¤ë¤«¤ò¹Ô¤ï¤Ê¤±¤ì¤Ð¤Ê¤ê¤Þ¤»¤ó¡£ ³Æ¥¹¥ì¥Ã¥É¤¬¸ÄÊ̤ÎÀܳ¤ò»ÈÍѤ¹¤ë¾ì¹ç¡¢AT¶ç¤ò»ÈÍѤ·¤Æ¤½¤Î¥¹¥ì¥Ã¥É¤¬»ÈÍѤ¹¤ëÀܳ¤ò»ØÄꤹ¤ëɬÍפ¬¤¢¤ê¤Þ¤¹¡£ 2ÈÖÌܤÎÁªÂò»è¤Ï¡¢¸½ºß¤ÎÀܳ¤òÀÚ¤êÂØ¤¨¤ëSQLʸ¤ò¼Â¹Ô¤¹¤ë¤³¤È¤Ç¤¹¡£ °Ê²¼¤ÎSQLʸ¤Ç¤¹¡£ EXEC SQL SET CONNECTION connection-name; ¿¤¯¤ÎSQLʸ¤òƱ°ìÀܳ¤ËÂФ·¤Æ»ÈÍѤ¹¤ë¾ì¹ç¡¢¤³¤ÎÁªÂò»è¤ÏÆÃ¤ËÊØÍø¤Ç¤¹¡£ ¤³¤ì¤Ï¥¹¥ì¥Ã¥É¤ò¹Íθ¤·¤Æ¤¤¤Þ¤»¤ó¡£ °Ê²¼¤ËÊ£¿ô¤Î¥Ç¡¼¥¿¥Ù¡¼¥¹¥³¥Í¥¯¥·¥ç¥ó¤ò´ÉÍý¤·¤Æ¤¤¤ë¥×¥í¥°¥é¥à¤ÎÎã¤ò¼¨¤·¤Þ¤¹¡£ #include <stdio.h> EXEC SQL BEGIN DECLARE SECTION; char dbname[1024]; EXEC SQL END DECLARE SECTION; int main() { EXEC SQL CONNECT TO testdb1 AS con1 USER testuser; EXEC SQL CONNECT TO testdb2 AS con2 USER testuser; EXEC SQL CONNECT TO testdb3 AS con3 USER testuser; /* This query would be executed in the last opened database "testdb3". */ EXEC SQL SELECT current_database() INTO :dbname; printf("current=%s (should be testdb3)\n", dbname); /* Using "AT" to run a query in "testdb2" */ EXEC SQL AT con2 SELECT current_database() INTO :dbname; printf("current=%s (should be testdb2)\n", dbname); /* Switch the current connection to "testdb1". */ EXEC SQL SET CONNECTION con1; EXEC SQL SELECT current_database() INTO :dbname; printf("current=%s (should be testdb1)\n", dbname); EXEC SQL DISCONNECT ALL; return 0; } ¤³¤ÎÎã¤Ï¡¢¼¡¤Î¤è¤¦¤Ê½ÐÎϤòÀ¸À®¤·¤Þ¤¹¡£ current=testdb3 (should be testdb3) current=testdb2 (should be testdb2) current=testdb1 (should be testdb1) 33.2.3. Àܳ¤òÊĤ¸¤ë Àܳ¤òÊĤ¸¤ë¤Ë¤Ï°Ê²¼¤ÎSQLʸ¤ò»ÈÍѤ·¤Þ¤¹¡£ EXEC SQL DISCONNECT [connection]; connection¤Ï°Ê²¼¤ÎÊýË¡¤Ç»ØÄꤵ¤ì¤Þ¤¹¡£ Àܳ̾¤Î»ØÄ꤬¤Ê¤±¤ì¤Ð¡¢¸½ºß¤ÎÀܳ¤¬ÊĤ¸¤é¤ì¤Þ¤¹¡£ ¥¢¥×¥ê¥±¡¼¥·¥ç¥ó¤Ç¤Ï¡¢²áµî¤Ë³«¤¤¤¿¤¹¤Ù¤Æ¤ÎÀܳ¤òÌÀ¼¨Åª¤ËÊĤ¸¤ë¤³¤È¤ò¿ä¾©¤·¤Þ¤¹¡£
__label__pos
0.563462
OAuth (React) The OAuth2 component provides a simple way to authenticate users so they can gain access to your cloud data sources. When users sign-in using OAuth2, your application gets an accessToken that identifies the user, which may provide them with permissions to read and write to the data source. Learn about FlexGrid | OAuth2 Documentation | Cloud API Reference This example uses React. import 'bootstrap.css'; import '@grapecity/wijmo.styles/wijmo.css'; import './app.css'; import * as React from 'react'; import * as ReactDOM from 'react-dom'; import { Firestore, OAuth2 } from '@grapecity/wijmo.cloud'; import { Tooltip, PopupPosition, SortDescription } from '@grapecity/wijmo'; import { DataMap } from '@grapecity/wijmo.grid'; import { FlexGrid, FlexGridColumn } from '@grapecity/wijmo.react.grid'; const API_KEY = 'AIzaSyCvuXEzP57I5CQ9ifZDG2_K8M3nDa1LOPE'; class App extends React.Component { constructor(props) { super(props); // create the Firestore data source const PROJECT_ID = 'test-9c0be'; this._fs = new Firestore(PROJECT_ID, API_KEY, { collections: ['Products', 'Categories', 'Suppliers'] }); // expose products, current user let products = this._fs.getCollection('Products'); products.sortDescriptions.push(new SortDescription('ProductID', true)); let mapCat = new DataMap(this._fs.getCollection('Categories'), 'CategoryID', 'CategoryName'); let mapSup = new DataMap(this._fs.getCollection('Suppliers'), 'SupplierID', 'CompanyName'); this.state = { products: products, mapCat: mapCat, mapSup: mapSup, user: null }; } componentDidMount() { // create the OAuth2 component const CLIENT_ID = '60621001861-h0u4ek4kmd3va9o2bubhq9ean0bgrhu2.apps.googleusercontent.com'; const SCOPES = ['https://www.googleapis.com/auth/userinfo.email']; const auth = new OAuth2(API_KEY, CLIENT_ID, SCOPES, { error: (s, e) => { console.log(JSON.stringify(e.error, null, 2)); } }); // button to log in/out let oAuthBtn = document.getElementById('auth-btn'), oAuthTip = new Tooltip({ cssClass: 'auth-tip', position: PopupPosition.BelowRight, gap: 0 }); // click button to log user in or out oAuthBtn.addEventListener('click', () => { if (auth.user) { auth.signOut(); } else { auth.signIn(); } }); // update button/sheet state when user changes auth.userChanged.addHandler((s) => { let user = s.user; oAuthBtn.textContent = user ? 'Sign Out' : 'Sign In'; oAuthTip.setTooltip(oAuthBtn, user ? `<b>Signed in as</b><br/> ${user.firstName}<br/> <img src="${user.imageUrl}"/><br/> <span class="e-mail">${user.eMail}</span>` : null); // update Firestore id token this._fs.idToken = user ? s.idToken : null; // Firestore authentication //this._fs.accessToken = user ? s.accessToken : null; // IAM authentication // allow editing if we are authenticated this.setState({ user: user }); // update message document.getElementById('auth-msg').textContent = user ? 'You are signed in, so you may edit the grid (if you have permissions).' : 'You are not signed in, so you cannot edit the grid.'; }); } render() { return <div className='container-fluid'> <div className="auth"> <div id="auth-msg"></div> <button id="auth-btn" className="btn btn-primary"> Sign In </button> </div> <FlexGrid selectionMode="MultiRange" showMarquee={true} allowAddNew={true} allowDelete={true} isReadOnly={!this.state.user} itemsSource={this.state.products} autoGenerateColumns={false}> <FlexGridColumn binding="ProductID" header="ID"/> <FlexGridColumn binding="ProductName" header="Product Name" width={200}/> <FlexGridColumn binding="CategoryID" header="Category" width={150} dataMap={this.state.mapCat}/> <FlexGridColumn binding="SupplierID" header="Supplier" width={150} dataMap={this.state.mapSup}/> <FlexGridColumn binding="UnitPrice" header="Unit Price" format="n2"/> <FlexGridColumn binding="QuantityPerUnit" header="Qty per Unit" width={150}/> <FlexGridColumn binding="UnitsInStock" header="Units in Stock"/> <FlexGridColumn binding="Discontinued"/> </FlexGrid> </div>; } } ReactDOM.render(<App />, document.getElementById('app')); <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <title>GrapeCity Wijmo OAuth2</title> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <!-- SystemJS --> <script src="node_modules/systemjs/dist/system.src.js"></script> <script src="systemjs.config.js"></script> <script> System.import('./src/app'); </script> </head> <body> <div id="app"></div> </body> </html> .wj-flexgrid { height: 350px; } .auth { display: flex; align-items: center; justify-content: space-between; padding: 12px; } .auth-tip { text-align: right; background: #fffacf8e; border-radius: 0; } .auth-tip .e-mail { font-size: 70%; font-style: italic; } (function (global) { System.config({ transpiler: 'plugin-babel', babelOptions: { es2015: true, react: true }, meta: { '*.css': { loader: 'css' } }, paths: { // paths serve as alias 'npm:': 'node_modules/' }, // map tells the System loader where to look for things map: { 'jszip': 'npm:jszip/dist/jszip.js', '@grapecity/wijmo': 'npm:@grapecity/wijmo/index.js', '@grapecity/wijmo.input': 'npm:@grapecity/wijmo.input/index.js', '@grapecity/wijmo.styles': 'npm:@grapecity/wijmo.styles', '@grapecity/wijmo.cultures': 'npm:@grapecity/wijmo.cultures', '@grapecity/wijmo.chart': 'npm:@grapecity/wijmo.chart/index.js', '@grapecity/wijmo.chart.analytics': 'npm:@grapecity/wijmo.chart.analytics/index.js', '@grapecity/wijmo.chart.animation': 'npm:@grapecity/wijmo.chart.animation/index.js', '@grapecity/wijmo.chart.annotation': 'npm:@grapecity/wijmo.chart.annotation/index.js', '@grapecity/wijmo.chart.finance': 'npm:@grapecity/wijmo.chart.finance/index.js', '@grapecity/wijmo.chart.finance.analytics': 'npm:@grapecity/wijmo.chart.finance.analytics/index.js', '@grapecity/wijmo.chart.hierarchical': 'npm:@grapecity/wijmo.chart.hierarchical/index.js', '@grapecity/wijmo.chart.interaction': 'npm:@grapecity/wijmo.chart.interaction/index.js', '@grapecity/wijmo.chart.radar': 'npm:@grapecity/wijmo.chart.radar/index.js', '@grapecity/wijmo.chart.render': 'npm:@grapecity/wijmo.chart.render/index.js', '@grapecity/wijmo.chart.webgl': 'npm:@grapecity/wijmo.chart.webgl/index.js', '@grapecity/wijmo.chart.map': 'npm:@grapecity/wijmo.chart.map/index.js', '@grapecity/wijmo.gauge': 'npm:@grapecity/wijmo.gauge/index.js', '@grapecity/wijmo.grid': 'npm:@grapecity/wijmo.grid/index.js', '@grapecity/wijmo.grid.detail': 'npm:@grapecity/wijmo.grid.detail/index.js', '@grapecity/wijmo.grid.filter': 'npm:@grapecity/wijmo.grid.filter/index.js', '@grapecity/wijmo.grid.search': 'npm:@grapecity/wijmo.grid.search/index.js', '@grapecity/wijmo.grid.grouppanel': 'npm:@grapecity/wijmo.grid.grouppanel/index.js', '@grapecity/wijmo.grid.multirow': 'npm:@grapecity/wijmo.grid.multirow/index.js', '@grapecity/wijmo.grid.transposed': 'npm:@grapecity/wijmo.grid.transposed/index.js', '@grapecity/wijmo.grid.transposedmultirow': 'npm:@grapecity/wijmo.grid.transposedmultirow/index.js', '@grapecity/wijmo.grid.pdf': 'npm:@grapecity/wijmo.grid.pdf/index.js', '@grapecity/wijmo.grid.sheet': 'npm:@grapecity/wijmo.grid.sheet/index.js', '@grapecity/wijmo.grid.xlsx': 'npm:@grapecity/wijmo.grid.xlsx/index.js', '@grapecity/wijmo.grid.selector': 'npm:@grapecity/wijmo.grid.selector/index.js', '@grapecity/wijmo.grid.cellmaker': 'npm:@grapecity/wijmo.grid.cellmaker/index.js', '@grapecity/wijmo.grid.immutable': 'npm:@grapecity/wijmo.grid.immutable/index.js', '@grapecity/wijmo.touch': 'npm:@grapecity/wijmo.touch/index.js', '@grapecity/wijmo.cloud': 'npm:@grapecity/wijmo.cloud/index.js', '@grapecity/wijmo.nav': 'npm:@grapecity/wijmo.nav/index.js', '@grapecity/wijmo.odata': 'npm:@grapecity/wijmo.odata/index.js', '@grapecity/wijmo.olap': 'npm:@grapecity/wijmo.olap/index.js', '@grapecity/wijmo.rest': 'npm:@grapecity/wijmo.rest/index.js', '@grapecity/wijmo.pdf': 'npm:@grapecity/wijmo.pdf/index.js', '@grapecity/wijmo.pdf.security': 'npm:@grapecity/wijmo.pdf.security/index.js', '@grapecity/wijmo.viewer': 'npm:@grapecity/wijmo.viewer/index.js', '@grapecity/wijmo.xlsx': 'npm:@grapecity/wijmo.xlsx/index.js', '@grapecity/wijmo.undo': 'npm:@grapecity/wijmo.undo/index.js', '@grapecity/wijmo.interop.grid': 'npm:@grapecity/wijmo.interop.grid/index.js', '@grapecity/wijmo.barcode': 'npm:@grapecity/wijmo.barcode/index.js', '@grapecity/wijmo.barcode.common': 'npm:@grapecity/wijmo.barcode.common/index.js', '@grapecity/wijmo.barcode.composite': 'npm:@grapecity/wijmo.barcode.composite/index.js', '@grapecity/wijmo.barcode.specialized': 'npm:@grapecity/wijmo.barcode.specialized/index.js', "@grapecity/wijmo.react.chart.analytics": "npm:@grapecity/wijmo.react.chart.analytics/index.js", "@grapecity/wijmo.react.chart.animation": "npm:@grapecity/wijmo.react.chart.animation/index.js", "@grapecity/wijmo.react.chart.annotation": "npm:@grapecity/wijmo.react.chart.annotation/index.js", "@grapecity/wijmo.react.chart.finance.analytics": "npm:@grapecity/wijmo.react.chart.finance.analytics/index.js", "@grapecity/wijmo.react.chart.finance": "npm:@grapecity/wijmo.react.chart.finance/index.js", "@grapecity/wijmo.react.chart.hierarchical": "npm:@grapecity/wijmo.react.chart.hierarchical/index.js", "@grapecity/wijmo.react.chart.interaction": "npm:@grapecity/wijmo.react.chart.interaction/index.js", "@grapecity/wijmo.react.chart.radar": "npm:@grapecity/wijmo.react.chart.radar/index.js", "@grapecity/wijmo.react.chart": "npm:@grapecity/wijmo.react.chart/index.js", "@grapecity/wijmo.react.core": "npm:@grapecity/wijmo.react.core/index.js", '@grapecity/wijmo.react.chart.map': 'npm:@grapecity/wijmo.react.chart.map/index.js', "@grapecity/wijmo.react.gauge": "npm:@grapecity/wijmo.react.gauge/index.js", "@grapecity/wijmo.react.grid.detail": "npm:@grapecity/wijmo.react.grid.detail/index.js", "@grapecity/wijmo.react.grid.filter": "npm:@grapecity/wijmo.react.grid.filter/index.js", "@grapecity/wijmo.react.grid.grouppanel": "npm:@grapecity/wijmo.react.grid.grouppanel/index.js", '@grapecity/wijmo.react.grid.search': 'npm:@grapecity/wijmo.react.grid.search/index.js', "@grapecity/wijmo.react.grid.multirow": "npm:@grapecity/wijmo.react.grid.multirow/index.js", "@grapecity/wijmo.react.grid.sheet": "npm:@grapecity/wijmo.react.grid.sheet/index.js", '@grapecity/wijmo.react.grid.transposed': 'npm:@grapecity/wijmo.react.grid.transposed/index.js', '@grapecity/wijmo.react.grid.transposedmultirow': 'npm:@grapecity/wijmo.react.grid.transposedmultirow/index.js', '@grapecity/wijmo.react.grid.immutable': 'npm:@grapecity/wijmo.react.grid.immutable/index.js', "@grapecity/wijmo.react.grid": "npm:@grapecity/wijmo.react.grid/index.js", "@grapecity/wijmo.react.input": "npm:@grapecity/wijmo.react.input/index.js", "@grapecity/wijmo.react.olap": "npm:@grapecity/wijmo.react.olap/index.js", "@grapecity/wijmo.react.viewer": "npm:@grapecity/wijmo.react.viewer/index.js", "@grapecity/wijmo.react.nav": "npm:@grapecity/wijmo.react.nav/index.js", "@grapecity/wijmo.react.base": "npm:@grapecity/wijmo.react.base/index.js", '@grapecity/wijmo.react.barcode.common': 'npm:@grapecity/wijmo.react.barcode.common/index.js', '@grapecity/wijmo.react.barcode.composite': 'npm:@grapecity/wijmo.react.barcode.composite/index.js', '@grapecity/wijmo.react.barcode.specialized': 'npm:@grapecity/wijmo.react.barcode.specialized/index.js', 'jszip': 'npm:jszip/dist/jszip.js', 'react': 'npm:react/umd/react.production.min.js', 'react-dom': 'npm:react-dom/umd/react-dom.production.min.js', 'redux': 'npm:redux/dist/redux.min.js', 'react-redux': 'npm:react-redux/dist/react-redux.min.js', 'bootstrap.css': 'npm:bootstrap/dist/css/bootstrap.min.css', 'css': 'npm:systemjs-plugin-css/css.js', 'plugin-babel': 'npm:systemjs-plugin-babel/plugin-babel.js', 'systemjs-babel-build':'npm:systemjs-plugin-babel/systemjs-babel-browser.js' }, // packages tells the System loader how to load when no filename and/or no extension packages: { src: { defaultExtension: 'jsx' }, "node_modules": { defaultExtension: 'js' }, } }); })(this);
__label__pos
0.648942
What Are Blockchain Bridges and Why Do They Keep Getting Hacked? There are one-way (unidirectional) bridges and two-way (bidirectional) bridges. A one-way bridge means users can only bridge assets to one destination blockchain but not back to its native blockchain. To understand what a blockchain bridge is, you need to first understand what a blockchain is. Risks of Blockchain Bridges The bottom line is that crypto as a whole takes a reputational and financial hit every time an exploit makes waves. The answer is to learn from the mistakes hackers teach us time and again, becoming more proactive in our efforts to prevent repeat performances. The backend server needs to validate the structure of the transaction’s emitted event, as well as the contract address that emitted the event. If the latter is neglected, an attacker could deploy a malicious contract to forge a deposit event with the same structure as a legitimate deposit event. To enhance the security of bridges, it’s valuable to understand common bridge security vulnerabilities and test the bridges for them before launch. How $323M in crypto was stolen from a blockchain bridge called Wormhole You can find a few blockchain bridge projects making their way towards popularity. The bridges provide seamless transactions between popular blockchain networks. In addition, every bridge has a different approach to operations based on its time. Therefore, you are more likely to identify profound variations in the transfer times for every bridge. Blockchain technology has covered quite an extensive journey since its introduction to the world in 2008 with the Bitcoin whitepaper. Additionally, it makes it easier for developers from different networks to work together to create new user platforms. Cross-chain technology encourages quicker transaction processing times and immediate token exchanges from the user’s perspective. This allows for more cross-chain transactions and the ability to access different DeFi services on different chains, leading to more innovation and growth in the DeFi space. Trustless bridges are much more complicated on a technical level than some custodial bridges. This type of bridge can include many ins and outs across the blockchains they operate. As such, trustless bridges have faced many different attacks and exploits in recent years. Exploring Various Ecosystem dApps A blockchain bridge is a protocol connecting two blockchains to allow interactions between them. If you own bitcoin but want to participate in DeFi activity on the Ethereum network, a blockchain bridge enables you to do so without https://www.xcritical.com/ selling your bitcoin. Bridge services “wrap” cryptocurrency to convert one type of coin into another. So if you go to a bridge to use another currency, like Bitcoin (BTC), the bridge will spit out wrapped bitcoins (WBTC). Risks of Blockchain Bridges Various newer blockchains based on different consensus protocols came into existence shortly afterward. Bridges have rightly earned a reputation as Web3’s weak link after a string of exploits this year. Cross-chain bridges make interoperability within the blockchain sphere possible. They enable protocols to communicate with one another, share data and build exciting new use cases that are helping propel Web3 into new frontiers. But as this month’s BNB Smart Chain exploit reminds us, they are vulnerable to attack. Immediately tracing and labeling funds in the Chainalysis platform can make the difference in preventing bad actors from cashing out their ill-gotten gains. Defining bridges in blockchain This can be problematic since passing the zero address to the function can bypass the whitelist verification even if implemented incorrectly. If the backend server does not verify which address emitted the event, it would consider this a valid transaction and sign the message. The attacker could then send the transaction hash to the backend, bypassing verification and allowing them to withdraw the tokens from the target chain. The attackers also need victims to approve the bridge contract to transfer tokens using the function “transferFrom” to drain assets from the bridge contract. The other token issuance method some bridges employ is known as the “liquidity pool method”. This process works similarly to liquidity farming and relies on network participants to succeed. • A blockchain bridge facilitates the conversion of one native asset from one blockchain to its equivalent on another blockchain. • Just a few years ago, centralized exchanges were by far the most frequent targets of hacks in the industry. • The most common example in practice is when users leverage centralized exchanges to swap or bridge their own tokens. • In the case of this bridge hack, it seems attackers used social engineering to trick their way into accessing the private encryption keys used to verify transactions on the network. • But as this month’s BNB Smart Chain exploit reminds us, they are vulnerable to attack. Wrapped asset bridges facilitate the transfer of non-native assets between blockchains. A great example would be Wrapped BTC, which mints WBTC on Ethereum for trading and DeFi purposes. Bridges use wrapped tokens, which lock tokens in one blockchain into a smart contract. After a decentralized cross-chain oracle called a “guardian” certifies that the coins have been properly locked on one chain, the bridge mints or releases tokens of the same value on the other chain. Wormhole bridges the Solana blockchain with other blockchains, including those for Avalanche, Oasis, Binance Smart Chain, Ethereum, Polygon, and Terra. Non-custodial bridges operate in a decentralized manner, relying on smart contracts to manage the crypto locking and minting processes, removing the need to trust a bridge operator. Blockchain bridges by mechanisms The subsequent rise in the number of cryptocurrencies and development of blockchain networks with programmability, such as Ethereum, have created a completely new ecosystem. Blockchain promises the value of decentralization and freedom from the control of any individual or institution. However, majority of blockchain networks exist in the form of isolated communities with their own economies. Therefore, blockchain bridges have become one of the inevitable necessities for the decentralized application ecosystem. Blockchain bridges play a crucial role in achieving interoperability across different blockchain networks. Risks of Blockchain Bridges Under this type of bridge, members are obliged to cede control of their assets to a governing body. However, there are not as many reliable services available today, which could force users to trust smaller and less-known companies. One of the most popular trusted bridge initiatives is Wrapped Bitcoin (wBTC), which allows sBitcoin users to pursue the opportunities of Ethereum. Custodial vs Non-custodial Bridges Interoperability has the potential to be the catalyst for Internet innovation. Improving blockchain networks’ interoperability and their widespread adoption depends on using blockchain bridges. The number https://www.xcritical.com/blog/what-is-a-blockchain-bridge-and-how-it-works/ of users, bridges, and overall transaction volume on these bridges have all increased exceptionally. As the Internet transitions to Web3, the blockchain bridge will also keep expanding in the future. Risks of Blockchain Bridges Leave a Comment
__label__pos
0.715575
Coders Packet Bus Seat Booking System Using Java By MASNA VISHNUDEV In this module, we are going to build a code for Bus seat booking system using java. In this, we can book a window seat or a non-window seat. Implementation Of Booking System In this, the passenger can have a choice to book a window seat or an aisle seat. Actually, some people like the window seat, and some people like the aisle seat. So this is the Bus ticket reservation gives the choice to book their own type of seat. A person can book multiple seats of their choice until the seats are available. To build this project different loops are used.   The following code gives the complete description: import java.util.Scanner; import java.util.Date; public class HYD { private static int[] seats = new int[12]; public static void main(String args[]) { System.out.println("Welcome to the HYD Bus reservation system!"); System.out.println("Have a fabulous HYD ride!"); System.out.println(); 1 for (int i = 0; i < 12; i++) { seats[i] = 0; } Scanner s = new Scanner(System.in); int choice = 1; System.out.print("Please enter your choice\n1.window seat\n2.Aisle seat\n0.Exit.\n"); choice = s.nextInt(); while (choice != 0) { int seatnumber = 0; if (choice == 1) { seatnumber = bookWindow(); if (seatnumber == -1) { seatnumber = bookAisle(); if (seatnumber != -1) { System. out.println("Sorry, we were not able to book a window seat. But do have an aisle seat."); printBoardingPass(seatnumber); } } else { System.out.println("Congratulations, we have a window seat available!"); printBoardingPass(seatnumber); } } else if (choice == 2) { seatnumber = bookAisle(); if (seatnumber == -1) { seatnumber = bookWindow(); if (seatnumber != -1) { System.out.println("Sorry, we were not able to book an aisle seat. But do have a window seat."); printBoardingPass(seatnumber); } } else { System.out.println("Congratulations, we have an aisle seat available!"); printBoardingPass(seatnumber); } } else { System.out.println("Invalid choice made. Please try again!"); choice = 0; } if (seatnumber == -1) { System.out.println("We are sorry, there are no window or aisle seats"); System.out.println(); } System.out.print("Please enter your choice\n1.window seat\n2.Aisle seat\n0.Exit.\n"); choice = s.nextInt(); } } private static int bookWindow() { for (int i = 0; i < 6; i++) { if (seats[i] == 0) { seats[i] = 1; return i + 1; } } return -1; } private static int bookAisle() { for (int i = 6; i < 12; i++) { if (seats[i] == 0) { seats[i] = 1; return i + 1; } } return -1; } private static void printBoardingPass(int seatnumber) { Date timenow = new Date(); System.out.println(); System.out.println("Date: " + timenow.toString()); System.out.println("Boarding pass for seat number: " + seatnumber); System.out.println("Your Booking Successful!"); System.out.println("This ticket is non-refundable and non-transferable."); System.out.println("Please be curteous, do not smoke. Enjoy your trip."); System.out.println("Have a nice day"); System.out.println(); } }   Output1: Booking of a Window seat.     Output 2: Booking for an Aisle seat.   And this is all about the bus ticket reservation system using java. Thank You.   Download Complete Code Comments No comments yet
__label__pos
0.999225
PageRenderTime 53ms CodeModel.GetById 22ms RepoModel.GetById 1ms app.codeStats 0ms /scripts/ca/get_legislation.py https://github.com/hoverbird/fiftystates Python | 112 lines | 106 code | 4 blank | 2 comment | 1 complexity | 4dee946dd506e7f4a2c61be185bc7e32 MD5 | raw file 1. #!/usr/bin/env python 2. import urllib2 3. import re 4. import datetime as dt 5. from BeautifulSoup import BeautifulSoup 6. # ugly hack 7. import sys 8. sys.path.append('./scripts') 9. from pyutils.legislation import LegislationScraper, NoDataForYear 10. class CALegislationScraper(LegislationScraper): 11. state = 'ca' 12. def get_bill_info(self, chamber, session, bill_id): 13. detail_url = 'http://www.leginfo.ca.gov/cgi-bin/postquery?bill_number=%s_%s&sess=%s' % (bill_id[:2].lower(), bill_id[2:], session.replace('-', '')) 14. # Get the details page and parse it with BeautifulSoup. These 15. # pages contain a malformed 'p' tag that (certain versions of) 16. # BS choke on, so we replace it with a regex before parsing. 17. details_raw = urllib2.urlopen(detail_url).read() 18. details_raw = details_raw.replace('<P ALIGN=CENTER">', '') 19. details = BeautifulSoup(details_raw) 20. # Get the history page (following a link from the details page). 21. # Once again, we remove tags that BeautifulSoup chokes on 22. # (including all meta tags, because bills with quotation marks 23. # in the title come to us w/ malformed meta tags) 24. hist_link = details.find(href=re.compile("_history.html")) 25. hist_url = 'http://www.leginfo.ca.gov%s' % hist_link['href'] 26. history_raw = urllib2.urlopen(hist_url).read() 27. history_raw = history_raw.replace('<! ****** document data starts here ******>', '') 28. rem_meta = re.compile('</title>.*</head>', re.MULTILINE | re.DOTALL) 29. history_raw = rem_meta.sub('</title></head>', history_raw) 30. history = BeautifulSoup(history_raw) 31. # Find title and add bill 32. title_match = re.search('TOPIC\t:\s(\w.+\n(\t\w.*\n){0,})', history_raw, re.MULTILINE) 33. bill_title = title_match.group(1).replace('\n', '').replace('\t', ' ') 34. self.add_bill(chamber, session, bill_id, bill_title) 35. # Find author (primary sponsor) 36. sponsor_match = re.search('^AUTHOR\t:\s(.*)$', history_raw, re.MULTILINE) 37. bill_sponsor = sponsor_match.group(1) 38. self.add_sponsorship(chamber, session, bill_id, 'primary', bill_sponsor) 39. # Get all versions of the bill 40. text_re = '%s_%s_bill\w*\.html' % (bill_id[:2].lower(), bill_id[2:]) 41. links = details.find(text='Bill Text').parent.findAllNext(href=re.compile(text_re)) 42. for link in links: 43. version_url = "http://www.leginfo.ca.gov%s" % link['href'] 44. # This name is not necessarily unique (for example, there may 45. # be many versions called simply "Amended"). Perhaps we should 46. # add a date or something to make it unique? 47. version_name = link.parent.previousSibling.previousSibling.b.font.string 48. self.add_bill_version(chamber, session, bill_id, 49. version_name, version_url) 50. # Get bill actions 51. action_re = re.compile('^(\d{4})|^([\w.]{4,6}\s+\d{1,2})\s+(.*(\n\s+.*){0,})', re.MULTILINE) 52. act_year = None 53. for act_match in action_re.finditer(history.find('pre').contents[0]): 54. # If we didn't match group 2 then this must be a year change 55. if act_match.group(2) == None: 56. act_year = act_match.group(1) 57. continue 58. # If not year change, must be an action 59. act_date = act_match.group(2) 60. action = act_match.group(3).replace('\n', '').replace(' ', ' ').replace('\t', ' ') 61. self.add_action(chamber, session, bill_id, chamber, 62. action, "%s, %s" % (act_date, act_year)) 63. def scrape_session(self, chamber, session): 64. if chamber == 'upper': 65. chamber_name = 'senate' 66. bill_abbr = 'SB' 67. elif chamber == 'lower': 68. chamber_name = 'assembly' 69. bill_abbr = 'AB' 70. # Get the list of all chamber bills for the given session 71. # (text format, sorted by author) 72. url = "http://www.leginfo.ca.gov/pub/%s/bill/index_%s_author_bill_topic" % (session, chamber_name) 73. self.be_verbose("Getting bill list for %s %s" % (chamber, session)) 74. bill_list = urllib2.urlopen(url).read() 75. bill_re = re.compile('\s+(%s\s+\d+)(.*(\n\s{31}.*){0,})' % bill_abbr, 76. re.MULTILINE) 77. for bill_match in bill_re.finditer(bill_list): 78. bill_id = bill_match.group(1).replace(' ', '') 79. self.get_bill_info(chamber, session, bill_id) 80. def scrape_bills(self, chamber, year): 81. # CA makes data available from 1993 on 82. if int(year) < 1993 or int(year) > dt.date.today().year: 83. raise NoDataForYear(year) 84. # We expect the first year of a session (odd) 85. if int(year) % 2 != 1: 86. raise NoDataForYear(year) 87. year1 = year[2:] 88. year2 = str((int(year) + 1))[2:] 89. session = "%s-%s" % (year1, year2) 90. self.scrape_session(chamber, session) 91. if __name__ == '__main__': 92. CALegislationScraper().run()
__label__pos
0.967623
How can I test if a string is a number in c# The blog post is related to “How can I test if a string is a number in c#?” In C#, you can test if a string is a number by using the int.TryParse or double.TryParse methods, depending on the type of number you want to check for. Here is an example using int.TryParse: string strNum = "12345"; int number; bool isNumeric = int.TryParse(strNum, out number); if (isNumeric) { Console.WriteLine("The string is a valid number: " + number); } else { Console.WriteLine("The string is not a valid number."); } In this example, we use int.TryParse to try to parse the string "12345" into an int variable named number. The method returns a boolean value indicating whether the parsing was successful or not. If the string is a valid number, the method sets the number variable to the parsed value and returns true. If the string is not a valid number, the method returns false. You can use similar code with double.TryParse to check for decimal numbers: string strDecimal = "3.14"; double decimalNumber; bool isDecimal = double.TryParse(strDecimal, out decimalNumber); if (isDecimal) { Console.WriteLine("The string is a valid decimal number: " + decimalNumber); } else { Console.WriteLine("The string is not a valid decimal number."); } In this example, we use double.TryParse to try to parse the string "3.14" into a double variable named decimalNumber. If the string is a valid decimal number, the method sets the decimalNumber variable to the parsed value and returns true. If the string is not a valid decimal number, the method returns false.
__label__pos
0.999816
loading Import data from email accounts or other providers We provide a fully managed data import of customer records and conversations from email accounts and other popular providers. To find out if your current provider is supported or to get started, contact us. How it works Though the process for data imports depends on the source, here's a high level outline of what to expect: 1. Your team migrates over into Enchant and stops using the old system completely. 2. You provide access to the data to be imported, either via API key or login details. 3. We extract the data and convert it into a format appropriate to import into Enchant. 4. We import a sample of 100 tickets into a temporary account for you to assess any import issues. 5. Once approved, we load the entirety of the data into into your account. Is there a limit to how much data can be imported? We generally try not to impose any limits on the imported data. If there is a lot of data to be imported, we may choose to only import attachments for the last 12 months or to limit the number of years of data imported. Can another import be done later to "catch up" the data? No, an import can only be done once. It is important you completely stop using old provider before we start our data import processes. What requirements must be met before an import can be scheduled? There are two primary requirements: • Data should not be changing in the source system: Your team's migration to Enchant should be complete. New emails/messages should not be routing to the source system anymore. • You must have a paid Enchant account: Imports are managed by our dev team. To minimize wasted efforts, we cannot schedule an import during your trial period. Is there a cost for a data import? Data imports are a free service we offer to teams migrating in from email systems or other service providers. Are imported tickets any different from normal tickets? Imported tickets look like normal tickets and can be discovered via search or customer history. However, imported tickets are different from normal tickets in a number of ways: • They cannot be changed. The only thing you can to do them is trash them.. This means they cannot be assigned, have labels added or removed, notes added, etc. • They are not accounted for in reports. • They are not visible in live folders. Can my team continue working on the imported tickets in Enchant? No, imported tickets cannot be changed. This means you cannot reply to them, assign them, etc. Imports are not meant to enable continuation of tickets as part of a migration to Enchant. The purpose of an import is to bring in your conversation histories after your migration to Enchant has completed. What kind of information do I need to provide? If you're migrating data from another service provider, we will use their API to download data. You will need to provide API keys. If you're migrating data from an email account, then we would need login details for IMAP or POP3 connectivity. Alternatively, we can also work with a PST file generated by Outlook or a MBOX file generated by Google Takeout.
__label__pos
0.993162
test suite reviews and discussions help / color / mirror / Atom feed From: Haiyang Zhao <[email protected]> To: [email protected] Cc: [email protected], Haiyang Zhao <[email protected]> Subject: [dts] [PATCH V1 2/5] framework/test_case: handle the VerifySkip exception and add some functions Date: Wed, 17 Mar 2021 15:16:22 +0800 Message-ID: <[email protected]> (raw) In-Reply-To: <[email protected]> handle the VerfiySkip exception and mark the related case with N/A in result. add some functions to check if the nic or pkg support current case. Signed-off-by: Haiyang Zhao <[email protected]> --- framework/test_case.py | 102 +++++++++++++++++++++++++++++++++++------ 1 file changed, 89 insertions(+), 13 deletions(-) diff --git a/framework/test_case.py b/framework/test_case.py index 57bea562..3347adad 100644 --- a/framework/test_case.py +++ b/framework/test_case.py @@ -38,7 +38,7 @@ import traceback import signal import time -from exception import VerifyFailure, TimeoutException +from exception import VerifyFailure, VerifySkip, TimeoutException from settings import DRIVERS, NICS, get_nic_name, load_global_setting from settings import PERF_SETTING, FUNC_SETTING, DEBUG_SETTING from settings import DEBUG_CASE_SETTING, HOST_DRIVER_SETTING @@ -48,6 +48,7 @@ from test_result import ResultTable, Result from logger import getLogger from config import SuiteConf from utils import BLUE, RED +from functools import wraps class TestCase(object): @@ -68,15 +69,10 @@ class TestCase(object): self._check_and_reconnect(crb=self.tester) # convert netdevice to codename - self.nics = [] - for portid in range(len(self.dut.ports_info)): - nic_type = self.dut.ports_info[portid]['type'] - self.nics.append(get_nic_name(nic_type)) - if len(self.nics): - self.nic = self.nics[0] - else: - self.nic = '' - self.kdriver = self._get_nic_driver(self.nic) + self.nic = self.dut.nic.name + self.nic_obj = self.dut.nic + self.kdriver = self.dut.nic.default_driver + self.pkg = self.dut.nic.pkg # result object for save suite result self._suite_result = Result() @@ -168,6 +164,12 @@ class TestCase(object): print(RED("History dump finished.")) raise VerifyFailure(description) + def skip_case(self, passed, description): + if not passed: + if self._enable_debug: + print("skip case: \"%s\" " % RED(description)) + raise VerifySkip(description) + def _get_nic_driver(self, nic_name): if nic_name in list(DRIVERS.keys()): return DRIVERS[nic_name] @@ -257,17 +259,28 @@ class TestCase(object): try: self.set_up_all() return True - except Exception: + except VerifySkip as v: + self.logger.info('set_up_all SKIPPED:\n' + traceback.format_exc()) + # record all cases N/A + if self._enable_func: + for case_obj in self._get_functional_cases(): + self._suite_result.test_case = case_obj.__name__ + self._suite_result.test_case_skip(str(v)) + if self._enable_perf: + for case_obj in self._get_performance_cases(): + self._suite_result.test_case = case_obj.__name__ + self._suite_result.test_case_skip(str(v)) + except Exception as v: self.logger.error('set_up_all failed:\n' + traceback.format_exc()) # record all cases blocked if self._enable_func: for case_obj in self._get_functional_cases(): self._suite_result.test_case = case_obj.__name__ - self._suite_result.test_case_blocked('set_up_all failed') + self._suite_result.test_case_blocked('set_up_all failed: {}'.format(str(v))) if self._enable_perf: for case_obj in self._get_performance_cases(): self._suite_result.test_case = case_obj.__name__ - self._suite_result.test_case_blocked('set_up_all failed') + self._suite_result.test_case_blocked('set_up_all failed: {}'.format(str(v))) return False def _execute_test_case(self, case_obj): @@ -328,6 +341,10 @@ class TestCase(object): self._suite_result.test_case_failed(str(v)) self._rst_obj.write_result("FAIL") self.logger.error('Test Case %s Result FAILED: ' % (case_name) + str(v)) + except VerifySkip as v: + self._suite_result.test_case_skip(str(v)) + self._rst_obj.write_result("N/A") + self.logger.info('Test Case %s N/A: ' % (case_name)) except KeyboardInterrupt: self._suite_result.test_case_blocked("Skipped") self.logger.error('Test Case %s SKIPPED: ' % (case_name)) @@ -504,3 +521,62 @@ class TestCase(object): bitrate *= 100 return bitrate * num_ports / 8 / (frame_size + 20) + + +def skip_unsupported_pkg(pkgs): + """ + Skip case which are not supported by the input pkgs + """ + if isinstance(pkgs, str): + pkgs = [pkgs] + + def decorator(func): + @wraps(func) + def wrapper(*args, **kwargs): + test_case = args[0] + pkg_type = test_case.pkg.get('type') + pkg_version = test_case.pkg.get('version') + if not pkg_type or not pkg_version: + raise VerifyFailure('Failed due to pkg is empty'.format(test_case.pkg)) + for pkg in pkgs: + if pkg in pkg_type: + raise VerifySkip('{} {} do not support this case'.format(pkg_type, pkg_version)) + return func(*args, **kwargs) + return wrapper + return decorator + + +def skip_unsupported_nic(nics): + """ + Skip case which are not supported by the input nics + """ + if isinstance(nics, str): + nics = [nics] + + def decorator(func): + @wraps(func) + def wrapper(*args, **kwargs): + test_case = args[0] + if test_case.nic in nics: + raise VerifySkip('{} do not support this case'.format(test_case.nic)) + return func(*args, **kwargs) + return wrapper + return decorator + + +def check_supported_nic(nics): + """ + check if the test case is supported by the input nics + """ + if isinstance(nics, str): + nics = [nics] + + def decorator(func): + @wraps(func) + def wrapper(*args, **kwargs): + test_case = args[0] + if test_case.nic not in nics: + raise VerifySkip('{} do not support this case'.format(test_case.nic)) + return func(*args, **kwargs) + return wrapper + return decorator -- 2.17.1 parent reply other threads:[~2021-03-17 7:25 UTC|newest] Thread overview: 7+ messages / expand[flat|nested] mbox.gz Atom feed top 2021-03-17 7:16 [dts] [PATCH V1 0/5] framework: add a proposal of recognizing pkgs Haiyang Zhao 2021-03-17 7:16 ` [dts] [PATCH V1 1/5] framework/exception: add new exception VerifySkip Haiyang Zhao 2021-03-17 7:16 ` Haiyang Zhao [this message] 2021-03-17 7:16 ` [dts] [PATCH V1 3/5] nics/net_device: add attribute pkg and get method Haiyang Zhao 2021-03-17 7:16 ` [dts] [PATCH V1 4/5] framework/dut: get nic package in dut prerequisites Haiyang Zhao 2021-03-17 7:16 ` [dts] [PATCH V1 5/5] tests: add nic and pkg check for rss_gtpu Haiyang Zhao 2021-03-17 7:30 ` [dts] [PATCH V1 0/5] framework: add a proposal of recognizing pkgs Zhao, HaiyangX Reply instructions: You may reply publicly to this message via plain-text email using any one of the following methods: * Save the following mbox file, import it into your mail client, and reply-to-all from there: mbox Avoid top-posting and favor interleaved quoting: https://en.wikipedia.org/wiki/Posting_style#Interleaved_style * Reply using the --to, --cc, and --in-reply-to switches of git-send-email(1): git send-email \ [email protected] \ [email protected] \ [email protected] \ [email protected] \ /path/to/YOUR_REPLY https://kernel.org/pub/software/scm/git/docs/git-send-email.html * If your mail client supports setting the In-Reply-To header via mailto: links, try the mailto: link test suite reviews and discussions This inbox may be cloned and mirrored by anyone: git clone --mirror https://inbox.dpdk.org/dts/0 dts/git/0.git # If you have public-inbox 1.1+ installed, you may # initialize and index your mirror using the following commands: public-inbox-init -V2 dts dts/ https://inbox.dpdk.org/dts \ [email protected] public-inbox-index dts Example config snippet for mirrors. Newsgroup available over NNTP: nntp://inbox.dpdk.org/inbox.dpdk.dts AGPL code for this site: git clone https://public-inbox.org/public-inbox.git
__label__pos
0.755432
DDD Europe 2024 - Program Aggregates: An In-depth Examination DDD Foundations - Talk Speakers Thomas Coopman and Gien Verschatse Thomas CoopmanGien Verschatse Schedule Wednesday 29 from 12:00 until 13:00 Description Aggregates serve as a means to encapsulate and manage related domain objects within a boundary, ensuring consistency and integrity. Understanding the scenarios in which aggregates are required is crucial for effective system design. It is also one of the most misunderstood concepts in Domain-Driven Design. When considering aggregates within the context of bounded contexts (BC), it raises questions about their relevance: • Why and when do you need an aggregate? • Are aggregates exclusively an internal concern, relevant only within a specific BC, or do they extend their usefulness beyond these boundaries? • Is there a difference between invariants as perceived from the outside of a BC/service and from the inside? This talk delves into the nuanced world of aggregates, investigating their necessity, utility, and the significance of their boundaries. We explore the fundamental questions of why and when aggregates are essential in system design and how their presence contributes to maintaining consistency and integrity. Attendees can expect to gain insights into the practical implications of aggregates, fostering a deeper appreciation for their role in effective system design. About Thomas Coopman Thomas Coopman has been fascinated with computers since he was a kid. Playing around at first, became programming later and after learning some programming for himself and a small detour starting studies for nursing, he went on and studied Master of Informatics at the KULeuven. Thomas is a polyglot and loves to learn new languages. His latest language studies have taken him to Elixir, Elm, Bucklescript and he has a special affinity for functional programming languages. Thomas is an independent software engineer and consultant focused on the full stack: frontend, backend and mostly people, practices and processes. Thomas is also currently active in the DDD Belgium and Software Craftsmanship Belgium community. About Gien Verschatse Gien Verschatse is an experienced consultant and software engineer that specialises in domain modelling and software architecture. She's fluent in both object-oriented and functional programming, mostly in .NET. As a Domain-Driven Design practitioner, she always looks to bridge the gaps between experts, users, and engineers. As a side interest, she's researching the science of decision-making strategies, to help teams improve how they make technical and organisational decisions. She shares her knowledge by speaking and teaching at international conferences. And when she is not doing all that, you'll find her on the sofa, reading a book and sipping coffee.
__label__pos
0.878267
Le try/catch est-il, aujourd'hui, aussi performant que des erreurs gérées avec des ifs ? icon Tags de l'article : , Décembre 11, 2019 Ce matin, un collègue m'a montré comment il avait réduit le code d'une méthode en passant par un try/catch (à la place d'un enchainement de if). Je lui ai expliqué que c'était plutôt à éviter, pour 3 raisons : • Moins performant • Pas forcément plus clair pour le développeur qui arrivera derrière ("pourquoi il a fait un try/catch ? il y a un cas particulier à gérer ?") • Ne permet pas de s'assurer que tous les cas métiers ont été gérés en un seul coup d'oeil Dans cette situation, les points 2 et 3 ne s'appliquaient pas vraiment, le code étant plutôt simple. Mais le point 1 s'appliquait toujours. Mon collègue m'a dit qu'après avoir fait des tests de son côté, il s'était rendu compte que l'écart de performances entre un try/catch et un enchainement de ifs était négligeable. J'ai donc voulu tester ça. #doute Pour ça, j'ai run 2/3 tests avec le code suivant : @Component({ selector: 'app-tab1', templateUrl: 'tab1.page.html', styleUrls: ['tab1.page.scss'] }) export class Tab1Page { nbItems: Number = 100000; public resultIf: Number; public resultTryCatch: Number; constructor() { this.doTest(); } doTest() { let items = []; for(let i = 0; i < this.nbItems ; i++) { items.push(this.generateRandomToto()) } // first we try to update the values of the Totos with if let startDate = new Date(); for(let i = 0; i < this.nbItems ; i++) { this.testWithIf(items[i]); } let endDate = new Date(); this.resultIf = endDate.getTime() - startDate.getTime(); // then we try with to update the values of the Totos with try/catch startDate = new Date(); for(let i = 0; i < this.nbItems ; i++) { this.testWithTryCatch(items[i]); } endDate = new Date(); this.resultTryCatch = endDate.getTime() - startDate.getTime(); } generateRandomToto(): Toto { if(Math.round(Math.random()) === 0) { // the math.round of a math.random gives 0 or 1 const result = new Toto(); result.titi = new Titi(); result.titi.tutu = 'bonjour'; return result; } else { const result = new Toto(); return result; } } testWithIf(toto: Toto) { if(toto && toto.titi && toto.titi.tutu) { toto.titi.tutu = 'wesh'; } } testWithTryCatch(toto: Toto) { try { toto.titi.tutu = 'wesh'; } catch(error) { } } } class Toto { public titi: Titi; } class Titi { public tutu: string; } Maintenant la question : quel écart en fonction du nombre de try/catchs ? Voici le résultat pour 100 objets : 0ms avec des ifs, 2ms avec des try/catchs. Woaw ! On a déjà une différence avec seulement 100 try/catchs ! Et si on augmente ? • 1 000 objets -> 1ms avec des ifs, 17ms avec des try/catchs. • 10 000 objets -> 1ms avec des ifs, 153ms avec des try/catchs. • 100 000 objets -> 2ms avec des ifs, 1427ms avec des try/catchs. Résultat : non, le try/catch n'est pas aussi performant que les ifs. Loin de là. Même aujourd'hui, à l'aube de 2020, le try/catch ne doit être utilisé que lorsqu'on a pas le choix : • pour attraper globalement les erreurs • lorsqu'on n'a pas le choix (par exemple si l'objet/la couche concernée throw des erreurs) • pour gérer un cas particulier (erreur spécifiquement attendue) En dehors de ces situations, le try/catch reste à éviter. Bon dev à tous et toutes ! Créer un template Razor utilisable en JavaScript icon Tags de l'article : , , , Aout 15, 2018 Hello tout le monde, Aujourd'hui on va voir comment créer un template Razor utilisable en JavaScript. C'est une problématique assez courante, et la solution que je propose vient d'une implémentation sur un projet pro... et le résultat est pas si mal. A noter qu'on est à des années lumières d'un vrai moteur de template, et je ne peux que vous recommander d'en implémenter un si c'est un vrai besoin de votre projet. Cette solution n'est proposée que pour dépanner, ou à utiliser sur un projet où on ne peut/veut implémenter de moteur de template. Du coup, allons-y ! Tout d'abord, on va créer une vue partielle pour notre template : @model Projet.ObjetHtmlTemplate <div class="objet"> <h1>@Model.Name</h1> <p>@Model.Description</p> <span class="price">@Model.Price</span> </div> Nous allons maintenant créer notre ViewModel, ici ObjetHtmlTemplate. Deux choses sont cependant à noter sur ce template : • Tous ces champs sont des strings • Le constructeur va permettre une initialisation des champs avec une chaine en dur contenant des antiquotes et le nom de la variable. Voyons la classe avant que j'explique le "pourquoi" : public class ObjetHtmlTemplate { public string Name { get; set; } public string Description { get; set; } public string Price { get; set; } public ObjetHtmlTemplate(bool initForJs = false) { if(initForJs) { Name = "` + Name + `"; Description = "` + Description + `"; Price = "` + Price + `"; } } } Voilà. Je pense que vous commencez à voir où je veux en venir : nous allons juste créer une méthode en JS qui prendra en paramètres Name, Description et Price, et cela renverra... le HTML généré préalablement par Razor... mais avec nos valeurs JS ! La prochaine étape est donc ce petit morceau de JavaScript dans notre vue : <script> var getObjetHtml = function(Name, Description, Price) { return `@Model.Partial("_ObjetHtmlTemplate", new ObjetHtmlTemplate(true))`; }; </script> Magique non ? Maintenant on a une jolie méthode JavaScript "getObjetHtml" qui prend 3 paramètres, et qui va renvoyer le HTML généré à partir de ces 3 paramètres. <3 Et côté ASP.Net, on peut utiliser notre vue partielle comme à notre habitude : <ul> @foreach(var item in Model.Items) { <li>@Html.RenderPartial("_ObjetHtmlTemplate", new ObjetHtmlTemplate() { Name = item.Name, Description = item.Description, Price = Math.Round(item.Price, 2) })<li> } </ul> Et... c'est tout ! Simple et efficace, que demander de plus ?! Allez, bonne journée et bon dev à tous/toutes ! Bug avec bootstrap-datepicker dans sa version v4.17.45 icon Tags de l'article : , , , Aout 10, 2018 Hello, Juste un micro article pour vous prévenir d'un bug dans la version 4.17.45 du bootstrap-datepicker... Si vous êtes comme moi et que vous passez par des packages nugets, méfiez-vous : la dernière version en Nuget est la 4.17.45 qui contient des bugs. (Dont un très chiant : si vous faites une sélection particulière en fonction du format, par exemple une sélection par mois uniquement, au 2eme clic le mode de sélection aura disparu). Les bugs sont corrigés sur la dernière version en ligne : la 4.17.47. N'hésitez pas à mettre à jour manuellement votre code, étant donné que le package Nuget est à la masse :( Bon dev à tous/toutes. KnockoutJS, JavaScript et JQuery : notes et exemples perso icon Tags de l'article : , , Juin 24, 2016 Hello, Une petite liste de notes perso, de morceaux de code et de comment faire 2/3 choses, pour moi, mais qui peut vous servir. Enjoy ! <!-- quelques databind par défaut --> <input type="text" data-bind="value: searchText" /> <td data-bind="text: 1 === Role() ? 'User' : 'Admin'"></td> <p data-bind="visible: currentlyWorking"> <!-- A noter : si on est dans un cas de visible "si pas ce booléen", il faut mettre les parenthèses : --> <div data-bind="visible: !currentlyWorking()"> <p data-bind="visible: !showSearchResults() && firstSearchDone()"> <!-- Plusieurs paramètres --> <input class="btn" type="button" value="Add" data-bind="attr: { 'data-value': Id }, visible: !model.IsMethod(Id)" /> <!-- Appeler le modèle parent de l'objet KO courant : $parent --> <span data-bind="visible: null !== ItemId && $parent.apiConsumerId() !== ApiConsumerId"> Dans l'idéal il vaut mieux éviter d'avoir de la logique côté vue, il vaut mieux faire des computed : self.showSearchResults = ko.computed(function () { return self.firstSearchDone() && self.searchedUsers().length > 0; }); // Attraper la touche entrée sur un champ texte $('#txtSearch').on('keyup', function (e) { if (e.which !== 13) { return; } $('#btSearchUser').click(); }); // Passer des informations depuis la vue ASP.Net MVC vers le fichier JavaScript <script type="text/javascript"> var urls = { 'promoteUser': '@Url.Action("Promote", "User")', }; var initialData = @MvcHtmlString.Create(Newtonsoft.Json.JsonConvert.SerializeObject(Model.Users)); var currentItemId = @Model.Item.Id; </script> // Demander confirmation pour une action dans certaines conditions ? // Créer une méthode JS globale "mustConfirm()" qui sera appelée avant chaque action critique comme ça : if (confirm('Are you sure that you want to DELETE this user account? This action CANNOT BE UNDONE.') && mustConfirm()) { } // Un appel en direct dans le HTML : <input type="submit" value="Purge" class="btn btn-default" onclick="return mustConfirm(); " /> // Et cette méthode dans la Layout CSHTML : var mustConfirm = function () { @{ if (UserHelper.GetCurrentEnvironment().AskForConfirmation) { <text>return confirm('Attention: This is a production environment with sensitive personal and customer data. Are you sure?');</text> } else { <text>return true;</text> } } }; // Recharger la page courante location.reload(); // Récupérer une collection d'observables à partir d'un JSON // Pour ça on utilise la lib JS ko.mapping : self.Users = ko.mapping.fromJS(initialData); // Envoyer un fichier en POST à un contrôleur ASP.Net MVC var data = new FormData(); var files = $('#fileUpload').get(0).files; if (files.length === 1) { data.append("UploadedFile", files[0]); data.append("CreatorId", $('#creatorId').val()); $.ajax({ type: "POST", url: urls.import, contentType: false, processData: false, data: data, success: function (result) { if (0 === result.ErrorMessages.length) { setTimeout(function() { document.location = urls.importsList; }, 1000); } else { model.currentlyWorking(false); model.LoadResult(result, false); } } }); } else { alert('Error: you must choose a file'); } // et côté ASP.Net MVC public ActionResult Import(int? creatorId, int? userId) { var context = this.CreateContext(); var model = new ImportVM(); if (Request.Files.Count != 1) { model.ErrorMessages.Add("You must upload a file to import profiles"); return Json(model); } var uploadedFile = Request.Files[0]; if (!uploadedFile.FileName.EndsWith(".csv", StringComparison.OrdinalIgnoreCase)) { model.ErrorMessages.Add("Invalid extension: only CSV file can be used for import"); return Json(model); } _importBusiness.Import(uploadedFile.InputStream, context); return Json(model); } // Exemple complet côté JS var model; (function () { var Users = function () { var self = this; self.firstSearchDone = ko.observable(false); self.searchText = ko.observable(""); self.searchedUsers = ko.observableArray(); self.currentlyWorking = ko.observable(false); self.showSearchResults = ko.computed(function () { return self.firstSearchDone() && self.searchedUsers().length > 0; }); self.cleanSearch = function () { self.searchedUsers.removeAll(); self.searchText(''); self.firstSearchDone(false); } } $('#txtSearch').on('keyup', function (e) { if (e.which !== 13) { return; } $('#btSearchUser').click(); }); $('#btSearchUser').on('click', function () { $('#txtSearch').val(''); model.currentlyWorking(true); $.post(urls.searchUserByName, { text: model.searchText() }, function (response) { if (!response.ok) { alert(response.errorMessage); model.cleanSearch(); } else { model.firstSearchDone(true); model.searchedUsers(response.searchedUsers); } model.currentlyWorking(false); }); }); // we initialize the model model = new Users(); ko.applyBindings(model); })();
__label__pos
0.979756
CodeIgniter Hosting: Compare Hosting Showing top 10 results Show All Oops! No Hosting Plans Match Your Search You've selected a combination of features that none of the web hosts we profile offer. We suggest you remove your last filter or reset & start again. Ask Our Experts Need help with your hosting? Tell us exactly what you are looking for and we’ll do our very best to help. Please allow one working day for a response. Please fill in all fields. Thanks! Your request has been sent. We'll reply within 24 hours. Filtering by: • CodeIgniter Reset What is CodeIgniter Hosting? CodeIgniter is a web application framework for PHP Hypertext Preprocessor (PHP) coders. It allows Web developers to advance projects more quickly than they could if writing code from scratch. As open source software, CodeIgniter is affordable, adaptable, and accessible. Overview CodeIgniter is a PHP web development application framework built on the Model-View-Controller (MVC) paradigm. A web application development framework is a tool for creating dynamic websites, web-based applications, and web services, which provides a structure to the overall application and modules or libraries for doing common development tasks. Libraries and modules eliminate the need for a developer to solve a problem that other developers have already solved — basic functions and features like user login, session management, database access, and form validation. There's no need to reinvent the wheel, and a good application framework provides these functions so that a developer can focus on the important task of creating new and valuable features. CodeIgniter also provides a structure to a web application, by suggesting a general template for how to organize code and directories, and by making some key architectural decisions about how various components interact with each other. Model-View-Controller One important thing that CodeIgniter provides is a Model-View-Controller (MVC) outline for application structure. Model-View-Controller is considered by many to be a “Best Practice” in application development and is a key feature of the CodeIgniter framework. MVC is, essentially, a way of organizing the components of an application in a way that separates the underlying data (the Model), the application or business logic (the Controller), and the final presentation to the screen or public API (the View). The easiest way to understand how MVC works is to think about what might happen in CodeIgniter-based web application between a user clicking on a link and that same user seeing the content on the page a moment later. The browser sends a request to the web server, which routes it to a set of scripts called the Controller. The Controller sends a request to the Model scripts, where details about data structure and database access are written. The Model includes code that fetches content from the database and then turns that content back over to the Controller. The Controller then sends that content to the View, which includes HTML template information. The View pushes the rendered page out to the user through the web server. This is a somewhat simplified explanation, and omits important details like page caching (which is handled by the View, and which speeds up overall performance), application functionality like processing credit cards (handled indirectly by the Controller), and updating the database (done by the Model). Following this general separation of concerns helps to ensure a high level of code organization and guides good decision making about how to implement novel features when building a new web application. CodeIgniter provides Model, View, and Controller scripts, as well as the libraries and application infrastructure that allow these components to interact in a meaningful way. Important CodeIgniter Features Light Weight CodeIgniter provides only the scripts needed and nothing else. Most functionality comes in the form a series of plugins and interoperable libraries, so you don't end up with code for a lot of features you aren't actually using. Database classes with support for multiple platforms Classes for interacting with the database provide a layer of abstraction, freeing you from having to write boilerplate SQL queries, and allowing you to change database software without rewriting your application. Several popular databases are supported, including MySQL, SQLite, and PostgreSQL. Flexibility The philosophy of CodeIgniter is to make PHP development easier, not harder. To that end, the framework provides a lot of flexibility and does not force you to develop in a certain way. For example, while the MVC model described above is fully supported, the framework still functions in the absence of Models, which is perfect if you are not building a database driven application. Speed and Performance CodeIgniter is serious about speed, and has made a number of key decision to ensure the fastest possible rendering of pages. One example of this is the automatic caching of rendered pages, which allows frequently-accessed pages to be displayed without the full fetch-and-render process. Another example is the lack of a specialized template language for creating HTML/PHP Views. While a template markup system requires a little less typing and appears a little cleaner in source code, it is a huge hit in performance because the file essentially has to be rendered twice (once from Template markup in to PHP, and then a second time into HTML). CodeIgniter Hosting Hosts that support PHP should generally support CodeIgniter. There are occasional issues with environment configuration for certain modules, such as email classes having access to a mail server. Before launching a new project with CodeIgniter, you should make sure the specific modules and features you need are supported by your web host. Additionally, check to make sure that your host supports the version of PHP needed to run CodeIgniter and any required libraries. CodeIgniter Hosting Frequently Asked Questions • Who develops and maintains CodeIgniter? CodeIgniter was originally developed and maintained by EllisLab, developers of ExpressionEngine, and in 2014 it was acquired by the British Columbia Institute of Technology. Since then, it has officially become a community-maintained project. • How do I install CodeIgniter? After you've downloaded Code Igniter, it can be installed in a few easy steps: 1. Unzip the package. 2. Upload the CodeIgniter folders and files to your server. Typically you'll want to place the index.php file at your root. 3. Open application/config/config.php with a text editor. Set your base URL and, if you plan to use encryption or sessions, set your encryption key. If you’re not using a database, that’s it. If you do plan on using a database, you’ll need to open application/config/database.php with a text editor and configure your database settings. For increased security, you can also rename your system and application folders. It’s best to check the installation documentation on CodeIgniter’s website to make sure you've updated all of your configurations correctly before doing this. • What is the difference between CodeIgniter 3.x and CodeIgniter 2.x? CodeIgniter 3.0 introduced a number of improvements to the sessions, encryption, and database libraries, improving overall performance and reliability. While they continue to support the legacy version, they recommend the 3.x version for all new installations. • Is there a guide to learn how to use CodeIgniter? Yes, a user manual is included with the installation. It is recommended that you review all topics in the introduction, and then read each of the General Topics pages in order, as each topic builds on each other. Code examples are also included, so you can practice while you learn. Additional reference guides are also included, and you can find more information in the community forums and online Wiki. • Why should I choose CodeIgniter? CodeIgniter is designed for developers who want an application development framework with a small footprint and exceptional performance, with broad compatibility, and that requires almost zero configuration. It’s for users who don't want to use the command line, adhere to restrictive coding rules, or be forced to learn a templating language. In short, it’s for anyone who wants a simple to use solution with plenty of support documentation available. • Does CodeIgniter use a Template Engine? Yes and no. CodeIgniter comes with a simple template parser, but using it is optional. The CodeIgniter team feels template engines cannot match the performance of native PHP, and the benefits gained by using them (slightly simpler syntax), does not outweigh the price to performance required to convert the template’s pseudo-code code back to PHP. However, the template engine is available for those who find it easier to use. • How much does CodeIgniter cost? CodeIgniter is free to download and use. It’s licensed under an Apache/BSD-style open source license, so you can use or modify it however you please. • How are URLs configured in CodeIgniter? CodeIgniter by default creates search-engine and human friendly, segment-based URLs, such as: example.com/news/event/summer_event. CodeIgniter segments the URL so that the first segment represents the controller class that should be invoked (in the above example, “news”), the second represent the class function or method to be called (“event”), and the third represents the ID and any variables that will be passed to the controller (“summer_event”). • What type of security measures does CodeIgniter provide? CodeIgniter includes a number of built-in security measures, including: It restricts the characters it allows in your URI strings in order to minimize the possibility that malicious data can be passed to your application. PHP error reporting can be disabled in production, preventing sensitive information from being entered in error outputs. It includes a Cross Site Scripting Filter, which looks for commonly used techniques for embedding malicious code, hijack cookies, or other malicious activity. It provides Cross-Site Request Forgery protection, which protects users from unknowingly submitting hacker requests. It also includes a number of best practices for programmers to improve the security of their code. • Can I contribute to CodeIgniter? CodeIgniter is community-driven, and they gladly accept code and documentation contributions through a repository on their website and GitHub. The easiest way to contribute is to point out a bug, which can be done through the issues reporting system on their site. • I’m using an older version of CodeIgniter. Can I upgrade to the latest version? The steps to upgrade will vary depending on the version you are currently running. Make sure to check out the documentation section on CodeIgniter’s website, as several files and setting may need to be adjusted to ensure a smooth transition to a later version, particularly if you are moving from a legacy version (2.x) to CodeIgniter 3.0. • Since CodeIgniter launched version 3.0, will they continue to support legacy versions? Since it is a community-maintained project, this will depend on the participation of community contributors. At the time of this writing, the legacy version was still being actively developed, and will likely continue to be developed, due to the number of active users. WhoIsHostingThis Recommends ★★★★ Support ★★★★ Features ★★★★ Uptime ★★★★ Value ★★★★ Pros: Free domain name , Unlimited traffic, emails, DBs Cons: No Windows-based plans SiteGround is an established web host managing well over 1,000 servers and... Read more Visit host 2. iPage ipage logo ★★★★ Support ★★★★ Features ★★★★ Uptime ★★★★ Value ★★★★ Pros: Unlimited Disk Space, Bandwidth & Email , Unlimited MySQL Databases Based in the US, iPage specialises in low-cost shared hosting. Each plan... Read more Visit host ★★★ Support ★★★ Features ★★★★ Uptime ★★★ Value ★★★ Pros: Free Domain , Unlimited Bandwidth Bluehost provides customers with low-cost shared hosting, as well as reseller, VPS... Read more Visit host Updating...
__label__pos
0.561955
RadfordMathematics.com Online Mathematics Book Binomial Expansions Formula how it works & how to use it The Binomial Expansions Formula will allow us to quickly find all of the terms in the expansion of any binomial raised to the power of \(n\): \[\begin{pmatrix} a + b \end{pmatrix}^n \] Where \(n\) is a positive integer. By the end of this section we'll know how to write all the terms in the expansions of binomials like: \(\begin{pmatrix} 2 + x \end{pmatrix}^4\), \(\begin{pmatrix} 2x - 3 \end{pmatrix}^5\), \(\begin{pmatrix} 4 + x^2 \end{pmatrix}^4\)... We start by learning the binomial expansion formula, we then watch a tutorial to learn how the binomial expansion formula works. We'll then work our way through a written, detailed, example as well consolidate our knowledge with several exercises. Binomial Expansions - Formula All of the terms of \(\begin{pmatrix} a + b \end{pmatrix}^n\) can be written using the binomial expansions formula, which states: \[\begin{pmatrix} a + b \end{pmatrix}^n = \begin{pmatrix} n \\ r \end{pmatrix}a^{n-r}.b^r\] Where \(\begin{pmatrix} n \\ r \end{pmatrix}\) is the binomial coefficient, sometimes witten \( ^nC_r\), and is calculated as: \[\begin{pmatrix} n \\ r \end{pmatrix} = \frac{n!}{(n-r)!r!}\] Tutorials In the following tutorials we learn how the binomial expansions formula works and how to use it to write all the terms in any binomial raised to the power of \(n\), \(\begin{pmatrix} a + b \end{pmatrix}^n\). Watch these now before working through exercise 1. Tutorial 1 In this first tutorial, we learn how to read the binomial expansion formula and how it works to write the terms in the expansion of \(\begin{pmatrix} a + b \end{pmatrix}^n\). Tutorial 2 In tutorial 2, we learn how to use the binomial expansion formula to write all the terms in any binomial expansion \(\begin{pmatrix}a+b\end{pmatrix}^n\). We show this by working through detailed examples. In particular, we show how to write all of the terms in the expansions of: \(\begin{pmatrix}a+b\end{pmatrix}^3\) and \(\begin{pmatrix} a + b \end{pmatrix}^4\) Exercise 1 Write all the terms in the expansion of each of the following binomials: 1. \(\begin{pmatrix}a+b \end{pmatrix}^3\) 2. \(\begin{pmatrix}a+b \end{pmatrix}^4\) 3. \(\begin{pmatrix}a+b \end{pmatrix}^5\) 4. \(\begin{pmatrix}a+b \end{pmatrix}^6\) Answers Without Working 1. \(\begin{pmatrix}a+b \end{pmatrix}^3 = a^3 + 3a^2b + 3ab^2 + b^3\) 2. \(\begin{pmatrix}a+b \end{pmatrix}^4 = a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + b^4\) 3. \(\begin{pmatrix}a+b \end{pmatrix}^5 = a^5 + 5a^4b+10a^3b^2 + 10a^2b^3 + 5ab^4 + b^5\) 4. \(\begin{pmatrix}a+b \end{pmatrix}^6 = a^6+6a^5b+15a^4b^2+20a^3b^3+15a^2b^4+6ab^5+b^6\) Writing all the terms of \(\begin{pmatrix}x+b\end{pmatrix}^n\) and \(\begin{pmatrix}a+x\end{pmatrix}^n\) Using the binomial expansion formula and the method for writing all the terms of any expansion \(\begin{pmatrix}a+b \end{pmatrix}^n\), we learn how to write all the terms in the expansion of binomials looking like \(\begin{pmatrix}a+x\end{pmatrix}^n\) and \(\begin{pmatrix}x+b\end{pmatrix}^n\). The method for doing this is shown in tutorial 3, below. Tutorial 3 Exercise 2 Write all the terms in the expansion of each of the following binomials: 1. \(\begin{pmatrix}x + 2 \end{pmatrix}^3\) 2. \(\begin{pmatrix}x - 1 \end{pmatrix}^4\) 3. \(\begin{pmatrix}x + 2 \end{pmatrix}^5\) 4. \(\begin{pmatrix} 3 - x \end{pmatrix}^5\) 5. \(\begin{pmatrix}1 + x \end{pmatrix}^6\) Answers Without Working 1. \(\begin{pmatrix} x + 2 \end{pmatrix}^3 = x^3 + 6x^2 + 12x + 8\) 2. \(\begin{pmatrix} x - 1 \end{pmatrix}^4 = x^4 - 4x^3 + 6x^2 - 4x + 1 \) 3. \(\begin{pmatrix} x + 2\end{pmatrix}^5 = x^5 + 10x^4 + 40 x^3 + 80x^2 + 80x + 32\) 4. \(\begin{pmatrix} 3 - x \end{pmatrix}^5 = 243 - 81x + 27x^2 - 9x^3 + 3x^4 - x^5\) 5. \(\begin{pmatrix} 1 + x \end{pmatrix}^6 = 1 + 6x + 15x^2 + 20x^3 + 15x^2 + 6x + 1\) Writing all the terms of \(\begin{pmatrix}ax+b\end{pmatrix}^n\) and \(\begin{pmatrix}x^m+b\end{pmatrix}^n\) To write all the terms in the expansion of binomials in which the \(x\) term is either \(ax\), a power \(x^m\), or even a combination of both \(ax^m\), such as: \(\begin{pmatrix}2x+5\end{pmatrix}^4\), \(\begin{pmatrix}x^2+2\end{pmatrix}^5\), \(\begin{pmatrix}2x^3-1\end{pmatrix}^3\), ... . It is essential to write any \(x\) term in paretheses and use the following laws of exponents: • products raised to a power: \( \begin{pmatrix}ax\end{pmatrix}^n = a^n.x^n \) • powers raised to a power: \(\begin{pmatrix}x^m \end{pmatrix}^n = x^{m\times n}\) • combinations of both: \(\begin{pmatrix}ax^m \end{pmatrix}^n = a^n.x^{m\times n}\) Using these laws, as well as the fact that: \[\begin{pmatrix}ax+b\end{pmatrix}^n = \begin{pmatrix} \begin{pmatrix} ax \end{pmatrix}+b\end{pmatrix}^n \] and \[\begin{pmatrix}x^m+b\end{pmatrix}^n = \begin{pmatrix} \begin{pmatrix}x^m \end{pmatrix}+b\end{pmatrix}^n\] we can write all the terms in such expansions. Tutorials 4 & 5 In the following tutorials we work through examples showing how to write all the terms of expansions of the type \(\begin{pmatrix}ax+b\end{pmatrix}^n\) and \(\begin{pmatrix}x^m+b\end{pmatrix}^n\). Watch these tutorials before working through exercise 3. Tutorial 4 In the following tutorial we show, in detail, how to write all the terms in the expansion of: \[\begin{pmatrix} 2 + 3x\end{pmatrix}^4\] We do this using the binomial expansion formula and using the fact that: \[\begin{pmatrix} 2 + 3x\end{pmatrix}^4 = \begin{pmatrix} 2 + \begin{pmatrix} 3x \end{pmatrix} \end{pmatrix}^4\] Along with the following rule for exponents: \[\begin{pmatrix}ax\end{pmatrix}^n = a^n.x^n\] watch tutorial 4 Tutorial 5 In the following tutorial we show, in detail, how to write all the terms in the expansion of: \[\begin{pmatrix} 2 + x^2\end{pmatrix}^4\] We do this using the binomial expansion formula and using the fact that: \[\begin{pmatrix} 2 + x^2\end{pmatrix}^4 = \begin{pmatrix} 2 + \begin{pmatrix} x^2 \end{pmatrix} \end{pmatrix}^4\] Along with the following rule for exponents: \[\begin{pmatrix}x^m\end{pmatrix}^n = x^{m\times n}\] watch tutorial 5 Exercise 3 Write all the terms in the expansion of each of the following binomials: 1. \(\begin{pmatrix}2x + 3 \end{pmatrix}^3\) 2. \(\begin{pmatrix} 4x - 3 \end{pmatrix}^4\) 3. \(\begin{pmatrix}1 - 3x \end{pmatrix}^5\) 4. \(\begin{pmatrix} 5 + 3x \end{pmatrix}^4\) 5. \(\begin{pmatrix} 2x + 1 \end{pmatrix}^6\) 6. \(\begin{pmatrix} 2+x^2 \end{pmatrix}^4\) 7. \(\begin{pmatrix} x^3 - 1 \end{pmatrix}^6\) 8. \(\begin{pmatrix}2x^2 + 3 \end{pmatrix}^5\) Answers Without Working 1. \(\begin{pmatrix}2x + 3 \end{pmatrix}^3 = 8x^3 + 36x^2 + 54x + 27\) 2. \(\begin{pmatrix} 4x - 3 \end{pmatrix}^4 = 256x^4 - 768x^3 + 864x^2 - 432x + 81 \) 3. \(\begin{pmatrix}1 - 3x \end{pmatrix}^5 = 1 - 15x + 90x^2 - 270x^3 + 405x^4 + 243x^5\) 4. \(\begin{pmatrix} 5 + 3x \end{pmatrix}^4 = 625 + 1500x + 1350x^2 540x^3 + 81x^4 \) 5. \(\begin{pmatrix} 2x + 1 \end{pmatrix}^6 = 64x^6 + 192x^5 + 240x^4 + 160x^3 + 60x^2 + 12x + 1\) Calculator Technique In the following tutorial we learn how to calculate the binomial coefficient with a calculator. The calculator used here is the TI NSpire CX Tutorial
__label__pos
1
[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index] Re: [libvirt] adding bandwidth control support - new updates Hi, Here are some updates on this work: It is common that user want to set up backend device for a virtual interface over a VLAN. So, one more option is added, '--vlanid', to specify the VLAN ID user want to attach to the backend device supporting this virtual interface. So that the shell script responsible for setting up the backend device can do all the configuration for user. It was pointed out that 'bandwidth control' is not a clear statement. We can only support setting upper limit of the bandwidth for a virtual interface for now. We are not able to reserve a specific bandwidth for it. So, the name of the new option for 'virsh attach-interface' is changed from '--rate' to '--capped-bandwidth' to remove the ambiguity and leave room for reserved bandwidth in the future. Based on the above changes, the XML format changed from my previous proposal: <interface type='bridge'> <source bridge='e1000g1'/> <flowcontrol> <rate unit='megabit' period='second' value='100'/> </flowcontrol> </interface> to something looks like below: <interface type='bridge'> <source bridge='e1000g1'/> <networkresource> <capped-bandwidth unit='megabit' period='second' value='100'/> </networkresource> <vlan id='1'/> </interface> Note that we also change the element name from 'flowcontrol' to 'networkresource' so that we can add more QoS related (not just related to bandwidth) parameters into this element in the future, if needed. There are also changes in virt-install command line options. Like '--disk' option, I also grouped network related optioins into properties of -w/--network option. So, instead of adding '--capped-bandwidth' and '--vlanid' options, I choose to add two properties for -w/--network option. Now, we have: # virt-install ... --network bridge=eth0,vlanid=2,mac=aa:0:1:2:3:4,capped-bandwidth=200M. Old style of syntax is still supported, but should be obsoleted. Any comment? Thanks, Max [Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]
__label__pos
0.553798
External Login (No PHP) • Hey all, I searched through the forum and found a lot of similar threads, but all were taking care of logins written in PHP/HTML where just the github loginHelper was imported. I want to use the login details to use them to login into a game account, so I want to know if there is something like a RESTful Service behind that or if there is some plugin available I did not find cause I am maybe to dumb to find good keywords for my issue. So some API kind of thing would help a lot too. I would need to check the following: • POST username and pass => returns a value that represents correctness • GET id of user? so I can • GET if the user is in an specific group or the groupID > (non-whitelist)-ID I want to implement this into JS (JS only, no HTML,CEF etc if possible), so creating webrequest would be my favourity way to go, since this would happen serverside. If I would use CEF this would be clientside but I do not want to save any of these API data clientside or make it manipulatable. Cheers. • No. there is some possibility doing it with WCF or whatever its called but only with php? I saw you writing in different threads regarding that as I searched through the forum. Could you give me an link to an entry point to read what and how and where to do such things? • I'm afraid most of the useful threads are in German if I remember correctly (possible search parameters are externer login externe registrierung) , so I might just give you an example and try to explain a little bit: You can find all the sources of WCF 2.1 (since you posted in this section) oin GitHub: https://github.com/WoltLab/WCF/tree/2.1.24/ I added links to the used classes wihtin the code, you might want to check them in case to implement more queries. If you send a POST-request against api.php?action=getGroupIDs and send the following parameter as POST: Code username you'll get an array of integers containing the user's group ids. (I implemented it this way because it might be more flexible to use instead of checking a single id). If you send a POST-request against api.php?action=checkCredentials and send the following parameter as POST: Code username password you'll get whether the user exists and it's id. Keep in mind that the clear type password is used, so don't do this without encryption. We could check the hashed password in the database, but I'm pretty sure your requesting software doesn't own the hashed version, am I right? Is it helpful; do you have further questions? Btw. I didn't test the script. Don't wonder if it behaves strange; don't hastitate to ask then. ;) Participate now! Don’t have an account yet? Register yourself now and be a part of our community!
__label__pos
0.709515
Etiquetado y políticas de control de acceso - Amazon Simple Storage Service Etiquetado y políticas de control de acceso También puede usar políticas de permisos (políticas de bucket y usuario) para administrar los permisos relacionados con el etiquetado de objetos. Para ver acciones de políticas, consulte los siguientes temas: Las etiquetas de objetos permiten un control de acceso pormenorizado para administrar permisos. Puede otorgar permisos condicionales en función de las etiquetas de objetos. Amazon S3 admite las siguientes claves de condiciones que puede usar para conceder permisos condicionales basados en etiquetas de objetos. • s3:ExistingObjectTag/<tag-key>: use esta clave condicional para verificar una etiqueta de objeto existente tiene una clave y un valor específicos para la etiqueta. nota Al conceder permisos para las operaciones PUT Object y DELETE Object, esta clave condicional no se admite. Es decir, no puede crear una política para conceder o denegar permisos a un usuario para eliminar o sobrescribir un objeto en función de sus etiquetas existentes. • s3:RequestObjectTagKeys: use esta clave condicional para restringir las claves de etiqueta que quiera permitir en objetos. Esto resulta útil al agregar etiquetas a objetos con PutObjectTagging y PutObject y con las solicitudes POST para objetos. • s3:RequestObjectTag/<tag-key>: use esta clave condicional para restringir las claves y valores de etiqueta que quiera permitir en objetos. Esto resulta útil al agregar etiquetas a objetos con PutObjectTagging y PutObject y con las solicitudes POST para buckets. Para obtener una lista completa de las claves condicionales específicas de servicio de Amazon S3, consulte Ejemplos de claves de condición de Amazon S3. Las siguientes políticas de permisos ilustran cómo el etiquetado de objetos facilita una administración de permisos de acceso pormenorizada. ejemplo 1: Permitir a un usuario leer solo los objetos que tienen una etiqueta específica La siguiente política de permisos concede a un usuario permiso para leer objetos, pero la condición limita el permiso de lectura a objetos que tengan los siguientes valor y clave específicos de la etiqueta. security : public Tenga en cuenta que la política usa la clave condicional de Amazon S3, s3:ExistingObjectTag/<tag-key> para especificar la clave y el valor. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::awsexamplebucket1/*", "Principal": "*", "Condition": { "StringEquals": {"s3:ExistingObjectTag/security": "public" } } } ] } ejemplo 2: Permitir a un usuario agregar etiquetas de objetos con restricciones sobre las claves de etiqueta permitidas La siguiente política de permisos concede permisos a un usuario para realizar la acción s3:PutObjectTagging, lo que permite al usuario agregar etiquetas a un objeto existente. La condición limita las claves de etiqueta que puede usar el usuario. La condición usa la clave condicional s3:RequestObjectTagKeys para especificar el conjunto de claves de etiqueta. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:PutObjectTagging" ], "Resource": [ "arn:aws:s3:::awsexamplebucket1/*" ], "Principal":{ "CanonicalUser":[ "64-digit-alphanumeric-value" ] }, "Condition": { "ForAllValues:StringLike": { "s3:RequestObjectTagKeys": [ "Owner", "CreationDate" ] } } } ] } La política garantiza que el conjunto de etiquetas, si se especifica en la solicitud, tenga las claves especificadas. Un usuario podría enviar un conjunto de etiquetas vacías en PutObjectTagging, lo cual está permitido por esta política (un conjunto de etiquetas vacío en la solicitud elimina las etiquetas existentes en el objeto). Si quiere evitar que un usuario elimine el conjunto de etiquetas, puede agregar otra condición para garantizar que el usuario proporcione al menos un valor. El ForAnyValue de la condición garantiza que al menos uno de los valores especificados estará presente en la solicitud. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:PutObjectTagging" ], "Resource": [ "arn:aws:s3:::awsexamplebucket1/*" ], "Principal":{ "AWS":[ "arn:aws:iam::account-number-without-hyphens:user/username" ] }, "Condition": { "ForAllValues:StringLike": { "s3:RequestObjectTagKeys": [ "Owner", "CreationDate" ] }, "ForAnyValue:StringLike": { "s3:RequestObjectTagKeys": [ "Owner", "CreationDate" ] } } } ] } Para obtener más información, consulte Creación de una condición que pruebe valores de varias claves (operaciones de definición) en la guía para usuarios de IAM. ejemplo 3: Permitir a un usuario agregar etiquetas de objetos que incluyan una clave y un valor de una etiqueta específica La siguiente política de usuario concede permisos a un usuario para realizar la acción s3:PutObjectTagging, lo que permite al usuario agregar etiquetas a un objeto existente. La condición requiere que el usuario incluya una etiqueta específica (Project) con un valor X. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:PutObjectTagging" ], "Resource": [ "arn:aws:s3:::awsexamplebucket1/*" ], "Principal":{ "AWS":[ "arn:aws:iam::account-number-without-hyphens:user/username" ] }, "Condition": { "StringEquals": { "s3:RequestObjectTag/Project": "X" } } } ] }
__label__pos
0.821236
0 0 This might be a FMODex issue but I’ll post it here anyway. With the latest debug libs I have been trying out things have started to run really, really slow. A not very wild guess is that writing to logfile is the cause for this. Correct me if I’m wrong. So, assuming I’m right then how do I disable this logging? If not then why is it so slow? • You must to post comments 0 0 [quote="brett":1ldmigbn]The library is not a ‘debug’ library anyway. It is simple a version of fmod with logging. If you don’t want logging use the release version.[/quote:1ldmigbn] Does this mean that "debug" FMOD libraries don’t themselves link against debug versions of other libraries? Because we would like to run debug versions of our code WITHOUT the FMOD logging. Thanks. • You must to post comments 0 0 They’re not debug as I said. They are release with logging. If your code is debug or release is not really relevant anyway, people are already doing this with the normal dll. If you dont want logging dont use the logging dll. • You must to post comments 0 0 Please, tell me how to solve this since it slows down my work alot. • You must to post comments 0 0 [quote="Ljudas":dgq30eh1]Please, tell me how to solve this since it slows down my work alot.[/quote:dgq30eh1] There should be a FMOD::gDebugMode FMOD::gDebugLevel etc. variables, controlling what to be logged as bitfields (gDebugMode) and to what verbosity (gDebugLevel). If you have the FMOD source code (we are licensed to use it), then you can patch that directly (as I did). • You must to post comments 0 0 Have a look at this thread: http://52.88.2.202/forum/viewtopic.php … debuglevel I ran into the same issue. • G • You must to post comments 0 0 I get the following error if I try that approach: error LNK2001: unresolved external symbol "enum FMOD::FMOD_DEBUGLEVEL FMOD::gDebugLevel" (?gDebugLevel@FMOD@@3W4FMOD_DEBUGLEVEL@1@A) Could it be cause I don’t have the source??? • You must to post comments 0 0 I still can’t get rid of the debuglog. Anyone who has a better solution than this in my case non-working hack? As it is right now the debug libs are more or less useless since they stall the game for seconds. • You must to post comments 0 0 did you read the whole thread. If you wrap the code pasted in namespace FMOD {} it will link. The debug will stop if you set it to 0. The library is not a ‘debug’ library anyway. It is simple a version of fmod with logging. If you don’t want logging use the release version. • You must to post comments 0 0 This is what happens when I try to link: Testbed (Core) error LNK2001: unresolved external symbol "enum FMOD::FMOD_DEBUGLEVEL FMOD::gDebugLevel" (?gDebugLevel@FMOD@@3W4FMOD_DEBUGLEVEL@1@A) This is what I did in my code: [code:w3oiz60u] namespace FMOD { typedef enum { LOG_ERROR = 0x00000001, LOG_WARNING = 0x00000002, LOG_TUTORIAL = 0x00000004, LOG_NORMAL = 0x000000FF, LOG_MEMORY = 0x00000100, LOG_THREAD = 0x00000200, LOG_FILE = 0x00000400, LOG_TYPEMASK = 0x0000FFFF, LOG_WIN32 = 0x00010000, LOG_PS2 = 0x00020000, LOG_XBOX = 0x00040000, LOG_GC = 0x00080000, LOG_LINUX = 0x00100000, LOG_MAC = 0x00200000, LOG_CE = 0x00400000, LOG_PLATFORMMASK = 0x00FF0000, LOG_TIMESTAMPS = 0x01000000, LOG_LINENUMBERS = 0x02000000, LOG_COMPRESS = 0x04000000, LOG_CONTROLMASK = 0x0F000000, LOG_BRETT = 0x10000000, LOG_ANDREW = 0x20000000, LOG_CHENPO = 0x40000000, LOG_USERMASK = 0xF0000000, LOG_ALL = 0xFFFFFFFF } FMOD_DEBUGLEVEL; extern FMOD_DEBUGLEVEL gDebugLevel; } // Constructor { FMOD::gDebugLevel = FMOD::LOG_NORMAL; } [/code:w3oiz60u] What’s wrong here? • You must to post comments Showing 9 results Your Answer Please first to submit.
__label__pos
0.745035
Codeforces Round #383 (Div. 2) C. Arpa's loud Owf and Mehrdad's evil plan 問題 Problem - C - Codeforces 考察 色々と試して見ると、まず「輪っかにならない長さ2以上の数列」ができるとアウトになることがわかります。 また、輪っかの長さが偶数ならその半分の長さで済み、奇数ならその長さが必要なこともわかります。 1つの問題中に複数の輪っかが同時に出るので、それらを全て満たす長さ(それぞれ必要な長さのlcm)を取れば良いです。 輪っかになっているかどうかはdfsを2周すると求めることができます。 また、輪っかの長さはUnionFindを使うと簡単に求まります(dfsでまとめてやってもOKだと思います)。 コード struct UnionFind { vector<int> data; UnionFind(int size) : data(size, -1) {} bool unionSet(int x, int y) { x = root(x); y = root(y); if (x != y) { if (data[y] < data[x]) swap(x, y); data[x] += data[y]; data[y] = x; } return x != y; } bool findSet(int x, int y) { return root(x) == root(y); } int root(int x) { return data[x] < 0 ? x : data[x] = root(data[x]); } int size(int x) { return -data[root(x)]; } }; vi d(110, -1); vvi graph; void dfs(int v) { if (d[v] == -1) { d[v] = 0; } else if (d[v] == 0) { d[v] = 1; } else if (d[v] == 1) { return; } tr(it, graph[v]) { dfs(*it); } } ll gcd(ll a, ll b) { if (b == 0) return a; return gcd(b, a % b); } ll lcm(ll m, ll n) { if ((0 == m) || (0 == n)) return 0; return ((m / gcd(m, n)) * n); } int main() { int n; cin >> n; graph.resize(n); d.resize(n); UnionFind uf(n); rep(i, n) { int t; cin >> t; t--; uf.unionSet(i, t); graph[i].push_back(t); } rep(i, n) { fill(all(d), -1); dfs(i); rep(j, n) { if (d[j] == 0) { cout << -1 << endl; return 0; } } } ll ans = 1; rep(i, n) { int t = uf.size(i); if (t == 1 || t == 2) { } else { if (t % 2 == 0) ans = lcm(ans, t / 2); else ans = lcm(ans, t); } } cout << ans << endl; return 0; } 感想 罠は無いですがタイトルが長いです。
__label__pos
0.987852
若要檢視英文版的文章,請選取 [原文] 核取方塊。您也可以將滑鼠指標移到文字上,即可在快顯視窗顯示英文原文。 譯文 原文 partial (C++ 元件擴充功能)   partial 關鍵字允許在不同的檔案中獨立撰寫相同 ref 類別的不同部分。 (這個語言功能只適用於 Windows 執行階段)。 對於有兩個部分定義的 ref 類別,partial 關鍵字會套用至定義的第一個項目,這通常是由自動產生的程式碼完成,因此程式碼編寫人員一般都不會經常使用這個關鍵字。  對於類別所有的後續部分定義,請省略「類別機碼」(Class-key) 關鍵字及類別識別項中的 partial 修飾詞。  當編譯器遇到先前定義的 ref 類別與類別識別項,但沒有 partial 關鍵字時,就會在內部將 ref 類別定義的所有部分合併一個定義。   partial class-key identifier { /* The first part of the partial class definition. This is typically auto-generated*/ } // ... class-key identifier { /* The subsequent part(s) of the class definition. The same identifier is specified, but the "partial" keyword is omitted. */ } class-key 宣告 Windows 執行階段 所支援之類別或結構的關鍵字。   ref classvalue classref structvalue struct   identifier 定義的型別的名稱。 部分類別會下列支援情節:您在一個檔案中修改類別定義的某個部分,而自動產生程式碼軟體 (例如 XAML 設計工具) 也在另一個檔案中修改相同類別中的程式碼。  您可以使用部分類別,防止自動程式碼產生器覆寫您的程式碼。  在 Visual Studio 專案中,會對產生的檔案自動套用 partial 修飾詞。   內容:有兩個例外情況。如果省略 partial 關鍵字,則部分類別定義可以包含完整類別定義所能包含的任何項目。  不過,您無法指定類別存取範圍 (例如 public partial class X {…};) 或 declspec   用於identifier 之部分類別定義的存取規範,不會影響 identifier 後續之部分或完整類別定義中的預設存取範圍。  允許靜態資料成員的內嵌定義。   宣告:identifier 類別的部分定義只引入名稱 identifier,但是 identifier 無法在需要類別定義的方式下使用。  名稱 identifier 無法用來得知 identifier 的大小,也無法藉以在編譯器遇到 identifier 的完整定義之前使用 identifier 的基底或成員。   數字和順序:identifier可以有零個或多個部分類別定義。   identifier 的所有部分類別定義都必須在語彙上居於 identifier 的某個完整定義之先 (如果有完整定義的話。否則,除非是在向前宣告的情況下,不然就無法使用該類別),但是不需要在 identifier 的向前宣告之前。  所有的類別機碼都必須相符。   在進行類別 identifier 的完整定義時,運作方式會如同 identifier 的定義已宣告所有的基底類別、成員等項目 (宣告順序取決於在部分類別中發現及定義這些項目的順序)。 範本:部分類別不可以是範本。 泛型:如果完整定義可以是泛型,部分類別就可以是泛型。  但是每個部分或完整類別都必須具有完全相同的泛型參數,包括型式參數名稱。   如需使用 partial 關鍵字的詳細資訊,請參閱 部分類別 (C++/CX) 編譯器選項:/ZW (這個語言功能不適用於 Common Language Runtime)。 顯示:
__label__pos
0.565679
Scilab Home page | Wiki | Bug tracker | Forge | Mailing list archives | ATOMS | File exchange Please login or create an account Change language to: Français - Português - 日本語 Please note that the recommended version of Scilab is 6.0.2. This page might be outdated. See the recommended documentation of this function Scilab manual >> Linear Algebra > kroneck kroneck Kronecker form of matrix pencil Calling Sequence [Q,Z,Qd,Zd,numbeps,numbeta]=kroneck(F) [Q,Z,Qd,Zd,numbeps,numbeta]=kroneck(E,A) Arguments F real matrix pencil F=s*E-A E,A two real matrices of same dimensions Q,Z two square orthogonal matrices Qd,Zd two vectors of integers numbeps,numeta two vectors of integers Description Kronecker form of matrix pencil: kroneck computes two orthogonal matrices Q, Z which put the pencil F=s*E -A into upper-triangular form: | sE(eps)-A(eps) | X | X | X | |----------------|----------------|------------|---------------| | O | sE(inf)-A(inf) | X | X | Q(sE-A)Z = |---------------------------------|----------------------------| | | | | | | 0 | 0 | sE(f)-A(f) | X | |--------------------------------------------------------------| | | | | | | 0 | 0 | 0 | sE(eta)-A(eta)| The dimensions of the four blocks are given by: eps=Qd(1) x Zd(1), inf=Qd(2) x Zd(2), f = Qd(3) x Zd(3), eta=Qd(4)xZd(4) The inf block contains the infinite modes of the pencil. The f block contains the finite modes of the pencil The structure of epsilon and eta blocks are given by: numbeps(1) = # of eps blocks of size 0 x 1 numbeps(2) = # of eps blocks of size 1 x 2 numbeps(3) = # of eps blocks of size 2 x 3 etc... numbeta(1) = # of eta blocks of size 1 x 0 numbeta(2) = # of eta blocks of size 2 x 1 numbeta(3) = # of eta blocks of size 3 x 2 etc... The code is taken from T. Beelen (Slicot-WGS group). Examples F=randpencil([1,1,2],[2,3],[-1,3,1],[0,3]); Q=rand(17,17);Z=rand(18,18);F=Q*F*Z; //random pencil with eps1=1,eps2=1,eps3=1; 2 J-blocks @ infty //with dimensions 2 and 3 //3 finite eigenvalues at -1,3,1 and eta1=0,eta2=3 [Q,Z,Qd,Zd,numbeps,numbeta]=kroneck(F); [Qd(1),Zd(1)] //eps. part is sum(epsi) x (sum(epsi) + number of epsi) [Qd(2),Zd(2)] //infinity part [Qd(3),Zd(3)] //finite part [Qd(4),Zd(4)] //eta part is (sum(etai) + number(eta1)) x sum(etai) numbeps numbeta Scilab Enterprises Copyright (c) 2011-2017 (Scilab Enterprises) Copyright (c) 1989-2012 (INRIA) Copyright (c) 1989-2007 (ENPC) with contributors Last updated: Wed Jan 26 16:23:41 CET 2011
__label__pos
0.731796
Source riddle-not-a-not-b-not-c / riddle-not-a-not-b-not-c / not-a-not-b-not-c.txt Expression: |ABC | |000|001|010|011|100|101|110|111| ----------------------------+---+---+---+---+---+---+---+---| ~A+~B+~C | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | ----------------------------+---+---+---+---+---+---+---+---| ~A~B~C | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ----------------------------+---+---+---+---+---+---+---+---| A | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | ----------------------------+---+---+---+---+---+---+---+---| B | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | ----------------------------+---+---+---+---+---+---+---+---| C | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | ----------------------------+---+---+---+---+---+---+---+---| ~A~B+~A~C+A~B~C | 1 1 1 0 1 0 0 0 | ----------------------------+---+---+---+---+---+---+---+---| ~(AB+AC+BC) | 1 1 1 0 1 0 0 0 | ----------------------------+---+---+---+---+---+---+---+---| ~(A XOR B XOR C) | 1 0 0 1 0 1 1 0 | ----------------------------+---+---+---+---+---+---+---+---| ~A~B + A = A + ~B A~B = A~B ~A~B~C + A = A + ~B~C A(~A+~B+~C) = A~B + A~C = A (~B + ~C) B A (~B + ~C) = AB~C (and also ~ABC and A~BC ) AB~C + ~ABC = C (AB~ + ~AB) = C (A XOR B) C (AB~ + ~AB) + ABC = C (AB~ + ~AB + AB) = C (A + B) C (AB~ + ~AB) + ~A~B~C = AB~C (A + ~B~C) = AB~C AB~C (C + ~A~B) = False A + ~B~C + B = A + B + ~C (A + B + ~C) (~A + ~B + ~C) = A~B + B~A + ~C = ~C + (A XOR B) A~B + B~A + ~C + A~C + C~A + ~B = ~B + ~C + ~A (B + C) A ( ~B + ~C + ~A (B + C) ) = A~B + A~C = A (~B + ~C) AB~C + ~A~B~C = ~C (AB + ~A~B) (~B + ~C + ~A (B + C) )*(B+C) = ~BC + ~CB + ~A(B + C) = ~BC + ~CB + ~AB + ~AC = B (~A + ~C) + C (~A + ~B) ------------------------- Contemplating: -------------- AB~C + ~A~B~C + ~AB~C + A~B~C = ~C Alternatives: ------------- ~(AB + BC + AC) = (~A + ~B)(~A + ~C)(~B + ~C) = (~A + ~B~C)(~B + ~C) = ~[(A+B)(B+C)(A+C)] = ~(A+B) + ~(B+C) + ~(A+C) = ~A~B + ~B~C + ~A~C A (~A~B + ~B~C + ~A~C) = A~B~C A~B~C + ~AB~C = ~C(A XOR B) ~A~B + ~B~C + ~A~C + A = A + ~B + ~C A~B~C + ~AB~C + C = (A XOR B) + C = C + A~B + ~AB B*(C + A~B + ~AB) = BC + ~AB = B (C + ~A) BC + ~AB + ~BA + AC = C(A + B) + (A XOR B) A~B~C + B = B + A~C (B + A~C) + (~A~B + ~B~C + ~A~C) = ~A + ~C + B (A + ~B + ~C) * (~AB~C) = ~AB~C / ~A~B -> A + ~B ; B + ~A ; ~A~B~C ; A + ~C ; C + ~A ; ~A (~B + ~C) | -> ~A~B + ~A~C \ ~A~C ->
__label__pos
1
« first day (3186 days earlier)      last day (117 days later) »  12:13 AM @user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).   12:53 AM @user76284 If you're interested, I think mihaild help me clear this up here: math.stackexchange.com/questions/3197468/… See the comments on his answer.   Ah, that's for en.wikipedia.org/wiki/…. My comment above applies for en.wikipedia.org/wiki/…. There are two kinds of multigraph.   1:37 AM hi @ted     1 hour later… 2:43 AM I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x _1 ,x _2 ,...,x _n ) = x _1 x _2 ···x _n$ has $2^ {n+1} −2$ non-constant polynomials in R dividing it. But, for $n=2$, I ca'nt find any other non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$ oh! -x, -y, -xy sorry     2 hours later… 4:38 AM @TedShifrin Hi all and hi Ted! Classic question. But I decided to learn on that. I got a lot more more pickier recently to what I agree to review.     3 hours later… 7:15 AM Hey y'all Heile @Rudi   7:44 AM Let $G$ and $H$ be groups. Is there a name for elements $\langle a, b \rangle$ of the wreath product $G \wr H = G^{\oplus H} \rtimes H$ such that $a(b) = e_G$?     4 hours later… 11:48 AM So, just to make sure I'm not missing something, $-\sqrt{-30}$ and $\sqrt{-30}$ are automatically irreducible in $\mathbb{Z}[\sqrt{-30}]$, right?   12:03 PM Why do we say that complex holomorphic functions are analytic locally? LeakyNun gave me an example, but I am sorry to say that i lost it.   12:29 PM hello folks. i have to solve an ODE but i dont know how to go about it $\lambda^2 f + f'' = A\cos\big((\lambda - 2)\theta\big)$ im given some sine products manipulations as hints but i dont know how to use them. if somebody could point me to some ressource or give me a hint i would be grateful   TP: $N_n = \frac 1 1 + \frac 1 2 + \frac 1 3 + \frac 14 + \dots$ the function $N$ has no Integers as range except for $1,3$ 2. Where does the series converges? $$\sum \limits_{n=1}^{\infty}\frac 1 {3^n + 1}$$?   @Rithaniel what is $\Bbb Z[\sqrt{-30}]/(\sqrt{-30})$?   12:45 PM Wouldn't that be the trivial object? (Had to work it out a little bit in my head)   No, what does an element of $\Bbb Z[\sqrt{-30}]$ look like?   $a+b\sqrt{-30}$ of course   with $a, b \in \Bbb Z$ when is an element equal to $0$ in the quotient?   When it is in the ideal generated by $\sqrt{-30}$ (which is what I was thinking about. I'm not sure what that ideal looks like)   $(\sqrt{-30}) = \lbrace r\sqrt{-30} : r \in \Bbb Z[\sqrt{-30}]\rbrace$   12:50 PM The thing I'm not sure about is that $(a+b\sqrt{-30})\sqrt{-30}=-30b+a\sqrt{-30}$, so we have anything with first part equal to a multiple of 30, right? So would that make this quotient isomorphic to $\mathbb{Z}/30\mathbb{Z}$?   err 'ang on   @Silent $\sum z^n$ $\Bbb Z[\sqrt{-30}]/(\sqrt{-30}) = \Bbb Z[X]/(X^2+30,X) = \Bbb Z[X]/(30,X) = (\Bbb Z/30\Bbb Z)[X]/(X) = \Bbb Z/30\Bbb Z$ @Rithaniel @ÍgjøgnumMeg   Right that's what I was expecting so ignore me lol I'm so rusty   Well, I got the trivial object my first work through. Though, why are we talking about the quotient by that particular ideal? I'm not familiar enough with this stuff to know what that was supposed to indicate.   it's supposed to be irreducible iff the quotient is an integral domain   1:00 PM Well for some reason I was expecting $\Bbb Z$ to come out which would make the quotient an integral domain but I was mistaken lol For reference @Rithaniel, an ideal $\mathfrak{p}$ is prime iff $R/\mathfrak{p}$ is an integral domainj   Ooooh, okay. So, this actually tells us that $\sqrt{-30}$ is reducible?   well $\Bbb Z/30\Bbb Z$ isn't an integral domain hahaha oops misread jesus christ   Lol, you're good. It's early morning (at least where I am)   it's 14:03 here and I'm at work lol   Sometimes the morning lasts well into the afternoon.   1:06 PM no it doesn't since $\Bbb Z[\sqrt{-30}]$ is not a principle ideal domain   right   Ah, so this quotient theorem only applies in PIDs?   no it applies in general but ideals don't behave the same way that elements do in non-PIDs, so maximal ideal $\iff$ generated by irreducible element doesn't hold, I think   Ah wait, an irreducible doesn't have to be prime and the theorem talks about primes, right?   can we factorize $(\sqrt{-30})$ then   1:10 PM Don't think so, I think it's maximal among principal ideals @Rithaniel I've hijacked your question a bit because it's highlighted some stuff I need to refresh lol   Ah, you're fine, that's how a conversation will usually go. I think I can show $\sqrt{-30}$ is irreducible by using the norm function, though.   1:28 PM 2 Q: Open Sets in the Wedge Sum and a Homeomorphism user193319I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set-up some notation. Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...   Please help in the proof: I can't see how tail going to zero says series convergent. I know that converse is true.   @Silent it's the basic theorem of analysis. $\sum a_n$ converges $\implies a_n\to 0$. Or is that not your problem?   Can anyone help ?   No, no! In the slide, professor says: $\sum_{N+1}^{\infty}|c_n|\to 0$ hence $\sum_0^{\infty} |c_n|$ converges. I can't follow that reasoning. @anakhro yes, this is not my problem :)   1:44 PM Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)   Isn't that clear, @Silent? Since $\sum_{N}^\infty |c_n|$ converges to 0, then it is bounded. So you just take the finite sum at the start of the sequence and add it to the bound.   oh!! thank you very much   Once again, Travelocity customer service can go f itself   ok so I just want to know why this post is going to be closed math.stackexchange.com/questions/3200274/…   How do you know it is going to be closed?   1:57 PM Told me on Friday that they couldn’t get in touch with Icelandair until Monday since it was a holiday for Icelandair. Okay, fine   Well it only needs two more votes   On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case   I mean I will just post it again there is nothing wrong with it   @Adam Can you not see what the votes are saying?   No response on Wednesday at all. Contacted them again today and they said they’ll review my messages...   2:01 PM it says it's unclear what I am asking and I clearly am asking for someone to provide a counter example   so, fingers crossed that they actually book the corrected flight this time   2:15 PM @Adam I don't actually see you ever ask for a counter example in the question. That would be a good edit to make. You only make a vague comment about "counterexample sought" in the title. So it's not clear at all that that is what you are asking. So I highly recommend you include a bold comment in the question body that says explicitly what you need help for.   @anakhro tail of harmonic series not go to zero?   @Secret if $\lim_{N\to\infty}\sum_{n = N}^\infty a_n = 0$, then you can find an $N_0$ such that $\sum_{n=N_0}^\infty a_n < M <\infty$. Sorry, made a lot of typographical errors. :P   2:39 PM @anakhro did you read the question title?   2:54 PM @Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question. Moreover, the title is vague and doesn't clearly ask a question. And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed. If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.   3:17 PM meh it doesn't really matter I've found them anyway   Well now you know the reason people were voting to close.   3:35 PM but if a title inherently states what the op is looking for I hardly see the fact that it has been explicitly restated as a reason for it to be closed, no it was because I orginally had a lot of errors in the expressions when I typed them out in latex, but I fixed them almost straight away lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre   4:39 PM @Adam I don't think so; I don't know anyone by that name.   5:31 PM Nevermind   6:05 PM Hi, I have the following problem for a math contest: For which value of b is there only one intersection between the line y = x + b and the parabola y = x^2 - 5x + 3? The answer key says the answer is -6 How did it get that answer?   Combine the two equations and rearrange them to get a quadratic equation, then calculate the discriminant.   I have a nother problem: If Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60mph. How long until Train B overtakes Train A? I got 6 P.M. but the answer key says 4 P.M.   illinois is 2 hours behind California ;)   Isn't Illinois 2 hours ahead? but I get what your saying   yeah you're right lol, was just making a joke anyway I'm not from the US so idk   6:19 PM oh ok   7:01 PM @swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying But 240 miles seems waaay to short to cross two time zones So my inclination is to say the answer key is nonsense   ive got a confession to make folks i dont actually understand math i just think the starred messages are funny 6     2 hours later… 8:59 PM Hi swagwagon Hi chat   9:15 PM r there closed forms for the sums of the prime zeta function   10:05 PM 3 A: Will all solutions of this ODE look like this? ChappersYou can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form $$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$ (Obvi... Hi there, I'm currently going through a proof of why all general solutions to second ODE look the way they look. I have a question mark regarding the linked answer. Where does the term e^{(r_1-r_2)x} come from? It seems like it is taken out of the blue, but it yields the desired result.   you forgot to pop that in $ signs schn   Sorry.   11:02 PM @schn the term for that is that that exponential is an integrating factor   Okay. From what I've learned up to this point, integrating factors have only appeared when solving first ODE. Why does it pop up when trying to solve $\u'' + (r_1-r_2)u' = 0$, which is a second ODE?   Well, it’s a second-order ode in u(x). But if you define $v(x)=u’(x)$ then the ODE is $v’+(r_1-r_2)v=0$, which is first order   Suppose $U$ is open in $X$, $V$ in $Y$, does it follow that $U \vee V$ is open in $X \vee Y$.   @Semiclassica True, so substitution is totally legitimate in this case?   Sure. You solve the ODE for v(x). Then u’(x)=v(x), so you can antidifferentiate to get u(x)   11:12 PM hey joe shmo   But the point is really that the same integrating factor idea works here   Okay. Thanks a lot!   is analytic continuation of a sum like zeta function generally pretty hard to do?   Suppose you’ve got a second-order ODE like $y’’+p(x)y’+q(x)y=0$   I know there is only 1 analytric continuation, if it even exists   11:15 PM You could still look for a function g(x) such that $g(x)y’’+g(x)p(x)y’+g(x)q(x)$ is the derivative of some first-order ODE And that’s a useful idea, in fact But with second-order odes you typically get something leftover that you can’t eliminate. (Look up “canonical form of Sturm-Liouville equations” if you want more on that.)   « first day (3186 days earlier)      last day (117 days later) » 
__label__pos
0.976069
Using D3 Tooltips March 9, 2015 Leave a comment Tooltips can be extremely useful for displaying additional information NOT currently being rendered in the core D3  visualization, be it a chart, graph, map, etc..  Tooltips can also enhance the overall aesthetic value of the visualization through the implementation of CSS and D3 transitions.  Let’s face it…D3 isn’t just about displaying data…it’s about “visualizing” it in such a way that captivates the user and creates a “wow thats cool” experience. Before we get started let me first admit that at this point I’m still a newbie to D3 (only 4 months in) and much of this tutorial has been the result of studying the works of Mike Bostock and the following sites\tutorials\forums I’m a firm believer that the best way to convey a new concept is to first demo a real world example and then break that down into the functional components.   Interestingly enough this tutorial began as an assist to a post made by a fellow techie on Google’s D3 Groups.  He originally requested help on several areas of a bar graph and one that caught my attention was: “displaying the name column on the tooltip”.  After reviewing his code I was quickly able to edit and provide a working solution, which you can view on CODEPEN...perhaps I’m not such a newbie after all..:} tooltips-bargraph Although the solution I provided worked to add the tooltip another fellow D3 enthusiast and more senior coder, Nick,  posted his solution as well on Codepen, which fulfilled all the users requirements, and then some. His code was also more structured and included additional code snippets for using RawGit, CSS styling\animation, as well as a different technique for positioning a tooltip. So let’s start by reviewing the code I posted and then we’ll incorporate Nick’s tooltip code in a future post. Tooltip Implementation The most obvious place to start is: • Adding the tooltip • MOUSEOVER event to display tooltip The code used to add the tooltip var tooltip = d3.select('body').append('div') .style('position','absolute') .style('padding','0 10px') .style('opacity',0) .attr('class','tooltip') Let’s break this down: Position: Absolute – The position of the tooltip will be relative to the where the mouse pointer is located when the mouseover event is initiated. Padding: 0 10px – This will pad the the div 0 px top\bottom and 10px left\right. Opacity: 0 – This makes the tooltip invisible and provides a starting point for the transition() method when the mouseover event it initiated. Class: tooltip – This adds a class to the div which can be used to apply additional CSS styling. Now that we have a div in place let’s add the mouseover\mouseout events. var rect = myChart .on('mouseover', function(d) { tooltip.transition() .style('opacity', .9) .style('background', 'lightsteelblue') tooltip.html( d.name + ": " + d.totalp ) .style('left',(d3.event.pageX - 35) + 'px') .style('top', (d3.event.pageY - 30) + 'px') }) .on('mouseout', function(d) { tooltip.transition() .style('opacity', 0) }) Let’s break this down: .on() – This is a method used to execute an eventlistener and requires the event to listen for, in this case both “mouseover “and “mouseout” and a callback function which executes a block of code, in this case the tooltip.transition() and tooltip.html(). tooltip.transition() – This calls the tooltip class, which was an attribute added to the div when it was created, and then executes the transition() method which changes an elements attributes, in this case the opacity will transition from 0 to .9.   I’ve also added a background color of lightsteelblue.  Transition is also used on the mouseout event to set the opacity back to 0, thereby making the tooltip seemingly disappear. tooltip.html – This adds the text we want to display in the tooltip as well as where to position the tooltip.  The d3.event.pageX and d3.event.pageY are used as the coordinates for positioning the tooltip based on the mouse location.  In this case were looking to enter the tooltip above the mouse by moving it -35px to the left and -30px above. So this lesson provides the basics for adding a tooltip.  In the next blog article I will discuss additional options on positioning the tooltip as I’m too crazy about displaying the data within the bar itself.   Although this technique works fine for a line chart or world map with points of interest it’s not the best implementation for a bar chart.  I’ll also review orgainizing the code a bit more by moving the mouseover\mouseout callback functions into their own named function. Categories: Uncategorized Tags: Powershell: Importing Items into a Sharepoint List March 7, 2012 1 comment I was teaching a Windows Server 2003 to 2008 upgrade class onsite recently for a client and it included several Powershell examples on performing such tasks as installing a role\feature or managing AD users\groups.   While on break I got to speaking with one of the tech’s and asked if he was currently utilizing Powershell to automate common day to day tasks.  His response was not what I expected as he made the statement that he couldn’t see any use for Powershell.   My initial reaction was suprise as I thought any decent tech would want to embrace such a robust and versitile automation technology, especially one that provides a standard platform for managing all things Microsoft.   Read more… Powershell: Create Custom MAC\IP Table February 20, 2012 Leave a comment Local MAC Discovery There are times where I need to determine the MAC address of not only my PC but also the other PC’s on the local network segment.  There are a few different ways to determine the local PC”s MAC address(s) using Powershell: getmac                (ipconfig /all) -match " ([0-9A-Z]{2}[-]){5}[0-9A-Z]{2}$"                (ipconfig /all) | Select-String " ([0-9A-Z]{2}[-]){5}[0-9A-Z]{2}$"                GWMI Win32_NetworkAdapter -f "MacAddress like '%:%'" | Select -expand MacAddress Although they all display the MAC address information for all network adapters the output is done so differently for each command, except for -Match and Select-String as they produce the same output.  Read more… Categories: Powershell Tags: Powershell: Retrieving AD FSMO Role Holders February 18, 2012 2 comments I was recently asked to create a script to display the current FSMO Role holders in an Active Directory domain.  There are 5 FSMO roles and the first domain controller in the forest root domain holds them all by default.  Of the 5 roles, 2 are per forest and 3 per domain.  These roles can be transfered or seized during the lifetime of the AD Domain and it’s important to know what DC’s hold which roles, especially when doing maintenance. Powershell isn’t the only way to retrieve the role holders and both Netdom and NTDSUtil provide the same info, with Netdom being the easier of the two commands to use.  Here is an example of using Netdom: Read more… Categories: Powershell Tags: Powershell: Adding Directories to Path Statement February 10, 2012 1 comment Adding directories to the Path statement is a rarity for most techs these days but on occasion, such as configuring Sharpeoint 2010 to use an Adobe IFilter or my own desire to make it easier to run powershell scripts, updating the Path statement must be done.  Either way it’s just the reason I was looking for to write another powershell script.🙂 I always approach writing powershell code with the intention of making it resuable, mostly in the form of a function.  The function I’ve created this time is called  AddTo-SystemPath I begin by defining parameters.  Since there may be a need to add several directories to the path statement I’ve cast the $PathToAdd variable  as an array. Param( [array]$PathToAdd ) A Foreach loop will then be run against the $PathToAdd variable .  It also seemed like best practice  to make sure that the Path statement didn’t already contain the directory(s) in the $PathToAdd variable so comparison is used inside an If\Else statement. The  $VerifiedPathsToAdd variable is then populated with the directories to add. Foreach($Path in $PathToAdd) { #Verify if the Path statement already contains the folder if($env:Path -like "*$Path*") { Write-Host "$Path already exists in Path statement" } else { $VerifiedPathsToAdd += ";$Path" Write-Host "`$VerifiedPathsToAdd updated to contain: $Path"} I now want to make sure that $VerifiedPathsToAdd contains something, and if so update the Path statement  using the [Environment] class. The code below containing  [Environment]::SetEnvironmentVariable() is too long to display as one line so I’ve divided it into the class and method overloads. [Environment]::SetEnvironmentVariable ("Path",$env:Path + $VerifiedPathsToAdd,"Process") It’s possible to update the Path statement using just the $Env:Path variable, however it’s not persistent and any added values will be lost when the PS session closes. An example of using this non-persistent method is: $ENV:Path = $ENV:Path + ";$Path" The complete If statement containing the persistent method is: #Verify that there is something in $VerifiedPathsToAdd to update the Path statement if($VerifiedPathsToAdd -ne $null) { Write-Host "`$VerifiedPathsToAdd contains: $verifiedPathsToAdd" Write-Host "Adding $Path to Path statement now..." [Environment]::SetEnvironmentVariable("Path",$env:Path + $VerifiedPathsToAdd,"Process") }#End If The complete Function is below: Function AddTo-SystemPath { Param( [array]$PathToAdd ) $VerifiedPathsToAdd = $Null Foreach($Path in $PathToAdd) { if($env:Path -like "*$Path*") { Write-Host "Currnet item in path is: $Path" Write-Host "$Path already exists in Path statement" } else { $VerifiedPathsToAdd += ";$Path" Write-Host "`$VerifiedPathsToAdd updated to contain: $Path"} if($VerifiedPathsToAdd -ne $null) { Write-Host "`$VerifiedPathsToAdd contains: $verifiedPathsToAdd" Write-Host "Adding $Path to Path statement now..." [Environment]::SetEnvironmentVariable("Path",$env:Path + $VerifiedPathsToAdd,"Process") } } } Categories: Powershell Tags: Powershell: Importing Hyper-V VM’s February 2, 2012 5 comments In my previous post Creating Hyper-V Symolic Links using Powershell I created a small but useful function called Create-SymbolicLinks which was used to execute one or more .bat files that created symbolic links to base or middle tier VHD’s as part of the initial classroom VM setup.  Once this was completed the next step was to import the VM’s and of course what better way to automate this then to use Powershell. The first task at hand is to download and import the Hyper-V module from Codeplex.  There are 2 versions of this module available to download, with the latest version being R2 SP1.  Once downloaded I then place it into the directory where I will be running the script\function so that it can be copied to the appropriate Modules directory on the server.   Both the module path and name are defined in the Param statement as follows, along with the path to the VM’s. Param ( $ModulePath = ($env:psmodulePath -split ";")[1], $ModuleName = "HyperV", $path = "C:\Program Files\Microsoft Learning\6419\Drives\" ) The code to copy the HyperV module and import it is: #Copy the HyperV module if it doesn't already exist if(!((Dir $ModulePath) -match $ModuleName)) { Copy-Item .\$ModuleName $ModulePath -Recurse } #Import the HyperV module if not already imported if(!(Get-Module | ?{$_.Name -like $ModuleName})) { Import-Module $ModuleName } Now the real work begins.  I need to determine what VM’s have already imported into Hyper-V to make sure we don’t 1) do more work then is necessary and 2) don’t try overwriting any previously imported VMs.  Doing this involves using Get-VM and extracting just he name property ( or in this case the ElementName property) and putting those results into an array called $ActiveVMs. In order to get a list of the VM’s I need to import I run a Get-ChildItem $Path and extract just the Name property and put the results into an array called $VMsToImport. #Create array to contain active VM's $ActiveVMs = Get-VM | Foreach{$_.ElementName} #Create array to contain VM's to be imported $VMsToImport = (Get-ChildItem $path) | Foreach{ $_.Name } Now comes the interesting part.  How to do a comparision of the two arrays and determine if any VM names overlap.  This seemed like a perfect opportunity to use a regular expression.  I remember reading an article on the Scripting Guys called “Speed Up Array Comparisions in Powershell with a Runtime Regex” where the author discussed the benefits of using the -Match operator with a regular expression  instead of the -Contains comparison operator.  Needless to say using a regular expression was way faster…10x faster and since Powershell is all about automation and efficiency , creating a regex seems like the way to go. [regex]$ActiveVMs_Regex = '(?i)^('+(($ActiveVMs | Foreach {[regex]::Escape($_)})-join "|" )+')$' The only thing left now was to run the -Match comparison and import the VM’s  using Import-VM.   I also needed to use Start-Sleep 5 because during my initial tests ( and there were many of them ) some VM’s weren’t imported.  It was random and not consistent but after having the script pause before each import provided just the rate of success I was looking for.   Powershell you rock!!! #Import the VMs $VMsToImport -notmatch $ActiveVMs_Regex | Foreach{ Import-VM (Join-Path $path $_ ) Start-Sleep 5 } Here is the complete function.. Function Import-VMs { Param ( $ModulePath = ($env:psmodulePath -split ";")[1], $ModuleName = "HyperV", $path = "C:\Program Files\Microsoft Learning\6419\Drives\" ) #Copy the HyperV module if it doesn't already exist if(!((Dir $ModulePath) -match $ModuleName)) { Copy-Item .\$ModuleName $ModulePath -Recurse } #Import the HyperV module if not already imported if(!(Get-Module | Where{$_.Name -like $ModuleName})) { Import-Module $ModuleName } #Create array to contain active VM's $ActiveVMs = Get-VM | Foreach{$_.ElementName} #Create array to contain VM's to be imported $VMsToImport = ( Get-ChildItem $path ) | Foreach{$_.Name} [regex]$ActiveVMs_Regex = '(?i)^('+(($ActiveVMs | Foreach {[regex]::Escape($_)})-join "|" )+')$' #Import the VMs $VMsToImport -notmatch $ActiveVMs_Regex | Foreach{ Import-VM ( Join-Path $path $_ ) Start-Sleep 5 } }#End Function Categories: Powershell Tags: , Backing Up Event Logs using Powershell January 31, 2012 5 comments I was recently asked to create a script that would backup certain event logs ( Application & Security ) to it’s native .evt format and then clear all events from the corresponding logs once complete.  This seemed simple enough although I didn’t recall seeing any parameters in either Get-WinEvent or any of the *-Eventlog cmdlets that provided this functionality.  Then I remembered that when something can’t be done using object specific cmdlet the next possible option is to explore the Win32_* classess.   So I used Get-WMIObject to query possible Win32_* classes that referenced Event Log. Get-WMIObject Win32_*event* -List The query produced the following results: So the question now was which Win32 class to choose from.   I  narrowed it down to the Win32_NTEvent* classes and after some further examination determined that Win32_NTEventLogFile had a method called BackupEventLog.  I was able to make this determination by using Get-Member on the class. Get-WMIObject Win32_NTEventLogFile | Get-Member This query displayed all Properties and Methods of the Event Logs.  I’ve filtered the results to display only the first few Methods The BackupEventLogFile method accepts one overload of System.String type which will be the name of the backup log file with an .evt extension.  The files were going to be backed up daily and then the Event Logs cleared of all events  so I needed to make sure the backup log files had unique names and decided to include the current date in the event log name.  I also needed to use a Foreach loop so as to run the code on several Event Logs in sequence.   I also included the following parameters to make the function more versitile: Param( $Computername = $ENV:COMPUTERNAME, [array]$EventLogs = @("application","security"), $BackupFolder = "C:\BackupEventLogs\" ) Logic was also added to create the $BackupFolder if it didn’t exist. If(!( Test-Path $BackupFolder )) { New-Item $BackupFolder -Type Directory } I called the function Clear-EventLogs and below is the complete script. Function Clear-Eventlogs { Param( $Computername = $ENV:COMPUTERNAME, [array]$EventLogs = @("application","security"), $BackupFolder = "C:\BackupEventLogs\" ) Foreach ( $i in $EventLogs ) { If(!( Test-Path $BackupFolder )) { New-Item $BackupFolder -Type Directory } $eventlog="c:\BackupEventLogs\$i" + (Get-Date).tostring("yyyyMMdd") + ".evt" (get-wmiobject win32_nteventlogfile -ComputerName $computername | Where {$_.logfilename -eq "$i"}).backupeventlog($eventlog) Clear-EventLog -LogName $i }# end Foreach }#end function Clear-Eventlogs The results of running the script are the following log files: Categories: Powershell Tags: ,
__label__pos
0.531887
Changes between Version 8 and Version 9 of MaterialSystem Ignore: Timestamp: 2012-04-15 17:12:00 (13 months ago) Author: Philip Comment: expand on defines Legend: Unmodified Added Removed Modified • MaterialSystem v8 v9   5757  5858== Defines ==   59   60Defines can come from many places: first from the engine, and then from `<define>`s inside `<material>`, `<technique>`, `<pass>`, `<program>`, and from `#define` in the source files, in that order. At each stage, conditionals can refer to the defines from all earlier stages. If a name is defined that was already defined in an earlier stage, its value will be overridden. There's no way to undefine a name, but defining it to `0` should have the same effect. (Shader source files should use `#if` instead of `#ifdef` to correctly handle values of `0`.)   61   62The system of defines and conditionals can support many different ways to implement desired behaviour: to implement two materials, you could have two different shader effects, or one effect with two techniques, or one technique with conditionals to select two paths through the code. Performance is the same with any of those, so good design requires a judgement call to maximise clarity (avoid code duplication, avoid complex conditionals, etc).   63   64Current defines: (this is likely to get outdated, so check the code to verify)  5965  6066Set globally by engine code: 
__label__pos
0.980194
tmux Cheat Sheet Session Commands | Action | Command | |---------------------------|------------------| | Start a named session | tmux new -s NAME | | Detach from session | Ctrl+b d | | List sessions | tmux ls | | Reattach to named session | tmux a -t NAME | Window Pane Commands | Action | Command | |------------------------------|-----------------------| | Make vertical window panes | Ctrl+b % | | Make horizontal window panes | Ctrl+b " | | Switch panes | Ctrl+b <arrow> | | Resize pane (mac) | Ctrl+b :resize-p -L 4 | | Resize pane (GNU) | Ctrl+b Ctrl+<arrow> | Note: The `Ctrl+b :resize-p -D 4` command means to press `Ctrl+b` then press `:` to enter the command line for the rest of the command (e.g. `resize-p -D 4`). The `-D` flag indicates down. The other options are `-U` for up, `-L` for left, and `-R` for right. The "4" in the example is the amount that can be changed to your desired setting. Window Commands | Action | Command | |-------------------------|----------| | Create new window | Ctrl+b c | | Move to next window | Ctrl+b n | | Move to previous window | Ctrl+b p |
__label__pos
0.999819
Ro R I noticed this in the privacy settings. Did I miss it before? Where is this block to make it viewable?   Last update on January 22, 2021 by Ro R. Be the first person to like this. Ro R Any ideas? Be the first person to like this. Do you mean the privacy setting is not viewable to your users? Be the first person to like this. Ro R I'm asking what the privacy setting relates to? Where can anyone see who viewed a profile? Be the first person to like this. JohnJr I think there was a block that showed you the last person who viewed your profile.  I checked the database and it has been removed. https://community.phpfox.com/forum/search?forum_id=0&search%5Bsearch%5D=recently+viewed&search%5Badv_search%5D=0&search%5Buser%5D=&search%5Bforum%5D%5B%5D=&search%5Bdays_prune%5D=-1&search%5Bsubmit%5D= Be the first person to like this. Ro R The setting is still there for some reason. My site testers are asking me about it and I have no clue.  Be the first person to like this. JohnJr I would say you could probably just add the block back in the database under the block table, but we don't know if they removed the code/sql statement that indicated how many recently viewed users to show.  Hopefully, phpfox will chime in here. Ro R Is this monitored phpfox employees at all? Be the first person to like this. Is this monitored phpfox employees at all? Rarely nowadays, maybe because of Corona, but I think it's more to do with letting the community self-help, whilst they charge £1000 for the product and make money from support packages. Ro R It's ridiculous, because when I send tickets in, I've been told that they don't offer one on one support and go to the community. But there's no help here either.   Rarely nowadays, maybe because of Corona, but I think it's more to do with letting the community self-help, whilst they charge £1000 for the product and make money from support packages. Be the first person to like this.
__label__pos
0.787761
Topic: Ensuring Data Reaches Disk 本文翻译自这篇文章:https://lwn.net/Articles/457667/ 对于操作系统来说,最理想的状态就是永不发生崩溃、断电以及硬件故障(如磁盘),这样工程师在编程时就不需要关心这些特殊情况。但软件、硬件故障是常态,理想的状态并不存在。这篇文档主要目的在于解释数据从应用到持久化存储的路径,着重关注路径上哪些地方使用了缓存,最后提出确保数据已经写入持久化存储的方案。 本文讨论过程中的例子用 C 语言书写,但对其它语言同样适用。 I/O Buffering 为了保证程序数据的完整性,软件工程师一定要理解整个操作系统的数据架构、数据流向。数据在进入持久化存储之前可能穿过许多层,如下图所示: 1. 处于最顶端的是正在运行的应用程序,程序中存储着需要持久化的数据。这些数据存放于程序申请的若干块内存空间中。这些数据也可能被传递给一些第三方的 library 中,后者可能也会在内部维持着缓存,但不论是哪种情况,这些数据都生活在应用的寻址空间中。 2. 数据流向的下一层就是 kernel,kernel 维护着内部版本的写回缓存 (write-back cache),写回缓存也被称为页缓存 (page cache)。一些脏页可能在页缓存中生存不定长的时间,具体的时长取决于系统的负载和 I/O 运作模式。 3. 当脏数据最终从页缓存中清除时,将被写入到存储设备中,如磁盘。但磁盘也不一定会立即持久化数据,后者可能继续维持着自己的写回缓存,这种缓存通常是易失性存储,如果数据在缓存中尚未写入时出现断电,则系统将面临数据丢失的风险。 4. 最终,当数据从存储设备的缓存写入存储层时,我们才能认为数据是安全的。 为了进一步解释这种多层缓存现象,我们可以以一个具体的应用为例。假设应用 A 监听某 socket 中的连接,当收到客户端的数据时,将数据写入本地持久化存储中。在断开连接时,A 服务必须保证接收到的数据已经确定写入持久化存储设备中,并向客户端发送 ack 消息。 在接收到客户端的连接后,应用需要从 socket 中读取数据到 buffer 中,下面的示例就将完成这个功能: int socket_read(int sockfd, FILE *outfp, size_t nrbytes) { int ret; size_t written = 0; char *buf = malloc(MY_BUF_SIZE); if (!buf) return -1; while (written < nrbytes) { ret = read(sockfd, buf, MY_BUF_SIZE); if (ret <= 0) { if (errno == EINTR) continue; return ret; } written += ret; ret = fwrite((void *)buf, ret, 1, outfp); if (ret != 1) return ferror(outfp); } ret = fflush(outfp); if (ret != 0) return -1; ret = fsync(fileno(outfp)); if (ret < 0) return -1; return 0; } • 第 5 行就是应用缓存的一个例子,从 socket 中读取的数据将被放入缓存 buf 中 • 由于网络传输数据时快时慢,我们选择预先确定好即将发送的文件大小,然后使用 libc 的 stream 函数 fwrite 和 fflush 来在 lib 层面缓冲数据。第 10 - 21 行就是将数据从 socket 写入 file stream 中的过程。在 22 行处,所有数据都写入到 file stream 中。在 23 行处,file stream 刷出,进入到 kernel buffer 中。 • 第 27 行,数据正式进入到持久化存储中,即 "Stable Storage" 抽象层。 I/O APIs 在上文中,我们强化了一些 APIs 与多层存储模型 (layering model) 之间的关系。在本小节中,我们将 I/O 进一步划分成 3 种类别:system I/O,stream I/O 以及 memory mapped (mmap) I/O。 System I/O System I/O 指的是只有在 kernel 的地址空间中,通过 kernel 系统调用接口才能执行的操作,其中的写操作主要包括: Operation Function(s) Open open(), creat() Write write(), aio_write(), pwrite(), pwritev() Sync fsync(), sync() Close close() Stream I/O Stream I/O 主要是应用程序通过 C library 的 stream 接口触发的操作。Stream I/O 中的写操作不一定触发相应的系统调用,这意味着在执行 Stream I/O 中的操作后,数据可能仍然存在于应用缓存,即应用自己的内存空间中。其中的操作主要包括: Operation Function(s) Open fopen(), fdopen(), freopen() Write fwrite(), fputc(), fputs(), putc(), putchar(), puts() Sync fflush() followed by fsync() or sync() Close fclose() Memory Mapped I/O Mmap I/O 与 System I/O 类似。文件仍然用相同的接口开启和关闭,但对文件数据的访问则通过将数据直接映射到应用程序的地址空间来实现,绕过 kernel buffers。 Operation Functions Open open(), creat() Map mmap() Write memcpy(), memove(), read(), or any other routine that writes to application memory Sync msync() Unmap munmap() Close close() Caching Behavior 在 linux 系统中,打开一个文件的同时可以指定它的缓存行为,O_SYNC (O_DSYNC) 以及 O_DIRECT。如果开启 O_DIRECT,数据将直接绕过 kernel 的 page cache,直接写入到存储设备中。但存储设备可能会先将数据存入写回缓存中,因此如果要确保数据已经完成持久化,你仍然需要调用 fsync。O_DIRECT 选项只与 system I/O API 有关。 原始设备 (raw devices, /dev/raw/rawN) 是 O_DIRECT I/O 的一种特殊情况,这些设备默认实现了 O_DIRECT 的语义。 我们称开启 O_SYNC (O_DSYNC) 的 I/O,包括 system I/O 和 stream I/O,为 Synchronous I/O。在 POSIX 中,同步 (synchronous) 的模式有以下几种: • O_SYNC:文件数据及元数据都同步地写入持久化存储 • O_DSYNC:只有访问文件数据所需的数据及元数据被同步地写入持久化存储 • O_RSYNC:未实现 需要注意这里的表达,并不是所有的元数据和数据都会被同步写入,如 access time,creation time 以及 modification time 这种不影响访问文件的数据不需要被同步写入。 值得注意的是,当我们以 O_SYNC 或 O_DSYNC 模式打开文件,并将其交给 Stream I/O API 时,调用 fwrite 时写入的数据会进入到 C library 的缓存中,当我们调用 fflush 时,数据才会被写出到持久化存储设备,有了 O_SYNC/O_DSYNC 时,在 fflush 之后,我们不需要再调用 fsync,因为数据会被同步写入到持久化存储设备中。 When Should You fsync? 判断是否需要 fsync,最重要的就是问自己:这些数据是否需要立即持久化? • 无需立即持久化 • 临时数据 • 可被重新生成的数据 • 需要立即持久化 • 事务数据 • 更新用户的配置信息 Creating New Files 一个微妙的场景是,当你创建新的文件,你不仅需要 fsync 文件本身,还需要 fsync 文件夹,它的默认行为由文件系统来决定,你也可以在代码中确保这些 fsync 操作按自己的预期发生,同时提高代码的可移植性。 Overwriting Existing Files 另一个微妙的场景是,当你在覆盖某个文件时,系统发生故障,可能导致已有数据的丢失。避免这种问题的常见做法是:先将数据写入到一个临时文件中,保证它已经安全地持久化,然后将该临时文件重命名成原文件。这就保证了文件的原子更新操作,这样其它读者只可能读到旧文件或新文件,而不会读到一个中间状态的文件。 整个过程具体如下: 1. create a new temp file (on the same file system!) 2. write data to temp file 3. fsync() the temp file 4. rename the temp file to the appropriate name 5. fsync() the containing directory Checking For Errors 当执行 I/O 写操作时,数据会在应用地址空间和 kernel 地址空间中缓存,因此类似 write(), fflush() 调用只会被写入到缓存中,通常不会抛错。错误通常会在写入持久化设备时抛错,如 fsync(), msync() 以及 close() 等。因此,在这些调用的返回处检查错误是很有必要的。 Write-Back Caches 本小节介绍一些磁盘缓存的一般信息,以及操作系统对这些缓存的控制。这些内容不影响一般程序的构建,因此本小节的内容仅仅是为了让读者对这个话题有一个更深入的了解。 在持久化设备中的写回缓存 (write-back cache) 有很多种实现。我们在上面全文中的讨论都是以 volatile write-back cache 为基础,一旦发生断电,缓存数据全部丢失。然而,大部分存储设备可以被配置为 cache-less 或 write-through 模式,在这两种模式写,数据在未被安全持久化之前不会返回。一些外部存储设备阵列通常会支持 non-volatile 或 battery-backed write-cache,即使发生断电,缓存区的数据也不会丢失。 但这些在程序员看来都是透明的,对于程序员来说,最好直接假设存储设备中仅支持 volatile cache,防御地去编程。 参考
__label__pos
0.799771
Why Would a Flash Drive Be Recognized on One System But Not on Another? Computer settings and compatibility issues can prevent the system from recognizing flash drives. ... Jeffrey Hamilton/Digital Vision/Getty Images When a flash drive works on one system but doesn't on another computer or device, it is an indication that the device unable to read the drive may have a compatibility or settings conflict. The good news is that if one computer or device is able to read the flash drive, the drive is not broken and doesn't have corrupted data. However, you'll need to identify the problem with the system that cannot read the flash drive. 1 Using Compatible File Systems Storage devices like flash drives use filesystems to format and arrange saved data. A computer or similar device must be compatible with the file system in order to make sense of the stored data. Some file systems, like NTFS and HFS+, are proprietary formats for Windows and Mac OS devices. A Windows computer can't recognize a HFS+ formatted flash drive, while a Mac can't recognize a NTFS+ formatted flash drive. However, flash drives that are going to be used for cross-platform data transfers can be formatted in the ExFAT or FAT32 file systems. Changing a flash drive's file system format will erase all data on the device. 2 Disabling Plug and Play Devices The computer recognizes devices added to the system after the initial power-on and automatically configures them with a feature called plug and play. Disabling plug and play on the computer can prevent the system from recognizing external storage devices like flash drives when connected to the computer. Plug and play can be toggled in the CMOS and Device Manager. Plug and play is enabled by default feature, which means it would have to be manually disabled on a system to cause device detection problems. Rebooting the computer can help clear plug and play errors. 3 Working Around USB Hub Conflicts USB hubs can connect to up to 127 devices at any given time. However, it is highly unlikely that a computer would ever use that many devices. The 127 device limit assigns addresses to each connected device so the computer can tell devices apart; if the computer erroneously assigns two USB devices the same address, it can't tell them apart. If the computer will not recognize a flash drive, try rebooting the computer and reconnecting the devices. The addresses are re-assigned on restart. 4 Bus-powered USB Hub Problems The system may not be able to recognize the flash drive if the drive is connected to a bus-powered USB hub. Bus-powered USB hubs split available power between the ports; if the port doesn't have enough energy to operate a connected device, the device won't work. A bus-powered hub may reduce the power flow to less than 100 milliamps, which is insufficient to run flash drives. Try connecting the flash drive directly to one of the computer's USB ports or switching to a self-powered USB hub to access the device. Dan Stone started writing professionally in 2006, specializing in education, technology and music. He is a web developer for a communications company and previously worked in television. Stone received a Bachelor of Arts in journalism and a Master of Arts in communication studies from Northern Illinois University. ×
__label__pos
0.96448
How can I copy objects in the ITEXIA app? If you would like to use the app to add a new object that should be filled with similar or the same data as an object that has already been added, you can simply copy its values.   Especially when adding objects of which you have several of the same type, you can save a lot of time using the copy function. For example, you don't have to add each chair individually.   Note that this feature is only available on iOS devices. Note: • Do you work with Android? Although you cannot currently copy data in the Android app, you can save a lot of time using the 'Replace values' function in the WebApp and enter data for several objects at the same time. Here we'll show you how to do it. • 💡The copy feature is now also available in the web app for all customers with Advanced BETA license or higher. Learn more by dropping by here . This is how it works:   1. Open the app and log in 2. Tap the search field at the bottom of the screen   3. Give the new object a new scan code   4. Instead of entering all values manually, click on the objects icon in the head   5. Select the object whose properties you wish to copy   6. Confirm by clicking on Copy info
__label__pos
0.672126
What is the Importance of Computer Security? Computer keyword with combination lock on top Is your computer protected or vulnerable? Anybody who doesn’t know the answer to that question may be putting their personal information and data at risk without realizing it. There are many ways to protect a computer to minimize the risk that a hacker or thief can access the information there and most are easy to implement. Why is Computer Security Important? Whether you use a computer at work or at home—or both—proper security is a must to prevent anybody from accessing your personal data, work data, or identifying information, which they could potentially use to steal your identity, hack into your bank account, or even ruin your credit. Examples of the Importance of Computer Security According to SC Magazine, the average American has been impacted by at least seven data breaches since 2004. Over those seven breaches, people have lost an average of 21 data points, including passwords, emails, and usernames. The HIPAA Journal has reported that on March 8, 2022, Aesto Health, a health software company in Alabama, experienced a data breach that involved patient names, birthdates, and personal medical information. While the company immediately reported the breach and no social security numbers or financial data was lost, this incident illustrates the importance of network security. On a more personal level, Consumer Affairs reports that since 2019, there has been a 311% increase in the number of victims of identity theft. The article also notes that there has been a 14% increase in data breaches in the first quarter of 2022. With vulnerabilities on computers, mobile devices, and the Internet of Things–smart devices of every description–it’s no wonder that computer security is more important than ever. What Are Some Ways That People Can Be Hacked? In the early days of the internet, the biggest risk to users was that their password might be breached and someone might access their email. That’s still a risk, but there are far more ways for people to be hacked in 2022. Here are some examples: • Password breaches. As mentioned, password breaches are still a risk. Even people who use secure passwords may be vulnerable to hacking if a website they use experiences a data breach. The concerns are particularly high when personal data is involved. • Credit card fraud. Most Americans have received at least one notice that their credit card information is vulnerable after a data breach at a major retailer. Credit cards may be vulnerable at the point of sale, too. • Formjacking. Any time someone fills out a form online, there’s some risk that the information entered will be stolen, which is why it’s essential to ensure that a company is trustworthy before entering personal information. • Internet of Things. Anybody who has a digital assistant such as an Amazon Alexa, or a smart device such as a printer or refrigerator that’s linked to a computer, may be vulnerable to hacking. Medical equipment in doctors’ offices can also be hacked, including remote patient monitoring devices and diagnostic tools. • Outdated software. Whatever operating system a computer uses, there are always regular updates to install. Many of these updates are designed to patch security holes. Failure to update them can leave any computer or device vulnerable to hackers. This is by no means a comprehensive list, but it illustrates how easy it is to become vulnerable thanks to the many ways we are connected to one another via our computers and devices. A laptop screen with username and password login What is a Strong Password and Why Do You Need One? Many people make the mistake of choosing easily guessable passwords for their online accounts and devices. They may even use the same password, or variations of it, for multiple accounts. One of the quickest ways to improve computer security is to use strong passwords. A strong password is a password that has at least 8 characters, including one capital letter, one lowercase letter, one number, and one special character. Special characters are usually punctuation marks or symbols. You can make up a strong password or use a random password generator. As noted previously, it’s not recommended to use the same password for multiple accounts. The safest option is to use a different, strong password for each account and device. What is the Best Way to Keep Track of Your Passwords? Using strong passwords does have one disadvantage, which is that it makes it difficult to memorize passwords. What’s the best way to keep track of them? One option is to allow your computer and devices to remember passwords. For example, anybody who uses Google Chrome as their browser has seen the pop-up window that appears after choosing a new password. It’s fine to use your device’s memory if you’re the only one who uses it, but never allow a shared or public computer to remember your password. Another option is to use a password manager such as Dashlane, LastPass, or NordPass. These tools allow users to set a secure master password that allows them to access all stored passwords. Of course, it’s still essential to change passwords regularly to maximize your protection. Keep Your Computers and Devices Safe with Strong Passwords Whether you have one device or many, the importance of computer security is undeniable. You can protect yourself by installing all software updates as soon as they become available, choosing strong passwords, using two-factor authentication involving a fingerprint or code whenever possible, and not accessing your accounts on any unsecured networks. This entry was posted in Cybersecurity and tagged , . By Aimee Parrott
__label__pos
0.984382
Seeing the Wood for the Trees R apps time series Visualising small multiples when crime data leave you unable to see the wood for the trees Author Carl Goodwin Published January 1, 2019 Modified December 25, 2023 A small clump of trees with a "Little Wood" sign nailed to one of them. It's a dark starry night and a rabbit peers out at a thief tip-toeing away. In Criminal Goings-on faceting offered a way to get a sense of the data. This is a great visualisation tool building on the principle of small multiples. There may come a point though where the sheer volume of small multiples make it harder to “see the wood for the trees”. What’s an alternative strategy? This time I’ll use Van Gogh’s “The Starry Night” palette for the feature image and plots. And there are 12 types of criminal offence, so colorRampPalette will enable the interpolation of an extended set. theme_set(theme_bw()) (cols <- vangogh_palette("StarryNight")) cols12 <- colorRampPalette(cols)(12) The data need a little tidy-up. crime_df <- str_c( "https://data.london.gov.uk/download/recorded_crime_summary/", "934f2ddb-5804-4c6a-a17c-bdd79b33430e/", "MPS%20Borough%20Level%20Crime%20%28Historical%29.csv" ) |> read_csv(show_col_types = FALSE) |> clean_names() |> rename_with(\(x) str_remove_all(x, "_text|look_up_|_name")) |> pivot_longer(where(is.numeric), names_to = "month", values_to = "num_offences") |> mutate(month = parse_number(month) |> str_c("01") |> ymd()) The original visualisation in Criminal Goings-on using ggplot’s facet_wrap is a little tricky to digest, even when limited to major categories of crime. crime_df |> summarise(num_offences = sum(num_offences), .by = c(major, borough, month)) |> ggplot(aes(month, num_offences, colour = major, group = major)) + geom_line() + facet_wrap(~borough, scales = "free_y", ncol = 4) + labs( x = NULL, y = NULL, title = "London Crime by Borough", colour = "Offence", caption = "Source: data.gov.uk" ) + scale_colour_manual(values = cols12) + guides(colour = guide_legend(nrow = 3)) + theme( strip.background = element_rect(fill = cols[4]), legend.position = "bottom", axis.text.x = element_text(angle = 45, hjust = 1) ) + guides(col = guide_legend(ncol = 2)) This “little project” was first published using trelliscopejs which offered a really nice alternative approach to the static facet_wrap. This has been recently reimagined by the superior and easier-to-use trelliscope package. I’ve updated this post to use the “latest and greatest”. Click top-right to pop the display out full screen. Over 1,700 time series plots may be interactively filtered and sorted (for every combination of borough, major/minor category of crime) using summary statistics such as the steepness of the linear trend line. panels_df <- crime_df |> mutate(major = str_wrap(major, 16)) |> ggplot(aes(month, num_offences)) + geom_line(show.legend = FALSE) + geom_smooth(method = "lm", se = FALSE, colour = cols[5]) + facet_panels(vars(borough, major, minor), scales = "free") + labs(colour = NULL, x = NULL, y = "Offence Count") slope <- \(x, y) coef(lm(y ~ x))[2] summary_df <- crime_df |> summarise( mean_count = mean(num_offences), slope = slope(month, num_offences), .by = c(borough, major, minor)) panels_df |> as_panels_df(as_plotly = TRUE) |> as_trelliscope_df( name = "Crime in 'The Smoke'", description = str_c( "Timeseries of offences by category ", "across London's 33 boroughs sourced from data.gov.uk." ) ) |> left_join(summary_df, join_by(borough, major, minor)) |> set_var_labels( major = "Major Category of Offence", minor = "Minor Category of Offence", mean_count = "Average Offences by Borough & Offence Category", slope = "Steepness of a Linear Trendline" ) |> set_default_sort(c("slope"), dirs = "desc") |> set_tags( stats = c("mean_count", "slope"), info = c("borough", "major", "minor") ) |> set_theme( primary = cols[1], dark = cols[1], light = cols[5], light_text_on_dark = TRUE, dark_text = cols[1], light_text = cols[4], header_background = cols[2], header_text = NULL ) |> view_trelliscope() R Toolbox Summarising below the packages and functions used in this post enables me to separately create a toolbox visualisation summarising the usage of packages and functions across all posts. Package Function base c[5], library[6], mean[1], sum[1] conflicted conflict_prefer_all[1], conflict_scout[1] dplyr join_by[1], left_join[1], mutate[2], rename_with[1], summarise[2], vars[1] ggplot2 aes[2], element_rect[1], element_text[1], facet_wrap[1], geom_line[2], geom_smooth[1], ggplot[2], guide_legend[2], guides[2], labs[2], scale_colour_manual[1], theme[1], theme_bw[1], theme_set[1] grDevices colorRampPalette[1] janitor clean_names[1] lubridate ymd[1] readr parse_number[1], read_csv[1] stats coef[1], lm[1] stringr str_c[3], str_remove_all[1], str_wrap[1] tidyr pivot_longer[1] tidyselect where[1] trelliscope as_panels_df[1], as_trelliscope_df[1], facet_panels[1], set_default_sort[1], set_tags[1], set_theme[1], set_var_labels[1], view_trelliscope[1] usedthese used_here[1] vangogh vangogh_palette[1]
__label__pos
0.964894
Código Java – Centrar un JFrame en Pantalla Este es el Ejemplo #08 del Topic: Programación Gráfica en Java. El siguiente ejemplo presenta dos funciones, uno para centrar un JFrame en la pantalla, ésta no requiere necesariamente de un formulario padre y el otro para centrar en JInternalFrame, el cual si requiere de un formulario padre. ... private void setCentrarJInternalFrame(JInternalFrame jifrm) { jifrm.setLocation(jifrm.getParent().getWidth()/2 - jifrm.getWidth()/2 ,jifrm.getParent().getHeight()/2 - jifrm.getHeight()/2 - 20); } private void setCentrarJFrame(JFrame jfrm) { jfrm.setLocationRelativeTo(null); } ... Código de Ejemplo: package beastieux.gui; import java.awt.Dialog; import javax.swing.JFrame; import javax.swing.JInternalFrame; /** * * @author beastieux */ public class Ejm08_CentrarFormulario extends JFrame{ public Ejm08_CentrarFormulario() { this.setSize(500, 200); setCentrarJFrame((JFrame)this); setDefaultCloseOperation(javax.swing.WindowConstants.DISPOSE_ON_CLOSE); } private void setCentrarJInternalFrame(JInternalFrame jifrm) { jifrm.setLocation(jifrm.getParent().getWidth()/2 - jifrm.getWidth()/2 ,jifrm.getParent().getHeight()/2 - jifrm.getHeight()/2 - 20); } private void setCentrarJFrame(JFrame jfrm) { jfrm.setLocationRelativeTo(null); } public static void main(String args[]) { Ejm08_CentrarFormulario obj = new Ejm08_CentrarFormulario(); obj.setVisible(true); } } Pueder ir al artículo principal: Códigos Sencillos hechos en Java Deja un comentario
__label__pos
0.713451
CSS color gradients for UI elements and backgrounds Tool A color gradient is a gradual blend between two or more colors. Usually, a gradient is defined by two user-defined colors, and the computer automatically calculate all colors in between. Color gradients can consist of two or more used-defined colors. In CSS code for websites, you can define linear and radial gradients. Normally a designer…
__label__pos
0.948196
PinAttributes: NIL as default value? hi, a question regarding plugin developement: is it possible to set a pin to a default value of NIL? something like that: [Output("Output", DefaultValue = NIL)](Output("Output", DefaultValue = NIL)) ISpread<double> FOutput; for now i’m doing it by setting SliceCount = 0 , but this feels a little clumsy… thanks hei motzi, setting SliceCount=0 is exactly what NIL is about. NIL is not a special value in slice 0. it actually means that there are 0 slices. yes, i am aware of that. the question is: can a slicecount of 0 be set via PinAttribute? no… no text … allright, thanks for the clarification! i’d find this a handy feature though :) just for documentation: @velcrome)) shows a way how to set an initial slicecount of zero in his ((contribution:teensy3.0octows2811-led-control contribution plugins, which does not clutter the evaluate method. by implementing the IPartImportsSatisfiedNotification interface and the OnImportsSatisfied method, you get a method that is run once at plugin initialisation. public class ValuetestNode : IPluginEvaluate, IPartImportsSatisfiedNotification { #region fields & pins [Input("Input", DefaultValue = 1.0)](Input("Input", DefaultValue = 1.0)) public ISpread<double> FInput; [Output("Output")](Output("Output")) public ISpread<double> FOutput; [Import()](Import()) public ILogger FLogger; #endregion fields & pins public void OnImportsSatisfied() { //start with an empty stream output FOutput.SliceCount = 0; } //called when data for any output pin is requested public void Evaluate(int SpreadMax) { ... } } thanks @velcrome!
__label__pos
0.997728
Demonstration of GET method using NodeJS and ExpressJS to find single record Rajnilari2015 Posted by Rajnilari2015 under Node.js category on | Points: 40 | Views : 282 In the below code, we will find an example that demonstrates the use of GET method using NodeJS and ExpressJS to find single record //add the express module var express = require('express'); //create an instance of express module var app = express(); //prepare the Employee data source/model var employees = [ { "EmployeeID" :1 , "EmployeeName" : "RNA Team", "Salary" : "200000", "Address" : "Bangalore" }, { "EmployeeID" :2 , "EmployeeName" : "Mahesh Samabesh", "Salary" : "100000", "Address" : "Hydrabad" }, { "EmployeeID" :3 , "EmployeeName" : "Rui Figo", "Salary" : "50000", "Address" : "Dallas" }, { "EmployeeID" :4 , "EmployeeName" : "Indradev Jana", "Salary" : "456789", "Address" : "Los Angles" }, { "EmployeeID" :5 , "EmployeeName" : "Suresh Shailesh", "Salary" : "1234567", "Address" : "Patna" } ]; //Get single Employee record app.get('/:EmployeeID', function (req, res) { var employeeID = req.params.EmployeeID; //Get Employee Records whose EmployeeID = get the EmployeeID at runtime var filteredEmployee = []; for(var i= 0; i< employees.length; i++){ if(employees[i].EmployeeID == employeeID){ filteredEmployee.push(employees[i]); } } //end Loop employees = filteredEmployee; res.send(employees); }); //run the server var server = app.listen(3000, function () { var host = server.address().address; var port = server.address().port; console.log('Server started and is listening at :> http://%s:%s', host, port); }); In the beginning, we have imported the needed modules and prepared the model data.Since express routes are based on HTTP verbs, so the app.get() method fetches the records from the URI specified.And finally the app starts a server and listens on port 3000 for connection. It will respond with the Employee Records for requests to the root URL (/) or route.We have passed the EmployeeID at runtime and filter the records.Internally, Express converts a route to regular expression. Comments or Responses Login to post response
__label__pos
0.977808
Validation Conditional Logic Validation Conditional Logic feature adds a possibility to dynamically apply different validation rules based on inputs/choices in other Options. This video will help to understand it better: Currently, this feature works only for Text Input Option. Example & tutorial of using validation conditional logic is shown here: https://moomoo.agency/create-product-like-on-blinds4udirect-co-uk-in-woocommerce-with-uni-cpo/ Validation attributes available The following section is for advanced users only! It is possible to use the following validation attributes: type -email | number | integer | digits | alphanum | url minlength - number maxlength - number min - number max - number range - example: [6, 10] mincheck - number maxcheck - number equalto - ID of html form field greaterorequalthan - ID of html form field greaterthan - ID of html form field lessorequalthan - ID of html form field lessthan - ID of html form field The attributes above should be prefixed by 'data-parsley-'. Examples: 1 data-parsley-type="email" Copied! 1 data-parsley-mincheck="3" Copied! 1 data-parsley-greaterthan="#uni_cpo_width-field" Copied! The mentioned validation rules are usable in any option. They can be put inside "Extra valdiation rules" setting. Basically, this setting will output any HTML attribute (for instance, you would need placeholder attribute - it can be added here as well ;) ). Last modified 9mo ago
__label__pos
0.502596
Skydive The Open Source Real-Time Network and Protocols Analyzer. Skydive is an open source real-time network topology and protocols analyzer. It aims to provide a comprehensive way of understanding what is happening in the network infrastructure. Skydive agents collect topology informations and flows and forward them to a central agent for further analysis. All the informations a stored in an Elasticsearch database. Skydive is SDN-agnostic but provides SDN drivers in order to enhance the topology and flows informations. Currently only the Neutron driver is provided but more drivers will come soon. [button size=large style=round color=red align=none url=https://github.com/redhat-cip/skydive ]Download [/button] NJ Ouchn "Passion is needed for any great work, and for the revolution, passion and audacity are required in big doses"
__label__pos
0.506623
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. How can I write a 1-bit bmp image in Matlab using imwrite or any other function. the default of imwrite for bmp is 8-bit. Thanks a lot :) share|improve this question add comment 2 Answers 2 up vote 1 down vote accepted According to the IMWRITE documentation: If the input array is of class logical, imwrite assumes the data is a binary image and writes it to the file with a bit depth of 1, if the format allows it. BMP, PNG, or TIFF formats accept binary images as input arrays. Therefore, if you convert your image data to a logical matrix before giving it to IMWRITE, you should be able to create a 1-bit BMP image: imwrite(logical(imageData),'image.bmp'); share|improve this answer add comment You have to convert the image to logical (i.e. 1-bit) before the call to imwrite. %# assuming the image is stored in a variable 'img' imwrite(logical(img),'test.bmp','bmp') share|improve this answer      Thank you. Your answer is true. However, I can not mark two answers as my accepted answer. Thanks a lot. –  Shadi Aug 11 '10 at 15:35      @Shadi: You're welcome. –  Jonas Aug 11 '10 at 15:38 add comment Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.55519
/[gentoo-x86]/eclass/kernel-2.eclass Gentoo Diff of /eclass/kernel-2.eclass Parent Directory Parent Directory | Revision Log Revision Log | View Patch Patch Revision 1.108 Revision 1.258 1# Copyright 1999-2005 Gentoo Foundation 1# Copyright 1999-2011 Gentoo Foundation 2# Distributed under the terms of the GNU General Public License v2 2# Distributed under the terms of the GNU General Public License v2 3# $Header: /var/cvsroot/gentoo-x86/eclass/kernel-2.eclass,v 1.108 2005/03/08 21:40:45 johnm Exp $ 3# $Header: /var/cvsroot/gentoo-x86/eclass/kernel-2.eclass,v 1.258 2011/08/18 14:58:57 vapier Exp $ 4 4 5# Description: kernel.eclass rewrite for a clean base regarding the 2.6 5# Description: kernel.eclass rewrite for a clean base regarding the 2.6 6# series of kernel with back-compatibility for 2.4 6# series of kernel with back-compatibility for 2.4 7# 7# 8# Maintainer: John Mylchreest <[email protected]> 8# Original author: John Mylchreest <[email protected]> 9# Copyright 2005 Gentoo Linux 9# Maintainer: [email protected] 10# 10# 11# Please direct your bugs to the current eclass maintainer :) 11# Please direct your bugs to the current eclass maintainer :) 12 12 13# added functionality: 13# added functionality: 14# unipatch - a flexible, singular method to extract, add and remove patches. 14# unipatch - a flexible, singular method to extract, add and remove patches. 22# EXTRAVERSION would be something like : -wolk-4.19-r1 22# EXTRAVERSION would be something like : -wolk-4.19-r1 23# K_NOSETEXTRAVERSION - if this is set then EXTRAVERSION will not be 23# K_NOSETEXTRAVERSION - if this is set then EXTRAVERSION will not be 24# automatically set within the kernel Makefile 24# automatically set within the kernel Makefile 25# K_NOUSENAME - if this is set then EXTRAVERSION will not include the 25# K_NOUSENAME - if this is set then EXTRAVERSION will not include the 26# first part of ${PN} in EXTRAVERSION 26# first part of ${PN} in EXTRAVERSION 27# K_NOUSEPR - if this is set then EXTRAVERSION will not include the 28# anything based on ${PR}. 27# K_PREPATCHED - if the patchset is prepatched (ie: mm-sources, 29# K_PREPATCHED - if the patchset is prepatched (ie: mm-sources, 28# ck-sources, ac-sources) it will use PR (ie: -r5) as 30# ck-sources, ac-sources) it will use PR (ie: -r5) as 29# the patchset version for 31# the patchset version for 30# and not use it as a true package revision 32# and not use it as a true package revision 31# K_EXTRAEINFO - this is a new-line seperated list of einfo displays in 33# K_EXTRAEINFO - this is a new-line seperated list of einfo displays in 32# postinst and can be used to carry additional postinst 34# postinst and can be used to carry additional postinst 33# messages 35# messages 36# K_EXTRAELOG - same as K_EXTRAEINFO except using elog instead of einfo 34# K_EXTRAEWARN - same as K_EXTRAEINFO except ewarn's instead of einfo's 37# K_EXTRAEWARN - same as K_EXTRAEINFO except using ewarn instead of einfo 35# K_SYMLINK - if this is set, then forcably create symlink anyway 38# K_SYMLINK - if this is set, then forcably create symlink anyway 36# 39# 37# K_DEFCONFIG - Allow specifying a different defconfig target. 40# K_DEFCONFIG - Allow specifying a different defconfig target. 38# If length zero, defaults to "defconfig". 41# If length zero, defaults to "defconfig". 39 42# K_WANT_GENPATCHES - Apply genpatches to kernel source. Provide any 43# combination of "base" and "extras" 44# K_GENPATCHES_VER - The version of the genpatches tarball(s) to apply. 45# A value of "5" would apply genpatches-2.6.12-5 to 46# my-sources-2.6.12.ebuild 47# K_SECURITY_UNSUPPORTED- If set, this kernel is unsupported by Gentoo Security 48# K_DEBLOB_AVAILABLE - A value of "0" will disable all of the optional deblob 49# code. If empty, will be set to "1" if deblobbing is 50# possible. Test ONLY for "1". 51# K_PREDEBLOBBED - This kernel was already deblobbed elsewhere. 52# If false, either optional deblobbing will be available 53# or the license will note the inclusion of freedist 54# code. 55# K_LONGTERM - If set, the eclass will search for the kernel source 56# in the long term directories on the upstream servers 57# as the location has been changed by upstream 40# H_SUPPORTEDARCH - this should be a space separated list of ARCH's which 58# H_SUPPORTEDARCH - this should be a space separated list of ARCH's which 41# can be supported by the headers ebuild 59# can be supported by the headers ebuild 42 60 43# UNIPATCH_LIST - space delimetered list of patches to be applied to the 61# UNIPATCH_LIST - space delimetered list of patches to be applied to the 44# kernel 62# kernel 49# UNIPATCH_DOCS - space delimemeted list of docs to be installed to 67# UNIPATCH_DOCS - space delimemeted list of docs to be installed to 50# the doc dir 68# the doc dir 51# UNIPATCH_STRICTORDER - if this is set places patches into directories of 69# UNIPATCH_STRICTORDER - if this is set places patches into directories of 52# order, so they are applied in the order passed 70# order, so they are applied in the order passed 53 71 54inherit toolchain-funcs versionator multilib 72inherit eutils toolchain-funcs versionator multilib 55ECLASS="kernel-2" 73EXPORT_FUNCTIONS pkg_setup src_unpack src_compile src_test src_install pkg_preinst pkg_postinst pkg_postrm 56INHERITED="$INHERITED $ECLASS" 74 57EXPORT_FUNCTIONS pkg_setup src_unpack src_compile src_install \ 75# Added by Daniel Ostrow <[email protected]> 58 pkg_preinst pkg_postinst 76# This is an ugly hack to get around an issue with a 32-bit userland on ppc64. 77# I will remove it when I come up with something more reasonable. 78[[ ${PROFILE_ARCH} == "ppc64" ]] && CHOST="powerpc64-${CHOST#*-}" 59 79 60export CTARGET=${CTARGET:-${CHOST}} 80export CTARGET=${CTARGET:-${CHOST}} 61if [[ ${CTARGET} == ${CHOST} && ${CATEGORY/cross-} != ${CATEGORY} ]]; then 81if [[ ${CTARGET} == ${CHOST} && ${CATEGORY/cross-} != ${CATEGORY} ]]; then 62 export CTARGET=${CATEGORY/cross-} 82 export CTARGET=${CATEGORY/cross-} 63fi 83fi 64 84 65HOMEPAGE="http://www.kernel.org/ http://www.gentoo.org/" 85HOMEPAGE="http://www.kernel.org/ http://www.gentoo.org/ ${HOMEPAGE}" 86[[ -z ${LICENSE} ]] && \ 66LICENSE="GPL-2" 87 LICENSE="GPL-2" 88 89# This is the latest KV_PATCH of the deblob tool available from the 90# libre-sources upstream. If you bump this, you MUST regenerate the Manifests 91# for ALL kernel-2 consumer packages where deblob is available. 92[[ -z ${DEBLOB_MAX_VERSION} ]] && DEBLOB_MAX_VERSION=38 93 94# No need to run scanelf/strip on kernel sources/headers (bug #134453). 95RESTRICT="binchecks strip" 67 96 68# set LINUX_HOSTCFLAGS if not already set 97# set LINUX_HOSTCFLAGS if not already set 69[ -z "$LINUX_HOSTCFLAGS" ] && \ 98[[ -z ${LINUX_HOSTCFLAGS} ]] && \ 70 LINUX_HOSTCFLAGS="-Wall -Wstrict-prototypes -Os -fomit-frame-pointer -I${S}/include" 99 LINUX_HOSTCFLAGS="-Wall -Wstrict-prototypes -Os -fomit-frame-pointer -I${S}/include" 100 101# debugging functions 102#============================================================== 103# this function exists only to help debug kernel-2.eclass 104# if you are adding new functionality in, put a call to it 105# at the start of src_unpack, or during SRC_URI/dep generation. 106debug-print-kernel2-variables() { 107 for v in PVR CKV OKV KV KV_FULL KV_MAJOR KV_MINOR KV_PATCH RELEASETYPE \ 108 RELEASE UNIPATCH_LIST_DEFAULT UNIPATCH_LIST_GENPATCHES \ 109 UNIPATCH_LIST S KERNEL_URI ; do 110 debug-print "${v}: ${!v}" 111 done 112} 71 113 72#Eclass functions only from here onwards ... 114#Eclass functions only from here onwards ... 73#============================================================== 115#============================================================== 116handle_genpatches() { 117 local tarball 118 [[ -z ${K_WANT_GENPATCHES} || -z ${K_GENPATCHES_VER} ]] && return 1 119 120 debug-print "Inside handle_genpatches" 121 local oldifs=${IFS} 122 export IFS="." 123 local OKV_ARRAY=( $OKV ) 124 export IFS=${oldifs} 125 126 # for > 3.0 kernels, handle genpatches tarball name 127 # genpatches for 3.0 and 3.0.1 might be named 128 # genpatches-3.0-1.base.tar.bz2 and genpatches-3.0-2.base.tar.bz2 129 # respectively. Handle this. 130 131 for i in ${K_WANT_GENPATCHES} ; do 132 if [[ ${KV_MAJOR} -ge 3 ]]; then 133 if [[ ${#OKV_ARRAY[@]} -ge 3 ]]; then 134 tarball="genpatches-${KV_MAJOR}.${KV_MINOR}-${K_GENPATCHES_VER}.${i}.tar.bz2" 135 else 136 tarball="genpatches-${KV_MAJOR}.${KV_PATCH}-${K_GENPATCHES_VER}.${i}.tar.bz2" 137 fi 138 else 139 tarball="genpatches-${OKV}-${K_GENPATCHES_VER}.${i}.tar.bz2" 140 fi 141 debug-print "genpatches tarball: $tarball" 142 GENPATCHES_URI="${GENPATCHES_URI} mirror://gentoo/${tarball}" 143 UNIPATCH_LIST_GENPATCHES="${UNIPATCH_LIST_GENPATCHES} ${DISTDIR}/${tarball}" 144 done 145} 146 147detect_version() { 148 # this function will detect and set 149 # - OKV: Original Kernel Version (2.6.0/2.6.0-test11) 150 # - KV: Kernel Version (2.6.0-gentoo/2.6.0-test11-gentoo-r1) 151 # - EXTRAVERSION: The additional version appended to OKV (-gentoo/-gentoo-r1) 152 153 if [[ -n ${KV_FULL} ]]; then 154 # we will set this for backwards compatibility. 155 KV=${KV_FULL} 156 157 # we know KV_FULL so lets stop here. but not without resetting S 158 S=${WORKDIR}/linux-${KV_FULL} 159 return 160 fi 161 162 # CKV is used as a comparison kernel version, which is used when 163 # PV doesnt reflect the genuine kernel version. 164 # this gets set to the portage style versioning. ie: 165 # CKV=2.6.11_rc4 166 CKV=${CKV:-${PV}} 167 OKV=${OKV:-${CKV}} 168 OKV=${OKV/_beta/-test} 169 OKV=${OKV/_rc/-rc} 170 OKV=${OKV/-r*} 171 OKV=${OKV/_p*} 172 173 KV_MAJOR=$(get_version_component_range 1 ${OKV}) 174 # handle if OKV is X.Y or X.Y.Z (e.g. 3.0 or 3.0.1) 175 local oldifs=${IFS} 176 export IFS="." 177 local OKV_ARRAY=( $OKV ) 178 export IFS=${oldifs} 179 180 # if KV_MAJOR >= 3, then we have no more KV_MINOR 181 #if [[ ${KV_MAJOR} -lt 3 ]]; then 182 if [[ ${#OKV_ARRAY[@]} -ge 3 ]]; then 183 KV_MINOR=$(get_version_component_range 2 ${OKV}) 184 KV_PATCH=$(get_version_component_range 3 ${OKV}) 185 if [[ ${KV_MAJOR}${KV_MINOR}${KV_PATCH} -ge 269 ]]; then 186 KV_EXTRA=$(get_version_component_range 4- ${OKV}) 187 KV_EXTRA=${KV_EXTRA/[-_]*} 188 else 189 KV_PATCH=$(get_version_component_range 3- ${OKV}) 190 fi 191 else 192 KV_PATCH=$(get_version_component_range 2 ${OKV}) 193 KV_EXTRA=$(get_version_component_range 3- ${OKV}) 194 KV_EXTRA=${KV_EXTRA/[-_]*} 195 fi 196 197 debug-print "KV_EXTRA is ${KV_EXTRA}" 198 199 KV_PATCH=${KV_PATCH/[-_]*} 200 201 local v n=0 missing 202 #if [[ ${KV_MAJOR} -lt 3 ]]; then 203 if [[ ${#OKV_ARRAY[@]} -ge 3 ]]; then 204 for v in CKV OKV KV_{MAJOR,MINOR,PATCH} ; do 205 [[ -z ${!v} ]] && n=1 && missing="${missing}${v} "; 206 done 207 else 208 for v in CKV OKV KV_{MAJOR,PATCH} ; do 209 [[ -z ${!v} ]] && n=1 && missing="${missing}${v} "; 210 done 211 fi 212 213 [[ $n -eq 1 ]] && \ 214 eerror "Missing variables: ${missing}" && \ 215 die "Failed to extract kernel version (try explicit CKV in ebuild)!" 216 unset v n missing 217 218# if [[ ${KV_MAJOR} -ge 3 ]]; then 219 if [[ ${#OKV_ARRAY[@]} -lt 3 ]]; then 220 KV_PATCH_ARR=(${KV_PATCH//\./ }) 221 222 # at this point 080811, Linus is putting 3.1 kernels in 3.0 directory 223 # revisit when 3.1 is released 224 if [[ ${KV_PATCH} -gt 0 ]]; then 225 KERNEL_BASE_URI="mirror://kernel/linux/kernel/v${KV_MAJOR}.$((${KV_PATCH_ARR} - 1))" 226 else 227 KERNEL_BASE_URI="mirror://kernel/linux/kernel/v${KV_MAJOR}.${KV_PATCH_ARR}" 228 fi 229 # KERNEL_BASE_URI="mirror://kernel/linux/kernel/v${KV_MAJOR}.${KV_PATCH_ARR}" 230 [[ -n "${K_LONGTERM}" ]] && 231 KERNEL_BASE_URI="${KERNEL_BASE_URI}/longterm/v${KV_MAJOR}.${KV_PATCH_ARR}" 232 else 233 #KERNEL_BASE_URI="mirror://kernel/linux/kernel/v${KV_MAJOR}.0" 234 KERNEL_BASE_URI="mirror://kernel/linux/kernel/v${KV_MAJOR}.${KV_MINOR}" 235 [[ -n "${K_LONGTERM}" ]] && 236 KERNEL_BASE_URI="${KERNEL_BASE_URI}/longterm/v${KV_MAJOR}.${KV_MINOR}.${KV_PATCH}" 237 fi 238 239 debug-print "KERNEL_BASE_URI is ${KERNEL_BASE_URI}" 240 241 if [[ ${#OKV_ARRAY[@]} -ge 3 ]] && [[ ${KV_MAJOR} -ge 3 ]]; then 242 # handle vanilla-sources-3.x.y correctly 243 if [[ ${PN/-*} == "vanilla" && ${KV_PATCH} -gt 0 ]]; then 244 KERNEL_URI="${KERNEL_BASE_URI}/patch-${OKV}.bz2" 245 UNIPATCH_LIST_DEFAULT="${DISTDIR}/patch-${CKV}.bz2" 246 fi 247 KERNEL_URI="${KERNEL_URI} ${KERNEL_BASE_URI}/linux-${KV_MAJOR}.${KV_MINOR}.tar.bz2" 248 else 249 KERNEL_URI="${KERNEL_BASE_URI}/linux-${OKV}.tar.bz2" 250 fi 251 252 RELEASE=${CKV/${OKV}} 253 RELEASE=${RELEASE/_beta} 254 RELEASE=${RELEASE/_rc/-rc} 255 RELEASE=${RELEASE/_pre/-pre} 256 # We cannot trivally call kernel_is here, because it calls us to detect the 257 # version 258 #kernel_is ge 2 6 && RELEASE=${RELEASE/-pre/-git} 259 [ $(($KV_MAJOR * 1000 + ${KV_MINOR:-0})) -ge 2006 ] && RELEASE=${RELEASE/-pre/-git} 260 RELEASETYPE=${RELEASE//[0-9]} 261 262 # Now we know that RELEASE is the -rc/-git 263 # and RELEASETYPE is the same but with its numerics stripped 264 # we can work on better sorting EXTRAVERSION. 265 # first of all, we add the release 266 EXTRAVERSION="${RELEASE}" 267 debug-print "0 EXTRAVERSION:${EXTRAVERSION}" 268 [[ -n ${KV_EXTRA} ]] && [[ ${KV_MAJOR} -lt 3 ]] && EXTRAVERSION=".${KV_EXTRA}${EXTRAVERSION}" 269 270 debug-print "1 EXTRAVERSION:${EXTRAVERSION}" 271 if [[ -n "${K_NOUSEPR}" ]]; then 272 # Don't add anything based on PR to EXTRAVERSION 273 debug-print "1.0 EXTRAVERSION:${EXTRAVERSION}" 274 elif [[ -n ${K_PREPATCHED} ]]; then 275 debug-print "1.1 EXTRAVERSION:${EXTRAVERSION}" 276 EXTRAVERSION="${EXTRAVERSION}-${PN/-*}${PR/r}" 277 elif [[ "${ETYPE}" = "sources" ]]; then 278 debug-print "1.2 EXTRAVERSION:${EXTRAVERSION}" 279 # For some sources we want to use the PV in the extra version 280 # This is because upstream releases with a completely different 281 # versioning scheme. 282 case ${PN/-*} in 283 wolk) K_USEPV=1;; 284 vserver) K_USEPV=1;; 285 esac 286 287 [[ -z "${K_NOUSENAME}" ]] && EXTRAVERSION="${EXTRAVERSION}-${PN/-*}" 288 [[ -n "${K_USEPV}" ]] && EXTRAVERSION="${EXTRAVERSION}-${PV//_/-}" 289 [[ -n "${PR//r0}" ]] && EXTRAVERSION="${EXTRAVERSION}-${PR}" 290 fi 291 debug-print "2 EXTRAVERSION:${EXTRAVERSION}" 292 293 # The only messing around which should actually effect this is for KV_EXTRA 294 # since this has to limit OKV to MAJ.MIN.PAT and strip EXTRA off else 295 # KV_FULL evaluates to MAJ.MIN.PAT.EXT.EXT after EXTRAVERSION 296 297 if [[ -n ${KV_EXTRA} ]]; then 298 if [[ -n ${KV_MINOR} ]]; then 299 OKV="${KV_MAJOR}.${KV_MINOR}.${KV_PATCH}" 300 else 301 OKV="${KV_MAJOR}.${KV_PATCH}" 302 fi 303 KERNEL_URI="${KERNEL_BASE_URI}/patch-${CKV}.bz2 304 ${KERNEL_BASE_URI}/linux-${OKV}.tar.bz2" 305 UNIPATCH_LIST_DEFAULT="${DISTDIR}/patch-${CKV}.bz2" 306 fi 307 308 # We need to set this using OKV, but we need to set it before we do any 309 # messing around with OKV based on RELEASETYPE 310 KV_FULL=${OKV}${EXTRAVERSION} 311 312 # we will set this for backwards compatibility. 313 S=${WORKDIR}/linux-${KV_FULL} 314 KV=${KV_FULL} 315 316 # -rc-git pulls can be achieved by specifying CKV 317 # for example: 318 # CKV="2.6.11_rc3_pre2" 319 # will pull: 320 # linux-2.6.10.tar.bz2 & patch-2.6.11-rc3.bz2 & patch-2.6.11-rc3-git2.bz2 321 322 if [[ ${KV_MAJOR}${KV_MINOR} -eq 26 ]]; then 323 324 if [[ ${RELEASETYPE} == -rc ]] || [[ ${RELEASETYPE} == -pre ]]; then 325 OKV="${KV_MAJOR}.${KV_MINOR}.$((${KV_PATCH} - 1))" 326 KERNEL_URI="${KERNEL_BASE_URI}/testing/patch-${CKV//_/-}.bz2 327 ${KERNEL_BASE_URI}/linux-${OKV}.tar.bz2" 328 UNIPATCH_LIST_DEFAULT="${DISTDIR}/patch-${CKV//_/-}.bz2" 329 fi 330 331 if [[ ${RELEASETYPE} == -git ]]; then 332 KERNEL_URI="${KERNEL_BASE_URI}/snapshots/patch-${OKV}${RELEASE}.bz2 333 ${KERNEL_BASE_URI}/linux-${OKV}.tar.bz2" 334 UNIPATCH_LIST_DEFAULT="${DISTDIR}/patch-${OKV}${RELEASE}.bz2" 335 fi 336 337 if [[ ${RELEASETYPE} == -rc-git ]]; then 338 OKV="${KV_MAJOR}.${KV_MINOR}.$((${KV_PATCH} - 1))" 339 KERNEL_URI="${KERNEL_BASE_URI}/snapshots/patch-${KV_MAJOR}.${KV_MINOR}.${KV_PATCH}${RELEASE}.bz2 340 ${KERNEL_BASE_URI}/testing/patch-${KV_MAJOR}.${KV_MINOR}.${KV_PATCH}${RELEASE/-git*}.bz2 341 ${KERNEL_BASE_URI}/linux-${OKV}.tar.bz2" 342 343 UNIPATCH_LIST_DEFAULT="${DISTDIR}/patch-${KV_MAJOR}.${KV_MINOR}.${KV_PATCH}${RELEASE/-git*}.bz2 ${DISTDIR}/patch-${KV_MAJOR}.${KV_MINOR}.${KV_PATCH}${RELEASE}.bz2" 344 fi 345 else 346 if [[ ${RELEASETYPE} == -rc ]] || [[ ${RELEASETYPE} == -pre ]]; then 347 if [[ ${KV_MAJOR}${KV_PATCH} -eq 30 ]]; then 348 OKV="2.6.39" 349 else 350 KV_PATCH_ARR=(${KV_PATCH//\./ }) 351 OKV="${KV_MAJOR}.$((${KV_PATCH_ARR} - 1))" 352 fi 353 KERNEL_URI="${KERNEL_BASE_URI}/testing/patch-${CKV//_/-}.bz2 354 ${KERNEL_BASE_URI}/testing/linux-${OKV}.tar.bz2" 355 UNIPATCH_LIST_DEFAULT="${DISTDIR}/patch-${CKV//_/-}.bz2" 356 fi 357 358 if [[ ${RELEASETYPE} == -git ]]; then 359 KERNEL_URI="${KERNEL_BASE_URI}/snapshots/patch-${OKV}${RELEASE}.bz2 360 ${KERNEL_BASE_URI}/linux-${OKV}.tar.bz2" 361 UNIPATCH_LIST_DEFAULT="${DISTDIR}/patch-${OKV}${RELEASE}.bz2" 362 fi 363 364 if [[ ${RELEASETYPE} == -rc-git ]]; then 365 if [[ ${KV_MAJOR}${KV_PATCH} -eq 30 ]]; then 366 OKV="2.6.39" 367 else 368 KV_PATCH_ARR=(${KV_PATCH//\./ }) 369 OKV="${KV_MAJOR}.$((${KV_PATCH_ARR} - 1))" 370 fi 371 KERNEL_URI="${KERNEL_BASE_URI}/snapshots/patch-${KV_MAJOR}.${KV_PATCH}${RELEASE}.bz2 372 ${KERNEL_BASE_URI}/testing/patch-${KV_MAJOR}.${KV_PATCH}${RELEASE/-git*}.bz2 373 ${KERNEL_BASE_URI}/linux-${OKV}.tar.bz2" 374 375 UNIPATCH_LIST_DEFAULT="${DISTDIR}/patch-${KV_MAJOR}.${KV_PATCH}${RELEASE/-git*}.bz2 ${DISTDIR}/patch-${KV_MAJOR}.${KV_PATCH}${RELEASE}.bz2" 376 fi 377 378 379 fi 380 381 382 debug-print-kernel2-variables 383 384 handle_genpatches 385} 386 74kernel_is() { 387kernel_is() { 75 [[ -z ${OKV} ]] && return 1 388 # ALL of these should be set before we can safely continue this function. 389 # some of the sources have in the past had only one set. 390 local v n=0 391 for v in OKV KV_{MAJOR,MINOR,PATCH} ; do [[ -z ${!v} ]] && n=1 ; done 392 [[ $n -eq 1 ]] && detect_version 393 unset v n 394 395 # Now we can continue 76 local operator test value x=0 y=0 z=0 396 local operator test value x=0 y=0 z=0 77 397 78 case ${1} in 398 case ${1} in 79 lt) operator="-lt"; shift;; 399 lt) operator="-lt"; shift;; 80 gt) operator="-gt"; shift;; 400 gt) operator="-gt"; shift;; 86 406 87 for x in ${@}; do 407 for x in ${@}; do 88 for((y=0; y<$((3 - ${#x})); y++)); do value="${value}0"; done 408 for((y=0; y<$((3 - ${#x})); y++)); do value="${value}0"; done 89 value="${value}${x}" 409 value="${value}${x}" 90 z=$((${z} + 1)) 410 z=$((${z} + 1)) 91 411 92 case ${z} in 412 case ${z} in 93 1) for((y=0; y<$((3 - ${#KV_MAJOR})); y++)); do test="${test}0"; done; 413 1) for((y=0; y<$((3 - ${#KV_MAJOR})); y++)); do test="${test}0"; done; 94 test="${test}${KV_MAJOR}";; 414 test="${test}${KV_MAJOR}";; 95 2) for((y=0; y<$((3 - ${#KV_MINOR})); y++)); do test="${test}0"; done; 415 2) for((y=0; y<$((3 - ${#KV_MINOR})); y++)); do test="${test}0"; done; 96 test="${test}${KV_MINOR}";; 416 test="${test}${KV_MINOR}";; 101 done 421 done 102 422 103 [ ${test} ${operator} ${value} ] && return 0 || return 1 423 [ ${test} ${operator} ${value} ] && return 0 || return 1 104} 424} 105 425 106 107kernel_is_2_4() { 426kernel_is_2_4() { 108 kernel_is 2 4 427 kernel_is 2 4 109} 428} 110 429 111kernel_is_2_6() { 430kernel_is_2_6() { 112 kernel_is 2 6 || kernel_is 2 5 431 kernel_is 2 6 || kernel_is 2 5 113} 114 115kernel_header_destdir() { 116 [[ ${CTARGET} == ${CHOST} ]] \ 117 && echo /usr/include \ 118 || echo /usr/${CTARGET}/include 119} 432} 120 433 121# Capture the sources type and set DEPENDs 434# Capture the sources type and set DEPENDs 122if [[ ${ETYPE} == sources ]]; then 435if [[ ${ETYPE} == sources ]]; then 123 DEPEND="!build? ( sys-apps/sed 436 DEPEND="!build? ( sys-apps/sed 124 >=sys-devel/binutils-2.11.90.0.31 ) 437 >=sys-devel/binutils-2.11.90.0.31 )" 125 doc? ( app-text/docbook-sgml-utils )" 126 RDEPEND="${DEPEND} 127 !build? ( >=sys-libs/ncurses-5.2 438 RDEPEND="!build? ( >=sys-libs/ncurses-5.2 128 sys-devel/make )" 439 sys-devel/make )" 440 PDEPEND="!build? ( virtual/dev-manager )" 129 441 130 PROVIDE="virtual/linux-sources" 131 kernel_is gt 2 4 && PROVIDE="${PROVIDE} virtual/alsa" 132 SLOT="${PVR}" 442 SLOT="${PVR}" 133 DESCRIPTION="Sources for the Linux kernel" 443 DESCRIPTION="Sources for the ${KV_MAJOR}.${KV_MINOR:-$KV_PATCH} linux kernel" 134 IUSE="${IUSE} symlink build doc" 444 IUSE="symlink build" 445 446 # Bug #266157, deblob for libre support 447 if [[ -z ${K_PREDEBLOBBED} ]] ; then 448 # Bug #359865, force a call to detect_version if needed 449 kernel_is ge 2 6 27 && \ 450 [[ -z "${K_DEBLOB_AVAILABLE}" ]] && \ 451 kernel_is le 2 6 ${DEBLOB_MAX_VERSION} && \ 452 K_DEBLOB_AVAILABLE=1 453 if [[ ${K_DEBLOB_AVAILABLE} == "1" ]] ; then 454 IUSE="${IUSE} deblob" 455 # Reflect that kernels contain firmware blobs unless otherwise 456 # stripped 457 LICENSE="${LICENSE} !deblob? ( freedist )" 458 459 if [[ -n KV_MINOR ]]; then 460 DEBLOB_PV="${KV_MAJOR}.${KV_MINOR}.${KV_PATCH}" 461 else 462 DEBLOB_PV="${KV_MAJOR}.${KV_PATCH}" 463 fi 464 465 if [[ ${KV_MAJOR} -ge 3 ]]; then 466 DEBLOB_PV="${KV_MAJOR}.${KV_MINOR}" 467 fi 468 469 DEBLOB_A="deblob-${DEBLOB_PV}" 470 DEBLOB_CHECK_A="deblob-check-${DEBLOB_PV}" 471 DEBLOB_HOMEPAGE="http://www.fsfla.org/svnwiki/selibre/linux-libre/" 472 DEBLOB_URI_PATH="download/releases/LATEST-${DEBLOB_PV}.N" 473 if ! has "${EAPI:-0}" 0 1 ; then 474 DEBLOB_CHECK_URI="${DEBLOB_HOMEPAGE}/${DEBLOB_URI_PATH}/deblob-check -> ${DEBLOB_CHECK_A}" 475 else 476 DEBLOB_CHECK_URI="mirror://gentoo/${DEBLOB_CHECK_A}" 477 fi 478 DEBLOB_URI="${DEBLOB_HOMEPAGE}/${DEBLOB_URI_PATH}/${DEBLOB_A}" 479 HOMEPAGE="${HOMEPAGE} ${DEBLOB_HOMEPAGE}" 480 481 KERNEL_URI="${KERNEL_URI} 482 deblob? ( 483 ${DEBLOB_URI} 484 ${DEBLOB_CHECK_URI} 485 )" 486 else 487 # We have no way to deblob older kernels, so just mark them as 488 # tainted with non-libre materials. 489 LICENSE="${LICENSE} freedist" 490 fi 491 fi 492 135elif [[ ${ETYPE} == headers ]]; then 493elif [[ ${ETYPE} == headers ]]; then 136 DESCRIPTION="Linux system headers" 494 DESCRIPTION="Linux system headers" 137 IUSE="${IUSE}" 495 138 496 # Since we should NOT honour KBUILD_OUTPUT in headers 497 # lets unset it here. 498 unset KBUILD_OUTPUT 499 139 if [[ ${CTARGET} = ${CHOST} ]]; then 500 if [[ ${CTARGET} = ${CHOST} ]]; then 140 DEPEND="!virtual/os-headers" 141 PROVIDE="virtual/kernel virtual/os-headers" 142 SLOT="0" 501 SLOT="0" 143 else 502 else 144 SLOT="${CTARGET}" 503 SLOT="${CTARGET}" 145 fi 504 fi 146else 505else 147 eerror "Unknown ETYPE=\"${ETYPE}\", must be \"sources\" or \"headers\"" 506 eerror "Unknown ETYPE=\"${ETYPE}\", must be \"sources\" or \"headers\"" 148 die "Unknown ETYPE=\"${ETYPE}\", must be \"sources\" or \"headers\"" 507 die "Unknown ETYPE=\"${ETYPE}\", must be \"sources\" or \"headers\"" 149fi 508fi 150 509 510# Cross-compile support functions 511#============================================================== 512kernel_header_destdir() { 513 [[ ${CTARGET} == ${CHOST} ]] \ 514 && echo /usr/include \ 515 || echo /usr/${CTARGET}/usr/include 516} 517 518cross_pre_c_headers() { 519 use crosscompile_opts_headers-only && [[ ${CHOST} != ${CTARGET} ]] 520} 521 522env_setup_xmakeopts() { 523 # Kernel ARCH != portage ARCH 524 export KARCH=$(tc-arch-kernel) 525 526 # When cross-compiling, we need to set the ARCH/CROSS_COMPILE 527 # variables properly or bad things happen ! 528 xmakeopts="ARCH=${KARCH}" 529 if [[ ${CTARGET} != ${CHOST} ]] && ! cross_pre_c_headers ; then 530 xmakeopts="${xmakeopts} CROSS_COMPILE=${CTARGET}-" 531 elif type -p ${CHOST}-ar > /dev/null ; then 532 xmakeopts="${xmakeopts} CROSS_COMPILE=${CHOST}-" 533 fi 534 export xmakeopts 535} 536 151# Unpack functions 537# Unpack functions 152#============================================================== 538#============================================================== 153unpack_2_4() { 539unpack_2_4() { 154 cd ${S} 155 # this file is required for other things to build properly, 540 # this file is required for other things to build properly, 156 # so we autogenerate it 541 # so we autogenerate it 157 make mrproper || die "make mrproper died" 542 make -s mrproper ${xmakeopts} || die "make mrproper failed" 543 make -s symlinks ${xmakeopts} || die "make symlinks failed" 158 make include/linux/version.h || die "make include/linux/version.h failed" 544 make -s include/linux/version.h ${xmakeopts} || die "make include/linux/version.h failed" 159 echo ">>> version.h compiled successfully." 545 echo ">>> version.h compiled successfully." 160} 546} 161 547 548unpack_2_6() { 549 # this file is required for other things to build properly, so we 550 # autogenerate it ... generate a .config to keep version.h build from 551 # spitting out an annoying warning 552 make -s mrproper ${xmakeopts} 2>/dev/null \ 553 || die "make mrproper failed" 554 555 # quick fix for bug #132152 which triggers when it cannot include linux 556 # headers (ie, we have not installed it yet) 557 if ! make -s defconfig ${xmakeopts} &>/dev/null 2>&1 ; then 558 touch .config 559 eerror "make defconfig failed." 560 eerror "assuming you dont have any headers installed yet and continuing" 561 epause 5 562 fi 563 564 make -s include/linux/version.h ${xmakeopts} 2>/dev/null \ 565 || die "make include/linux/version.h failed" 566 rm -f .config >/dev/null 567} 568 162universal_unpack() { 569universal_unpack() { 570 debug-print "Inside universal_unpack" 571 572 local oldifs=${IFS} 573 export IFS="." 574 local OKV_ARRAY=( $OKV ) 575 export IFS=${oldifs} 576 163 cd ${WORKDIR} 577 cd "${WORKDIR}" 578 if [[ ${#OKV_ARRAY[@]} -ge 3 ]] && [[ ${KV_MAJOR} -ge 3 ]]; then 579 unpack linux-${KV_MAJOR}.${KV_MINOR}.tar.bz2 580 else 164 unpack linux-${OKV}.tar.bz2 581 unpack linux-${OKV}.tar.bz2 165 if [[ "${OKV}" != "${KV_FULL}" ]]; then 582 fi 583 584 if [[ -d "linux" ]]; then 585 debug-print "Moving linux to linux-${KV_FULL}" 166 mv linux-${OKV} linux-${KV_FULL} \ 586 mv linux linux-${KV_FULL} \ 167 || die "Unable to move source tree to ${KV_FULL}." 587 || die "Unable to move source tree to ${KV_FULL}." 588 elif [[ "${OKV}" != "${KV_FULL}" ]]; then 589 if [[ ${#OKV_ARRAY[@]} -ge 3 ]] && [[ ${KV_MAJOR} -ge 3 ]] && 590 [[ "${ETYPE}" = "sources" ]]; then 591 debug-print "moving linux-${KV_MAJOR}.${KV_MINOR} to linux-${KV_FULL} " 592 mv linux-${KV_MAJOR}.${KV_MINOR} linux-${KV_FULL} \ 593 || die "Unable to move source tree to ${KV_FULL}." 594 else 595 debug-print "moving linux-${OKV} to linux-${KV_FULL} " 596 mv linux-${OKV} linux-${KV_FULL} \ 597 || die "Unable to move source tree to ${KV_FULL}." 168 fi 598 fi 599 elif [[ ${#OKV_ARRAY[@]} -ge 3 ]] && [[ ${KV_MAJOR} -ge 3 ]]; then 600 mv linux-${KV_MAJOR}.${KV_MINOR} linux-${KV_FULL} \ 601 || die "Unable to move source tree to ${KV_FULL}." 602 fi 169 cd ${S} 603 cd "${S}" 170 604 171 # remove all backup files 605 # remove all backup files 172 find . -iname "*~" -exec rm {} \; 2> /dev/null 606 find . -iname "*~" -exec rm {} \; 2> /dev/null 173 607 174 # fix a problem on ppc where TOUT writes to /usr/src/linux breaking sandbox 608 # fix a problem on ppc where TOUT writes to /usr/src/linux breaking sandbox 175 use ppc && \ 609 # only do this for kernel < 2.6.27 since this file does not exist in later 610 # kernels 611 if [[ -n ${KV_MINOR} && ${KV_MAJOR}.${KV_MINOR}.${KV_PATCH} < 2.6.27 ]] 612 then 613 sed -i \ 176 sed -ie 's|TOUT := .tmp_gas_check|TOUT := $(T).tmp_gas_check|' \ 614 -e 's|TOUT := .tmp_gas_check|TOUT := $(T).tmp_gas_check|' \ 177 ${S}/arch/ppc/Makefile 615 "${S}"/arch/ppc/Makefile 616 else 617 sed -i \ 618 -e 's|TOUT := .tmp_gas_check|TOUT := $(T).tmp_gas_check|' \ 619 "${S}"/arch/powerpc/Makefile 620 fi 178} 621} 179 622 180unpack_set_extraversion() { 623unpack_set_extraversion() { 181 cd ${S} 624 cd "${S}" 182 sed -i -e "s:^\(EXTRAVERSION =\).*:\1 ${EXTRAVERSION}:" Makefile 625 sed -i -e "s:^\(EXTRAVERSION =\).*:\1 ${EXTRAVERSION}:" Makefile 183 cd ${OLDPWD} 626 cd "${OLDPWD}" 184} 627} 185 628 186# Should be done after patches have been applied 629# Should be done after patches have been applied 187# Otherwise patches that modify the same area of Makefile will fail 630# Otherwise patches that modify the same area of Makefile will fail 188unpack_fix_install_path() { 631unpack_fix_install_path() { 189 cd ${S} 632 cd "${S}" 190 sed -i -e 's:#export\tINSTALL_PATH:export\tINSTALL_PATH:' Makefile 633 sed -i -e 's:#export\tINSTALL_PATH:export\tINSTALL_PATH:' Makefile 191} 192 193unpack_fix_docbook() { 194 if [[ -d ${S}/Documentation/DocBook ]]; then 195 cd ${S}/Documentation/DocBook 196 sed -ie "s:db2:docbook2:g" Makefile 197 cd ${OLDPWD} 198 fi 199} 634} 200 635 201# Compile Functions 636# Compile Functions 202#============================================================== 637#============================================================== 203compile_headers() { 638compile_headers() { 639 env_setup_xmakeopts 640 204 # if we couldnt obtain HOSTCFLAGS from the Makefile, 641 # if we couldnt obtain HOSTCFLAGS from the Makefile, 205 # then set it to something sane 642 # then set it to something sane 206 local HOSTCFLAGS=$(getfilevar HOSTCFLAGS ${S}/Makefile) 643 local HOSTCFLAGS=$(getfilevar HOSTCFLAGS "${S}"/Makefile) 207 HOSTCFLAGS=${HOSTCFLAGS:--Wall -Wstrict-prototypes -O2 -fomit-frame-pointer} 644 HOSTCFLAGS=${HOSTCFLAGS:--Wall -Wstrict-prototypes -O2 -fomit-frame-pointer} 208 209 # Kernel ARCH != portage ARCH 210 local KARCH=$(tc-arch-kernel) 211 212 # When cross-compiling, we need to set the ARCH/CROSS_COMPILE 213 # variables properly or bad things happen ! 214 local xmakeopts="ARCH=${KARCH}" 215 if [[ ${CTARGET} != ${CHOST} ]]; then 216 xmakeopts="${xmakeopts} CROSS_COMPILE=${CTARGET}-" 217 elif type -p ${CHOST}-ar; then 218 xmakeopts="${xmakeopts} CROSS_COMPILE=${CHOST}-" 219 fi 220 645 221 if kernel_is 2 4; then 646 if kernel_is 2 4; then 222 yes "" | make oldconfig ${xmakeopts} 647 yes "" | make oldconfig ${xmakeopts} 223 echo ">>> make oldconfig complete" 648 echo ">>> make oldconfig complete" 224 use sparc && make dep ${xmakeopts} 649 make dep ${xmakeopts} 225 elif kernel_is 2 6; then 650 elif kernel_is 2 6; then 651 # 2.6.18 introduces headers_install which means we dont need any 652 # of this crap anymore :D 653 kernel_is ge 2 6 18 && return 0 654 226 # autoconf.h isnt generated unless it already exists. plus, we have 655 # autoconf.h isnt generated unless it already exists. plus, we have 227 # no guarantee that any headers are installed on the system... 656 # no guarantee that any headers are installed on the system... 228 [[ -f ${ROOT}/usr/include/linux/autoconf.h ]] \ 657 [[ -f ${ROOT}/usr/include/linux/autoconf.h ]] \ 229 || touch include/linux/autoconf.h 658 || touch include/linux/autoconf.h 230 659 231 # if K_DEFCONFIG isn't set, force to "defconfig" 660 # if K_DEFCONFIG isn't set, force to "defconfig" 232 # needed by mips 661 # needed by mips 233 if [[ -z ${K_DEFCONFIG} ]]; then 662 if [[ -z ${K_DEFCONFIG} ]]; then 663 if [[ $(KV_to_int ${KV}) -ge $(KV_to_int 2.6.16) ]]; then 664 case ${CTARGET} in 665 powerpc64*) K_DEFCONFIG="ppc64_defconfig";; 666 powerpc*) K_DEFCONFIG="pmac32_defconfig";; 667 *) K_DEFCONFIG="defconfig";; 668 esac 669 else 234 K_DEFCONFIG="defconfig" 670 K_DEFCONFIG="defconfig" 671 fi 235 fi 672 fi 236 673 237 # if there arent any installed headers, then there also isnt an asm 674 # if there arent any installed headers, then there also isnt an asm 238 # symlink in /usr/include/, and make defconfig will fail, so we have 675 # symlink in /usr/include/, and make defconfig will fail, so we have 239 # to force an include path with $S. 676 # to force an include path with $S. 240 HOSTCFLAGS="${HOSTCFLAGS} -I${S}/include/" 677 HOSTCFLAGS="${HOSTCFLAGS} -I${S}/include/" 241 ln -sf asm-${KARCH} "${S}"/include/asm 678 ln -sf asm-${KARCH} "${S}"/include/asm 679 cross_pre_c_headers && return 0 680 242 make ${K_DEFCONFIG} HOSTCFLAGS="${HOSTCFLAGS}" ${xmakeopts} || die "defconfig failed" 681 make ${K_DEFCONFIG} HOSTCFLAGS="${HOSTCFLAGS}" ${xmakeopts} || die "defconfig failed (${K_DEFCONFIG})" 682 if compile_headers_tweak_config ; then 683 yes "" | make oldconfig HOSTCFLAGS="${HOSTCFLAGS}" ${xmakeopts} || die "2nd oldconfig failed" 684 fi 243 make prepare HOSTCFLAGS="${HOSTCFLAGS}" ${xmakeopts} || die "prepare failed" 685 make prepare HOSTCFLAGS="${HOSTCFLAGS}" ${xmakeopts} || die "prepare failed" 244 make prepare-all HOSTCFLAGS="${HOSTCFLAGS}" ${xmakeopts} || die "prepare failed" 686 make prepare-all HOSTCFLAGS="${HOSTCFLAGS}" ${xmakeopts} || die "prepare failed" 245 fi 687 fi 246} 688} 247 689 248compile_manpages() { 690compile_headers_tweak_config() { 249 einfo "Making manpages ..." 691 # some targets can be very very picky, so let's finesse the 250 env -u ARCH make mandocs 692 # .config based upon any info we may have 693 case ${CTARGET} in 694 sh*) 695 sed -i '/CONFIG_CPU_SH/d' .config 696 echo "CONFIG_CPU_SH${CTARGET:2:1}=y" >> .config 697 return 0;; 698 esac 699 700 # no changes, so lets do nothing 701 return 1 251} 702} 252 703 253# install functions 704# install functions 254#============================================================== 705#============================================================== 255install_universal() { 706install_universal() { 256 #fix silly permissions in tarball 707 #fix silly permissions in tarball 257 cd ${WORKDIR} 708 cd "${WORKDIR}" 258 chown -R root:root * 709 chown -R root:0 * >& /dev/null 259 chmod -R a+r-w+X,u+w * 710 chmod -R a+r-w+X,u+w * 260 cd ${OLDPWD} 711 cd ${OLDPWD} 261} 712} 262 713 263install_headers() { 714install_headers() { 264 local ddir=$(kernel_header_destdir) 715 local ddir=$(kernel_header_destdir) 265 716 266 cd ${S} 717 # 2.6.18 introduces headers_install which means we dont need any 718 # of this crap anymore :D 719 if kernel_is ge 2 6 18 ; then 720 env_setup_xmakeopts 721 emake headers_install INSTALL_HDR_PATH="${D}"/${ddir}/.. ${xmakeopts} || die 722 723 # let other packages install some of these headers 724 rm -rf "${D}"/${ddir}/sound #alsa-headers 725 rm -rf "${D}"/${ddir}/scsi #glibc/uclibc/etc... 726 return 0 727 fi 728 729 # Do not use "linux/*" as that can cause problems with very long 730 # $S values where the cmdline to cp is too long 731 pushd "${S}" >/dev/null 267 dodir ${ddir}/linux 732 dodir ${ddir}/linux 268 cp -ax ${S}/include/linux/* ${D}/${ddir}/linux 733 cp -pPR "${S}"/include/linux "${D}"/${ddir}/ || die 269 rm -rf ${D}/${ddir}/linux/modules 734 rm -rf "${D}"/${ddir}/linux/modules 270 735 271 # Handle multilib headers 272 case $(tc-arch-kernel) in 273 sparc64) 274 dodir ${ddir}/asm-sparc 275 cp -ax ${S}/include/asm-sparc/* ${D}/${ddir}/asm-sparc 276 277 dodir ${ddir}/asm-sparc64 278 cp -ax ${S}/include/asm-sparc64/* ${D}/${ddir}/asm-sparc64 279 280 create_ml_includes ${ddir}/asm !__arch64__:${ddir}/asm-sparc __arch64__:${ddir}/asm-sparc64 281 ;; 282 x86_64) 283 dodir ${ddir}/asm-i386 284 cp -ax ${S}/include/asm-i386/* ${D}/${ddir}/asm-i386 285 286 dodir ${ddir}/asm-x86_64 287 cp -ax ${S}/include/asm-x86_64/* ${D}/${ddir}/asm-x86_64 288 289 create_ml_includes ${ddir}/asm __i386__:${ddir}/asm-i386 __x86_64__:${ddir}/asm-x86_64 290 ;; 291 *) 292 dodir ${ddir}/asm 736 dodir ${ddir}/asm 293 cp -ax ${S}/include/asm/* ${D}/${ddir}/asm 737 cp -pPR "${S}"/include/asm/* "${D}"/${ddir}/asm 294 ;; 295 esac 296 738 297 if kernel_is 2 6; then 739 if kernel_is 2 6 ; then 298 dodir ${ddir}/asm-generic 740 dodir ${ddir}/asm-generic 299 cp -ax ${S}/include/asm-generic/* ${D}/${ddir}/asm-generic 741 cp -pPR "${S}"/include/asm-generic/* "${D}"/${ddir}/asm-generic 300 fi 742 fi 301 743 302 # clean up 744 # clean up 303 find "${D}" -name '*.orig' -exec rm -f {} \; 745 find "${D}" -name '*.orig' -exec rm -f {} \; 304 746 305 cd ${OLDPWD} 747 popd >/dev/null 306} 748} 307 749 308install_sources() { 750install_sources() { 309 local doc docs file 751 local file 310 752 311 cd ${S} 753 cd "${S}" 312 dodir /usr/src 754 dodir /usr/src 313 echo ">>> Copying sources ..." 755 echo ">>> Copying sources ..." 314 756 315 file="$(find ${WORKDIR} -iname "docs" -type d)" 757 file="$(find ${WORKDIR} -iname "docs" -type d)" 316 if [[ -n ${file} ]]; then 758 if [[ -n ${file} ]]; then 317 for file in $(find ${file} -type f); do 759 for file in $(find ${file} -type f); do 318 echo "${file//*docs\/}" >> ${S}/patches.txt 760 echo "${file//*docs\/}" >> "${S}"/patches.txt 319 echo "===================================================" >> ${S}/patches.txt 761 echo "===================================================" >> "${S}"/patches.txt 320 cat ${file} >> ${S}/patches.txt 762 cat ${file} >> "${S}"/patches.txt 321 echo "===================================================" >> ${S}/patches.txt 763 echo "===================================================" >> "${S}"/patches.txt 322 echo "" >> ${S}/patches.txt 764 echo "" >> "${S}"/patches.txt 323 done 765 done 324 fi 766 fi 325 767 326 if [[ ! -f ${S}/patches.txt ]]; then 768 if [[ ! -f ${S}/patches.txt ]]; then 327 # patches.txt is empty so lets use our ChangeLog 769 # patches.txt is empty so lets use our ChangeLog 328 [[ -f ${FILESDIR}/../ChangeLog ]] && \ 770 [[ -f ${FILESDIR}/../ChangeLog ]] && \ 329 echo "Please check the ebuild ChangeLog for more details." \ 771 echo "Please check the ebuild ChangeLog for more details." \ 330 > ${S}/patches.txt 772 > "${S}"/patches.txt 331 fi 773 fi 332 774 333 for doc in ${UNIPATCH_DOCS}; do [[ -f ${doc} ]] && docs="${docs} ${doc}"; done 334 if [[ -f ${S}/patches.txt ]]; then docs="${docs} ${S}/patches.txt"; fi 335 use doc && ! use arm && ! use s390 && install_manpages 336 dodoc ${docs} 337 338 mv ${WORKDIR}/linux* ${D}/usr/src 775 mv ${WORKDIR}/linux* "${D}"/usr/src 339} 340 341install_manpages() { 342 sed -ie "s#/usr/local/man#${D}/usr/man#g" scripts/makeman 343 ebegin "Installing manpages" 344 env -u ARCH make installmandocs 345 eend $? 346 sed -ie "s#${D}/usr/man#/usr/local/man#g" scripts/makeman 347} 776} 348 777 349# pkg_preinst functions 778# pkg_preinst functions 350#============================================================== 779#============================================================== 351preinst_headers() { 780preinst_headers() { 356 785 357# pkg_postinst functions 786# pkg_postinst functions 358#============================================================== 787#============================================================== 359postinst_sources() { 788postinst_sources() { 360 local MAKELINK=0 789 local MAKELINK=0 361 790 362 # if we have USE=symlink, then force K_SYMLINK=1 791 # if we have USE=symlink, then force K_SYMLINK=1 363 use symlink && K_SYMLINK=1 792 use symlink && K_SYMLINK=1 364 793 794 # if we're using a deblobbed kernel, it's not supported 795 [[ $K_DEBLOB_AVAILABLE == 1 ]] && \ 796 use deblob && \ 797 K_SECURITY_UNSUPPORTED=deblob 798 365 # if we are to forcably symlink, delete it if it already exists first. 799 # if we are to forcably symlink, delete it if it already exists first. 366 if [[ -n ${K_SYMLINK} ]]; then 800 if [[ ${K_SYMLINK} > 0 ]]; then 367 [[ -h ${ROOT}usr/src/linux ]] && rm ${ROOT}usr/src/linux 801 [[ -h ${ROOT}usr/src/linux ]] && rm ${ROOT}usr/src/linux 368 MAKELINK=1 802 MAKELINK=1 369 fi 803 fi 370 804 371 # if the link doesnt exist, lets create it 805 # if the link doesnt exist, lets create it 372 [[ ! -h ${ROOT}usr/src/linux ]] && MAKELINK=1 806 [[ ! -h ${ROOT}usr/src/linux ]] && MAKELINK=1 373 807 374 if [[ ${MAKELINK} == 1 ]]; then 808 if [[ ${MAKELINK} == 1 ]]; then 375 cd ${ROOT}usr/src 809 cd "${ROOT}"usr/src 376 ln -sf linux-${KV_FULL} linux 810 ln -sf linux-${KV_FULL} linux 377 cd ${OLDPWD} 811 cd ${OLDPWD} 378 fi 812 fi 379 813 380 # Don't forget to make directory for sysfs 814 # Don't forget to make directory for sysfs 381 [[ ! -d ${ROOT}sys ]] && kernel_is 2 6 && mkdir /sys 815 [[ ! -d ${ROOT}sys ]] && kernel_is 2 6 && mkdir ${ROOT}sys 382 816 383 echo 817 echo 384 einfo "After installing a new kernel of any version, it is important" 818 elog "If you are upgrading from a previous kernel, you may be interested" 385 einfo "that you have the appropriate /etc/modules.autoload.d/kernel-X.Y" 819 elog "in the following document:" 386 einfo "created (X.Y is the first 2 parts of your new kernel version)" 820 elog " - General upgrade guide: http://www.gentoo.org/doc/en/kernel-upgrade.xml" 387 echo 388 einfo "For example, this kernel will require:" 389 einfo "/etc/modules.autoload.d/kernel-${KV_MAJOR}.${KV_MINOR}" 390 echo 821 echo 391 822 392 # if K_EXTRAEINFO is set then lets display it now 823 # if K_EXTRAEINFO is set then lets display it now 393 if [[ -n ${K_EXTRAEINFO} ]]; then 824 if [[ -n ${K_EXTRAEINFO} ]]; then 394 echo ${K_EXTRAEINFO} | fmt | 825 echo ${K_EXTRAEINFO} | fmt | 395 while read -s ELINE; do einfo "${ELINE}"; done 826 while read -s ELINE; do einfo "${ELINE}"; done 396 fi 827 fi 397 828 829 # if K_EXTRAELOG is set then lets display it now 830 if [[ -n ${K_EXTRAELOG} ]]; then 831 echo ${K_EXTRAELOG} | fmt | 832 while read -s ELINE; do elog "${ELINE}"; done 833 fi 834 398 # if K_EXTRAEWARN is set then lets display it now 835 # if K_EXTRAEWARN is set then lets display it now 399 if [[ -n ${K_EXTRAEWARN} ]]; then 836 if [[ -n ${K_EXTRAEWARN} ]]; then 400 echo ${K_EXTRAEWARN} | fmt | 837 echo ${K_EXTRAEWARN} | fmt | 401 while read -s ELINE; do ewarn "${ELINE}"; done 838 while read -s ELINE; do ewarn "${ELINE}"; done 402 fi 839 fi 403} 404 840 405postinst_headers() { 841 # optionally display security unsupported message 406 einfo "Kernel headers are usually only used when recompiling glibc, as such, following the installation" 842 # Start with why 407 einfo "of newer headers, it is advised that you re-merge glibc as follows:" 843 if [[ ${K_SECURITY_UNSUPPORTED} = deblob ]]; then 408 einfo "emerge glibc" 844 ewarn "Deblobbed kernels are UNSUPPORTED by Gentoo Security." 409 einfo "Failure to do so will cause glibc to not make use of newer features present in the updated kernel" 845 elif [[ -n ${K_SECURITY_UNSUPPORTED} ]]; then 410 einfo "headers." 846 ewarn "${PN} is UNSUPPORTED by Gentoo Security." 847 fi 848 # And now the general message. 849 if [[ -n ${K_SECURITY_UNSUPPORTED} ]]; then 850 ewarn "This means that it is likely to be vulnerable to recent security issues." 851 ewarn "For specific information on why this kernel is unsupported, please read:" 852 ewarn "http://www.gentoo.org/proj/en/security/kernel.xml" 853 fi 854 855 # warn sparc users that they need to do cross-compiling with >= 2.6.25(bug #214765) 856 KV_MAJOR=$(get_version_component_range 1 ${OKV}) 857 KV_MINOR=$(get_version_component_range 2 ${OKV}) 858 KV_PATCH=$(get_version_component_range 3 ${OKV}) 859 if [[ "$(tc-arch)" = "sparc" ]]; then 860 if [[ ${KV_MAJOR} -ge 3 || ${KV_MAJOR}.${KV_MINOR}.${KV_PATCH} > 2.6.24 ]] 861 then 862 echo 863 elog "NOTE: Since 2.6.25 the kernel Makefile has changed in a way that" 864 elog "you now need to do" 865 elog " make CROSS_COMPILE=sparc64-unknown-linux-gnu-" 866 elog "instead of just" 867 elog " make" 868 elog "to compile the kernel. For more information please browse to" 869 elog "https://bugs.gentoo.org/show_bug.cgi?id=214765" 870 echo 871 fi 872 fi 411} 873} 412 874 413# pkg_setup functions 875# pkg_setup functions 414#============================================================== 876#============================================================== 415setup_headers() { 877setup_headers() { 429 891 430# unipatch 892# unipatch 431#============================================================== 893#============================================================== 432unipatch() { 894unipatch() { 433 local i x y z extention PIPE_CMD UNIPATCH_DROP KPATCH_DIR PATCH_DEPTH ELINE 895 local i x y z extention PIPE_CMD UNIPATCH_DROP KPATCH_DIR PATCH_DEPTH ELINE 434 local STRICT_COUNT PATCH_LEVEL myLC_ALL 896 local STRICT_COUNT PATCH_LEVEL myLC_ALL myLANG 435 897 436 # set to a standard locale to ensure sorts are ordered properly. 898 # set to a standard locale to ensure sorts are ordered properly. 437 myLC_ALL="${LC_ALL}" 899 myLC_ALL="${LC_ALL}" 900 myLANG="${LANG}" 438 LC_ALL="C" 901 LC_ALL="C" 902 LANG="" 439 903 440 [ -z "${KPATCH_DIR}" ] && KPATCH_DIR="${WORKDIR}/patches/" 904 [ -z "${KPATCH_DIR}" ] && KPATCH_DIR="${WORKDIR}/patches/" 441 [ ! -d ${KPATCH_DIR} ] && mkdir -p ${KPATCH_DIR} 905 [ ! -d ${KPATCH_DIR} ] && mkdir -p ${KPATCH_DIR} 442 906 443 # We're gonna need it when doing patches with a predefined patchlevel 907 # We're gonna need it when doing patches with a predefined patchlevel 444 shopt -s extglob 908 eshopts_push -s extglob 445 909 446 # This function will unpack all passed tarballs, add any passed patches, and remove any passed patchnumbers 910 # This function will unpack all passed tarballs, add any passed patches, and remove any passed patchnumbers 447 # usage can be either via an env var or by params 911 # usage can be either via an env var or by params 448 # although due to the nature we pass this within this eclass 912 # although due to the nature we pass this within this eclass 449 # it shall be by param only. 913 # it shall be by param only. 450 # -z "${UNIPATCH_LIST}" ] && UNIPATCH_LIST="${@}" 914 # -z "${UNIPATCH_LIST}" ] && UNIPATCH_LIST="${@}" 451 UNIPATCH_LIST="${@}" 915 UNIPATCH_LIST="${@}" 452 916 453 #unpack any passed tarballs 917 #unpack any passed tarballs 454 for i in ${UNIPATCH_LIST}; do 918 for i in ${UNIPATCH_LIST}; do 455 if [ -n "$(echo ${i} | grep -e "\.tar" -e "\.tbz" -e "\.tgz")" ]; then 919 if echo ${i} | grep -qs -e "\.tar" -e "\.tbz" -e "\.tgz" ; then 456 extention=${i/*./} 457 extention=${extention/:*/} 458 case ${extention} in 459 tbz2) PIPE_CMD="tar -xvjf";; 460 bz2) PIPE_CMD="tar -xvjf";; 461 tgz) PIPE_CMD="tar -xvzf";; 462 gz) PIPE_CMD="tar -xvzf";; 463 *) eerror "Unrecognized tarball compression" 464 die "Unrecognized tarball compression";; 465 esac 466 467 if [ -n "${UNIPATCH_STRICTORDER}" ]; then 920 if [ -n "${UNIPATCH_STRICTORDER}" ]; then 921 unset z 468 STRICT_COUNT=$((${STRICT_COUNT} + 1)) 922 STRICT_COUNT=$((10#${STRICT_COUNT} + 1)) 469 for((y=0; y<$((6 - ${#STRICT_COUNT})); y++)); 923 for((y=0; y<$((6 - ${#STRICT_COUNT})); y++)); 470 do z="${z}0"; 924 do z="${z}0"; 471 done 925 done 472 STRICT_COUNT="${z}${STRICT_COUNT}" 926 PATCH_ORDER="${z}${STRICT_COUNT}" 473 927 474 mkdir -p ${KPATCH_DIR}/${STRICT_COUNT}/ 928 mkdir -p "${KPATCH_DIR}/${PATCH_ORDER}" 475 ${PIPE_CMD} ${i/:*/} -C ${KPATCH_DIR}/${STRICT_COUNT}/ 1>/dev/null 929 pushd "${KPATCH_DIR}/${PATCH_ORDER}" >/dev/null 930 unpack ${i##*/} 931 popd >/dev/null 476 else 932 else 477 ${PIPE_CMD} ${i/:*/} -C ${KPATCH_DIR} 1>/dev/null 933 pushd "${KPATCH_DIR}" >/dev/null 934 unpack ${i##*/} 935 popd >/dev/null 478 fi 936 fi 479 937 480 if [ $? == 0 ]; then 481 einfo "${i/*\//} unpacked" 482 [ -n "$(echo ${i} | grep ':')" ] && echo ">>> Strict patch levels not currently supported for tarballed patchsets" 938 [[ ${i} == *:* ]] && echo ">>> Strict patch levels not currently supported for tarballed patchsets" 483 else 484 eerror "Failed to unpack ${i/:*/}" 485 die "unable to unpack patch tarball" 486 fi 487 else 939 else 488 extention=${i/*./} 940 extention=${i/*./} 489 extention=${extention/:*/} 941 extention=${extention/:*/} 490 PIPE_CMD="" 942 PIPE_CMD="" 491 case ${extention} in 943 case ${extention} in 944 xz) PIPE_CMD="xz -dc";; 945 lzma) PIPE_CMD="lzma -dc";; 492 bz2) PIPE_CMD="bzip2 -dc";; 946 bz2) PIPE_CMD="bzip2 -dc";; 493 patch) PIPE_CMD="cat";; 947 patch) PIPE_CMD="cat";; 494 diff) PIPE_CMD="cat";; 948 diff) PIPE_CMD="cat";; 495 gz|Z|z) PIPE_CMD="gzip -dc";; 949 gz|Z|z) PIPE_CMD="gzip -dc";; 496 ZIP|zip) PIPE_CMD="unzip -p";; 950 ZIP|zip) PIPE_CMD="unzip -p";; 511 eerror "or does not exist." 965 eerror "or does not exist." 512 die Unable to locate ${i} 966 die Unable to locate ${i} 513 fi 967 fi 514 968 515 if [ -n "${UNIPATCH_STRICTORDER}" ]; then 969 if [ -n "${UNIPATCH_STRICTORDER}" ]; then 970 unset z 516 STRICT_COUNT=$((${STRICT_COUNT} + 1)) 971 STRICT_COUNT=$((10#${STRICT_COUNT} + 1)) 517 for((y=0; y<$((6 - ${#STRICT_COUNT})); y++)); 972 for((y=0; y<$((6 - ${#STRICT_COUNT})); y++)); 518 do z="${z}0"; 973 do z="${z}0"; 519 done 974 done 520 STRICT_COUNT="${z}${STRICT_COUNT}" 975 PATCH_ORDER="${z}${STRICT_COUNT}" 521 976 522 mkdir -p ${KPATCH_DIR}/${STRICT_COUNT}/ 977 mkdir -p ${KPATCH_DIR}/${PATCH_ORDER}/ 523 $(${PIPE_CMD} ${i} > ${KPATCH_DIR}/${STRICT_COUNT}/${x}.patch${PATCH_LEVEL}) 978 $(${PIPE_CMD} ${i} > ${KPATCH_DIR}/${PATCH_ORDER}/${x}.patch${PATCH_LEVEL}) || die "uncompressing patch failed" 524 else 979 else 525 $(${PIPE_CMD} ${i} > ${KPATCH_DIR}/${x}.patch${PATCH_LEVEL}) 980 $(${PIPE_CMD} ${i} > ${KPATCH_DIR}/${x}.patch${PATCH_LEVEL}) || die "uncompressing patch failed" 526 fi 981 fi 527 fi 982 fi 528 fi 983 fi 529 done 984 done 530 985 532 x=${KPATCH_DIR} 987 x=${KPATCH_DIR} 533 KPATCH_DIR="" 988 KPATCH_DIR="" 534 for i in $(find ${x} -type d | sort -n); do 989 for i in $(find ${x} -type d | sort -n); do 535 KPATCH_DIR="${KPATCH_DIR} ${i}" 990 KPATCH_DIR="${KPATCH_DIR} ${i}" 536 done 991 done 992 993 # do not apply fbcondecor patch to sparc/sparc64 as it breaks boot 994 # bug #272676 995 if [[ "$(tc-arch)" = "sparc" || "$(tc-arch)" = "sparc64" ]]; then 996 if [[ ${KV_MAJOR} -ge 3 || ${KV_MAJOR}.${KV_MINOR}.${KV_PATCH} > 2.6.28 ]]; then 997 UNIPATCH_DROP="${UNIPATCH_DROP} *_fbcondecor-0.9.6.patch" 998 echo 999 ewarn "fbcondecor currently prevents sparc/sparc64 from booting" 1000 ewarn "for kernel versions >= 2.6.29. Removing fbcondecor patch." 1001 ewarn "See https://bugs.gentoo.org/show_bug.cgi?id=272676 for details" 1002 echo 1003 fi 1004 fi 537 1005 538 #so now lets get rid of the patchno's we want to exclude 1006 #so now lets get rid of the patchno's we want to exclude 539 UNIPATCH_DROP="${UNIPATCH_EXCLUDE} ${UNIPATCH_DROP}" 1007 UNIPATCH_DROP="${UNIPATCH_EXCLUDE} ${UNIPATCH_DROP}" 540 for i in ${UNIPATCH_DROP}; do 1008 for i in ${UNIPATCH_DROP}; do 541 ebegin "Excluding Patch #${i}" 1009 ebegin "Excluding Patch #${i}" 548 for i in $(find ${x} -maxdepth 1 -iname "*.patch*" -or -iname "*.diff*" | sort -n); do 1016 for i in $(find ${x} -maxdepth 1 -iname "*.patch*" -or -iname "*.diff*" | sort -n); do 549 STDERR_T="${T}/${i/*\//}" 1017 STDERR_T="${T}/${i/*\//}" 550 STDERR_T="${STDERR_T/.patch*/.err}" 1018 STDERR_T="${STDERR_T/.patch*/.err}" 551 1019 552 [ -z ${i/*.patch*/} ] && PATCH_DEPTH=${i/*.patch/} 1020 [ -z ${i/*.patch*/} ] && PATCH_DEPTH=${i/*.patch/} 553 [ -z ${i/*.diff*/} ] && PATCH_DEPTH=${i/*.diff/} 1021 #[ -z ${i/*.diff*/} ] && PATCH_DEPTH=${i/*.diff/} 554 1022 555 if [ -z "${PATCH_DEPTH}" ]; then PATCH_DEPTH=0; fi 1023 if [ -z "${PATCH_DEPTH}" ]; then PATCH_DEPTH=0; fi 556 1024 557 ebegin "Applying ${i/*\//} (-p${PATCH_DEPTH}+)" 1025 ebegin "Applying ${i/*\//} (-p${PATCH_DEPTH}+)" 558 while [ ${PATCH_DEPTH} -lt 5 ]; do 1026 while [ ${PATCH_DEPTH} -lt 5 ]; do 559 echo "Attempting Dry-run:" >> ${STDERR_T} 1027 echo "Attempting Dry-run:" >> ${STDERR_T} 560 echo "cmd: patch -p${PATCH_DEPTH} --dry-run -f < ${i}" >> ${STDERR_T} 1028 echo "cmd: patch -p${PATCH_DEPTH} --no-backup-if-mismatch --dry-run -f < ${i}" >> ${STDERR_T} 561 echo "=======================================================" >> ${STDERR_T} 1029 echo "=======================================================" >> ${STDERR_T} 562 if [ $(patch -p${PATCH_DEPTH} --dry-run -f < ${i} >> ${STDERR_T}) $? -eq 0 ]; then 1030 if [ $(patch -p${PATCH_DEPTH} --no-backup-if-mismatch --dry-run -f < ${i} >> ${STDERR_T}) $? -eq 0 ]; then 563 echo "Attempting patch:" > ${STDERR_T} 1031 echo "Attempting patch:" > ${STDERR_T} 564 echo "cmd: patch -p${PATCH_DEPTH} -f < ${i}" >> ${STDERR_T} 1032 echo "cmd: patch -p${PATCH_DEPTH} --no-backup-if-mismatch -f < ${i}" >> ${STDERR_T} 565 echo "=======================================================" >> ${STDERR_T} 1033 echo "=======================================================" >> ${STDERR_T} 566 if [ $(patch -p${PATCH_DEPTH} -f < ${i} >> ${STDERR_T}) "$?" -eq 0 ]; then 1034 if [ $(patch -p${PATCH_DEPTH} --no-backup-if-mismatch -f < ${i} >> ${STDERR_T}) "$?" -eq 0 ]; then 567 eend 0 1035 eend 0 568 rm ${STDERR_T} 1036 rm ${STDERR_T} 569 break 1037 break 570 else 1038 else 571 eend 1 1039 eend 1 572 eerror "Failed to apply patch ${i/*\//}" 1040 eerror "Failed to apply patch ${i/*\//}" 573 eerror "Please attach ${STDERR_T} to any bug you may post." 1041 eerror "Please attach ${STDERR_T} to any bug you may post." 1042 eshopts_pop 574 die "Failed to apply ${i/*\//}" 1043 die "Failed to apply ${i/*\//}" 575 fi 1044 fi 576 else 1045 else 577 PATCH_DEPTH=$((${PATCH_DEPTH} + 1)) 1046 PATCH_DEPTH=$((${PATCH_DEPTH} + 1)) 578 fi 1047 fi 579 done 1048 done 580 if [ ${PATCH_DEPTH} -eq 5 ]; then 1049 if [ ${PATCH_DEPTH} -eq 5 ]; then 581 eend 1 1050 eend 1 582 eerror "Please attach ${STDERR_T} to any bug you may post." 1051 eerror "Please attach ${STDERR_T} to any bug you may post." 1052 eshopts_pop 583 die "Unable to dry-run patch." 1053 die "Unable to dry-run patch." 584 fi 1054 fi 585 done 1055 done 586 done 1056 done 587 1057 1058 # This is a quick, and kind of nasty hack to deal with UNIPATCH_DOCS which 1059 # sit in KPATCH_DIR's. This is handled properly in the unipatch rewrite, 1060 # which is why I'm not taking too much time over this. 1061 local tmp 1062 for i in ${UNIPATCH_DOCS}; do 1063 tmp="${tmp} ${i//*\/}" 1064 cp -f ${i} "${T}"/ 1065 done 1066 UNIPATCH_DOCS="${tmp}" 1067 588 # clean up KPATCH_DIR's - fixes bug #53610 1068 # clean up KPATCH_DIR's - fixes bug #53610 589 for x in ${KPATCH_DIR}; do rm -Rf ${x}; done 1069 for x in ${KPATCH_DIR}; do rm -Rf ${x}; done 590 1070 591 LC_ALL="${myLC_ALL}" 1071 LC_ALL="${myLC_ALL}" 1072 LANG="${myLANG}" 1073 eshopts_pop 592} 1074} 593 1075 594# getfilevar accepts 2 vars as follows: 1076# getfilevar accepts 2 vars as follows: 595# getfilevar <VARIABLE> <CONFIGFILE> 1077# getfilevar <VARIABLE> <CONFIGFILE> 596# pulled from linux-info 1078# pulled from linux-info 597 1079 598getfilevar() { 1080getfilevar() { 599 local workingdir basefname basedname xarch=$(tc-arch-kernel) 1081 local workingdir basefname basedname xarch=$(tc-arch-kernel) 600 1082 601 if [[ -z ${1} ]] && [[ ! -f ${2} ]]; then 1083 if [[ -z ${1} ]] && [[ ! -f ${2} ]]; then 602 ebeep 603 echo -e "\n" 1084 echo -e "\n" 604 eerror "getfilevar requires 2 variables, with the second a valid file." 1085 eerror "getfilevar requires 2 variables, with the second a valid file." 605 eerror " getfilevar <VARIABLE> <CONFIGFILE>" 1086 eerror " getfilevar <VARIABLE> <CONFIGFILE>" 606 else 1087 else 607 workingdir=${PWD} 1088 workingdir=${PWD} 608 basefname=$(basename ${2}) 1089 basefname=$(basename ${2}) 609 basedname=$(dirname ${2}) 1090 basedname=$(dirname ${2}) 610 unset ARCH 1091 unset ARCH 611 1092 612 cd ${basedname} 1093 cd ${basedname} 613 echo -e "include ${basefname}\ne:\n\t@echo \$(${1})" | \ 1094 echo -e "include ${basefname}\ne:\n\t@echo \$(${1})" | \ 614 make ${BUILD_FIXES} -f - e 2>/dev/null 1095 make ${BUILD_FIXES} -s -f - e 2>/dev/null 615 cd ${workingdir} 1096 cd ${workingdir} 616 1097 617 ARCH=${xarch} 1098 ARCH=${xarch} 618 fi 1099 fi 619} 620 621detect_version() { 622 # this function will detect and set 623 # - OKV: Original Kernel Version (2.6.0/2.6.0-test11) 624 # - KV: Kernel Version (2.6.0-gentoo/2.6.0-test11-gentoo-r1) 625 # - EXTRAVERSION: The additional version appended to OKV (-gentoo/-gentoo-r1) 626 627 if [[ -n ${KV_FULL} ]]; then 628 # we will set this for backwards compatibility. 629 KV=${KV_FULL} 630 631 # we know KV_FULL so lets stop here. but not without resetting S 632 S=${WORKDIR}/linux-${KV_FULL} 633 return 634 fi 635 636 # CKV is used as a comparison kernel version, which is used when 637 # PV doesnt reflect the genuine kernel version. 638 # this gets set to the portage style versioning. ie: 639 # CKV=2.6.11_rc4 640 CKV=${CKV:-${PV}} 641 OKV=${OKV:-${CKV}} 642 OKV=${OKV/_beta/-test} 643 OKV=${OKV/_rc/-rc} 644 OKV=${OKV/-r*} 645 OKV=${OKV/_p*} 646 647 KV_MAJOR=$(get_version_component_range 1 ${OKV}) 648 KV_MINOR=$(get_version_component_range 2 ${OKV}) 649 KV_PATCH=$(get_version_component_range 3- ${OKV}) 650 KV_PATCH=${KV_PATCH/[-_]*} 651 652 KERNEL_URI="mirror://kernel/linux/kernel/v${KV_MAJOR}.${KV_MINOR}/linux-${OKV}.tar.bz2" 653 654 RELEASE=${CKV/${OKV}} 655 RELEASE=${RELEASE/_beta} 656 RELEASE=${RELEASE/_rc/-rc} 657 RELEASE=${RELEASE/_pre/-pre} 658 kernel_is_2_6 && RELEASE=${RELEASE/-pre/-bk} 659 RELEASETYPE=${RELEASE//[0-9]} 660 661 # Now we know that RELEASE is the -rc/-bk 662 # and RELEASETYPE is the same but with its numerics stripped 663 # we can work on better sorting EXTRAVERSION. 664 # first of all, we add the release 665 EXTRAVERSION="${RELEASE}" 666 667 if [[ -n ${K_PREPATCHED} ]]; then 668 EXTRAVERSION="${EXTRAVERSION}-${PN/-*}${PR/r}" 669 elif [[ "${ETYPE}" = "sources" ]]; then 670 # For some sources we want to use the PV in the extra version 671 # This is because upstream releases with a completely different 672 # versioning scheme. 673 case ${PN/-*} in 674 wolk) K_USEPV=1;; 675 vserver) K_USEPV=1;; 676 esac 677 678 [[ -z ${K_NOUSENAME} ]] && EXTRAVERSION="${EXTRAVERSION}-${PN/-*}" 679 [[ -n ${K_USEPV} ]] && EXTRAVERSION="${EXTRAVERSION}-${PV//_/-}" 680 [[ -n ${PR//r0} ]] && EXTRAVERSION="${EXTRAVERSION}-${PR}" 681 fi 682 683 KV_FULL=${OKV}${EXTRAVERSION} 684 685 # -rc-bk pulls can be achieved by specifying CKV 686 # for example: 687 # CKV="2.6.11_rc3_pre2" 688 # will pull: 689 # linux-2.6.10.tar.bz2 & patch-2.6.11-rc3.bz2 & patch-2.6.11-rc3-bk2.bz2 690 691 if [[ ${RELEASETYPE} == -rc ]] || [[ ${RELEASETYPE} == -pre ]]; then 692 OKV="${KV_MAJOR}.${KV_MINOR}.$((${KV_PATCH} - 1))" 693 KERNEL_URI="mirror://kernel/linux/kernel/v${KV_MAJOR}.${KV_MINOR}/testing/patch-${CKV//_/-}.bz2 694 mirror://kernel/linux/kernel/v${KV_MAJOR}.${KV_MINOR}/linux-${OKV}.tar.bz2" 695 UNIPATCH_LIST_DEFAULT="${DISTDIR}/patch-${CKV//_/-}.bz2" 696 fi 697 698 if [[ ${RELEASETYPE} == -bk ]]; then 699 KERNEL_URI="mirror://kernel/linux/kernel/v${KV_MAJOR}.${KV_MINOR}/snapshots/patch-${OKV}${RELEASE}.bz2 700 mirror://kernel/linux/kernel/v${KV_MAJOR}.${KV_MINOR}/linux-${OKV}.tar.bz2" 701 UNIPATCH_LIST_DEFAULT="${DISTDIR}/patch-${OKV}${RELEASE}.bz2" 702 fi 703 704 if [[ ${RELEASETYPE} == -rc-bk ]]; then 705 OKV="${KV_MAJOR}.${KV_MINOR}.$((${KV_PATCH} - 1))" 706 KERNEL_URI="mirror://kernel/linux/kernel/v${KV_MAJOR}.${KV_MINOR}/snapshots/patch-${KV_MAJOR}.${KV_MINOR}.${KV_PATCH}${RELEASE}.bz2 707 mirror://kernel/linux/kernel/v${KV_MAJOR}.${KV_MINOR}/testing/patch-${KV_MAJOR}.${KV_MINOR}.${KV_PATCH}${RELEASE/-bk*}.bz2 708 mirror://kernel/linux/kernel/v${KV_MAJOR}.${KV_MINOR}/linux-${OKV}.tar.bz2" 709 UNIPATCH_LIST_DEFAULT="${DISTDIR}/patch-${KV_MAJOR}.${KV_MINOR}.${KV_PATCH}${RELEASE/-bk*}.bz2 ${DISTDIR}/patch-${KV_MAJOR}.${KV_MINOR}.${KV_PATCH}${RELEASE}.bz2" 710 fi 711 712 # we will set this for backwards compatibility. 713 S=${WORKDIR}/linux-${KV_FULL} 714 KV=${KV_FULL} 715} 1100} 716 1101 717detect_arch() { 1102detect_arch() { 718 # This function sets ARCH_URI and ARCH_PATCH 1103 # This function sets ARCH_URI and ARCH_PATCH 719 # with the neccessary info for the arch sepecific compatibility 1104 # with the neccessary info for the arch sepecific compatibility 725 # ARCH_URI is the URI for all the ${ARCH}_URI patches 1110 # ARCH_URI is the URI for all the ${ARCH}_URI patches 726 # ARCH_PATCH is ARCH_URI broken into files for UNIPATCH 1111 # ARCH_PATCH is ARCH_URI broken into files for UNIPATCH 727 1112 728 ARCH_URI="" 1113 ARCH_URI="" 729 ARCH_PATCH="" 1114 ARCH_PATCH="" 730 ALL_ARCH="X86 PPC PPC64 SPARC MIPS ALPHA ARM HPPA AMD64 IA64 X86OBSD S390 SH" 1115 ALL_ARCH="ALPHA AMD64 ARM HPPA IA64 M68K MIPS PPC PPC64 S390 SH SPARC X86" 731 1116 732 for LOOP_ARCH in ${ALL_ARCH}; do 1117 for LOOP_ARCH in ${ALL_ARCH}; do 733 COMPAT_URI="${LOOP_ARCH}_URI" 1118 COMPAT_URI="${LOOP_ARCH}_URI" 734 COMPAT_URI="${!COMPAT_URI}" 1119 COMPAT_URI="${!COMPAT_URI}" 735 1120 805 return 0 1190 return 0 806} 1191} 807 1192 808headers___fix() { 1193headers___fix() { 809 # Voodoo to partially fix broken upstream headers. 1194 # Voodoo to partially fix broken upstream headers. 810 # Issues with this function should go to plasmaroo. 1195 # note: do not put inline/asm/volatile together (breaks "inline asm volatile") 811 sed -i \ 1196 sed -i \ 812 -e "s/\([ "$'\t'"]\)u8\([ "$'\t'"]\)/\1__u8\2/g;" \ 1197 -e '/^\#define.*_TYPES_H/{:loop n; bloop}' \ 813 -e "s/\([ "$'\t'"]\)u16\([ "$'\t'"]\)/\1__u16\2/g;" \ 814 -e "s/\([ "$'\t'"]\)u32\([ "$'\t'"]\)/\1__u32\2/g;" \ 815 -e "s/\([ "$'\t'"]\)u64\([ "$'\t'"]\)/\1__u64\2/g;" \ 816 -e "s/\([ "$'\t'"]\)s64\([ "$'\t'"]\)/\1__s64\2/g;" \ 817 -e 's/ \(u\|s\)\(8\|16\|32\|64\)$/ __\1\2/g' \ 1198 -e 's:\<\([us]\(8\|16\|32\|64\)\)\>:__\1:g' \ 818 -e 's/\([(, ]\)\(u\|s\)64\([, )]\)/\1__\264\3/g' \ 1199 -e "s/\([[:space:]]\)inline\([[:space:](]\)/\1__inline__\2/g" \ 1200 -e "s/\([[:space:]]\)asm\([[:space:](]\)/\1__asm__\2/g" \ 1201 -e "s/\([[:space:]]\)volatile\([[:space:](]\)/\1__volatile__\2/g" \ 819 "$@" 1202 "$@" 820} 1203} 821 1204 822# common functions 1205# common functions 823#============================================================== 1206#============================================================== 824kernel-2_src_unpack() { 1207kernel-2_src_unpack() { 825 universal_unpack 1208 universal_unpack 1209 debug-print "Doing unipatch" 826 1210 827 [[ -n ${UNIPATCH_LIST} ]] || [[ -n ${UNIPATCH_LIST_DEFAULT} ]] && \ 1211 [[ -n ${UNIPATCH_LIST} || -n ${UNIPATCH_LIST_DEFAULT} || -n ${UNIPATCH_LIST_GENPATCHES} ]] && \ 828 unipatch "${UNIPATCH_LIST_DEFAULT} ${UNIPATCH_LIST}" 1212 unipatch "${UNIPATCH_LIST_DEFAULT} ${UNIPATCH_LIST_GENPATCHES} ${UNIPATCH_LIST}" 1213 1214 debug-print "Doing premake" 1215 1216 # allow ebuilds to massage the source tree after patching but before 1217 # we run misc `make` functions below 1218 [[ $(type -t kernel-2_hook_premake) == "function" ]] && kernel-2_hook_premake 1219 1220 debug-print "Doing epatch_user" 1221 epatch_user 1222 1223 debug-print "Doing unpack_set_extraversion" 829 1224 830 [[ -z ${K_NOSETEXTRAVERSION} ]] && unpack_set_extraversion 1225 [[ -z ${K_NOSETEXTRAVERSION} ]] && unpack_set_extraversion 831 unpack_fix_docbook 832 unpack_fix_install_path 1226 unpack_fix_install_path 833 1227 1228 # Setup xmakeopts and cd into sourcetree. 1229 env_setup_xmakeopts 1230 cd "${S}" 1231 1232 # We dont need a version.h for anything other than headers 1233 # at least, I should hope we dont. If this causes problems 1234 # take out the if/fi block and inform me please. 1235 # unpack_2_6 should now be 2.6.17 safe anyways 1236 if [[ ${ETYPE} == headers ]]; then 834 kernel_is 2 4 && unpack_2_4 1237 kernel_is 2 4 && unpack_2_4 1238 kernel_is 2 6 && unpack_2_6 1239 fi 1240 1241 if [[ $K_DEBLOB_AVAILABLE == 1 ]] && use deblob ; then 1242 cp "${DISTDIR}/${DEBLOB_A}" "${T}" || die "cp ${DEBLOB_A} failed" 1243 cp "${DISTDIR}/${DEBLOB_CHECK_A}" "${T}/deblob-check" || die "cp ${DEBLOB_CHECK_A} failed" 1244 chmod +x "${T}/${DEBLOB_A}" "${T}/deblob-check" || die "chmod deblob scripts failed" 1245 fi 835} 1246} 836 1247 837kernel-2_src_compile() { 1248kernel-2_src_compile() { 838 cd ${S} 1249 cd "${S}" 839 [[ ${ETYPE} == headers ]] && compile_headers 1250 [[ ${ETYPE} == headers ]] && compile_headers 840 [[ ${ETYPE} == sources ]] && \ 1251 841 use doc && ! use arm && ! use s390 && compile_manpages 1252 if [[ $K_DEBLOB_AVAILABLE == 1 ]] && use deblob ; then 1253 echo ">>> Running deblob script ..." 1254 sh "${T}/${DEBLOB_A}" --force || \ 1255 die "Deblob script failed to run!!!" 1256 fi 842} 1257} 1258 1259# if you leave it to the default src_test, it will run make to 1260# find whether test/check targets are present; since "make test" 1261# actually produces a few support files, they are installed even 1262# though the package is binchecks-restricted. 1263# 1264# Avoid this altogether by making the function moot. 1265kernel-2_src_test() { :; } 843 1266 844kernel-2_pkg_preinst() { 1267kernel-2_pkg_preinst() { 845 [[ ${ETYPE} == headers ]] && preinst_headers 1268 [[ ${ETYPE} == headers ]] && preinst_headers 846} 1269} 847 1270 850 [[ ${ETYPE} == headers ]] && install_headers 1273 [[ ${ETYPE} == headers ]] && install_headers 851 [[ ${ETYPE} == sources ]] && install_sources 1274 [[ ${ETYPE} == sources ]] && install_sources 852} 1275} 853 1276 854kernel-2_pkg_postinst() { 1277kernel-2_pkg_postinst() { 855 [[ ${ETYPE} == headers ]] && postinst_headers 856 [[ ${ETYPE} == sources ]] && postinst_sources 1278 [[ ${ETYPE} == sources ]] && postinst_sources 857} 1279} 858 1280 859kernel-2_pkg_setup() { 1281kernel-2_pkg_setup() { 1282 if kernel_is 2 4; then 1283 if [ "$( gcc-major-version )" -eq "4" ] ; then 1284 echo 1285 ewarn "Be warned !! >=sys-devel/gcc-4.0.0 isn't supported with linux-2.4!" 1286 ewarn "Either switch to another gcc-version (via gcc-config) or use a" 1287 ewarn "newer kernel that supports gcc-4." 1288 echo 1289 ewarn "Also be aware that bugreports about gcc-4 not working" 1290 ewarn "with linux-2.4 based ebuilds will be closed as INVALID!" 1291 echo 1292 epause 10 1293 fi 1294 fi 1295 1296 ABI="${KERNEL_ABI}" 860 [[ ${ETYPE} == headers ]] && setup_headers 1297 [[ ${ETYPE} == headers ]] && setup_headers 861 [[ ${ETYPE} == sources ]] && echo ">>> Preparing to unpack ..." 1298 [[ ${ETYPE} == sources ]] && echo ">>> Preparing to unpack ..." 862} 1299} 1300 1301kernel-2_pkg_postrm() { 1302 echo 1303 ewarn "Note: Even though you have successfully unmerged " 1304 ewarn "your kernel package, directories in kernel source location: " 1305 ewarn "${ROOT}usr/src/linux-${KV_FULL}" 1306 ewarn "with modified files will remain behind. By design, package managers" 1307 ewarn "will not remove these modified files and the directories they reside in." 1308 echo 1309} 1310 Legend: Removed from v.1.108   changed lines   Added in v.1.258   ViewVC Help Powered by ViewVC 1.1.20  
__label__pos
0.590013
Take the 2-minute tour × Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It's 100% free, no registration required. How do I show that: $$\frac{1}{\sin^{2}\frac{\pi}{14}} + \frac{1}{\sin^{2}\frac{3\pi}{14}} + \frac{1}{\sin^{2}\frac{5\pi}{14}} = 24$$ This is actually problem B $4371$ given at this link. Looks like a very interesting problem. My attempts: Well, I have been thinking about this for the whole day, and I have got some insights. I don't believe my insights will lead me to a $\text{complete}$ solution. • First, I wrote $\sin\frac{5\pi}{14}$ as $\sin\frac{9 \pi}{14}$ so that if I put $A = \frac{\pi}{14}$ so that the given equation becomes, $$\frac{1}{\sin^{2}{A}} + \frac{1}{\sin^{2}{3A}} + \frac{1}{\sin^{2}{9A}} =24$$ Then I tried working with this by taking $\text{lcm}$ and multiplying and doing something, which appeared futile. • Next, I actually didn't work it out, but I think we have to look for a equation which has roots as $\sin$ and then use $\text{sum of roots}$ formulas to get $24$. I think I haven't explained this clearly. share|improve this question 1   In your first bullet, you probably mean $\sin \frac{5\pi}{14} = \sin \frac{9\pi}{14}$. I would be more interested in the arithmetic progression of 1,3,5 than the geometric one because of the angle-sum identities. –  Ross Millikan Jun 13 '11 at 19:18 2   Most people would write complete, not $\text{complete}$. :) –  muntoo Jun 14 '11 at 1:19 5 Answers 5 up vote 13 down vote accepted The roots idea should work, but first convert to $\cos$ using the formula $1 - 2\sin^2 x = \cos 2x$. You will need to get a polynomial of which $\cos (2k+1)\pi/7$ is a root (polynomial corresponding to $\cos 7\theta = -1$) and you are interested in finding out $\sum \frac{1}{1-r}$ over the roots $r$. By using the fact that $\cos 5\pi/7 = \cos 9\pi/7$ etc, you get your sum. To complete it, We have that the Chebyshev Polynomial $T_7(\cos x) = \cos 7x$ Thus the polynomial we seek is $\displaystyle Q(x) = T_7(x)+1 = 64x^7 - 112 x^5 + 56x^3 -7x +1$ Its roots are $\cos (2k+1) \pi /7$, $0 \le k \le 6$. For any polynomial $P(x)$ with roots $r_1, r_2, \dots, r_n$ we have by differentiating $\log P(x)$ that $$ \sum_{j=1}^{n} \frac{1}{x - r_j} = \frac{P'(x)}{P(x)}$$ Thus the value we seek is $\displaystyle \frac{Q'(1)}{Q(1)} - \frac{1}{2}$ (one of the roots is $\cos \pi = -1$) and this can easily be calculated to be $24$. share|improve this answer      I believe the roots of $T_7$ are $\cos(\frac{(2k+1)\pi}{14})$ –  Thomas Andrews Jun 14 '11 at 3:31      Without knowing chebyshev polynomials, can't this be solved? –  user9413 Jun 14 '11 at 4:56      @Thomas: That might be true, but we are looking at $Q(x) = T_7(x) + 1$. –  Aryabhata Jun 14 '11 at 5:18 3   @Chandru: Yes you can. You can actually derive the polynomial yourself using $(\cos x + i \sin x)^7 = \cos 7x + i \sin 7x$. –  Aryabhata Jun 14 '11 at 5:18      @Aryabhata: one help. I need to find value of $\cos\frac{2\pi}{13} + \cos\frac{6\pi}{13} + \cos\frac{8\pi}{13}$. In this link: isibang.ac.in/~sury/luckyoct10.pdf Prof.Sury finds using Gauss sums magar another method se mey verify karna chahta hoon. Aap suggest karo. –  user9413 Feb 24 '12 at 19:48 Use $\sin(x) = \cos(\frac{\pi}2 - x)$, we can rewrite this as: $$\frac{1}{\cos^2 \frac{3\pi}{7}} + \frac{1}{\cos^2 \frac{2\pi}{7}} + \frac{1}{\cos^2 \frac{\pi}{7}}$$ Let $a_k = \frac{1}{\cos \frac{k\pi}7}$. Let $f(x) = (x-a_1)(x-a_2)(x-a_3)(x-a_4)(x-a_5)(x-a_6)$. Now, using that $a_k = - a_{7-k}$, this can be written as: $$f(x) = (x^2-a_1^2)(x^2-a_2^2)(x^2-a_3^2)$$ Now, our problem is to find the sum $a_1^2 + a_2^2 + a_3^2$, which is just the negative of the coefficient of $x^4$ in the polynomial $f(x)$. Let $U_6(x)$ be the Chebyshev polynomial of the second kind - that is: $$U_6(\cos \theta) = \frac{\sin 7\theta }{\sin \theta}$$ It is a polynomial of degree $6$ with roots equal to $\cos(\frac{k\pi}7)$, for $k=1,...,6$. So the polynomials $f(x)$ and $x^6U_6(1/x)$ have the same roots, so: $$f(x) = C x^6 U_6(\frac{1}x)$$ for some constant $C$. But $U_6(x) = 64x^6-80x^4+24x^2-1$, so $x^6 U_6(\frac{1}x) = -x^6 + 24 x^4 - 80x^2 + 64$. Since the coefficient of $x^6$ is $-1$, and it is $1$ in $f(x)$, $C=-1.$ So: $$f(x) = x^6 - 24x^4 +80x^2 - 64$$ In particular, the sum you are looking for is $24$. In general, if $n$ is odd, then the sum: $$\sum_{k=1}^{\frac{n-1}2} \frac{1}{\cos^2 \frac{k\pi}{n}}$$ is the absolute value of the coefficient of $x^2$ in the polynomial $U_{n-1}(x)$, which turns out to have closed form $\frac{n^2-1}2$. share|improve this answer      Thanks a lot. I could have never thought of Chebyshev's polynomials even after @aryabhata had given a hint. –  user9413 Jun 14 '11 at 4:55 Another method would involve use of complex numbers. ** added ** OK, elaboration. Maple used for writing... Let $w = \exp(i \pi/14)$ so that $w^7 = i$. In (1) I factored $w^7-i$ and in (2) obtained the relation satisfied by $w$. (3) is what we want to compute. (4) is the relations of the trig functions to $w$. In (5) we wrote the thing to compute in terms of $w$. In (6) we took the denominator, and reduced it using the relation satisfied by $w$. In (7) the same thing for the numerator. So (8) is our answer, which is simplified in (9). share|improve this answer 7   Could you elaborate? –  ttt Jun 14 '11 at 5:15 We can derive $\cos7x=64c^7-112c^5+56c^3-7c$ where $c=\cos x$(Proof Below) If $\cos7x=0, 7x=\frac{(2r+1)\pi}2,x=\frac{(2r+1)\pi}{14}$ where $r=0,1,2,3,4,5,6$ So, the roots of $64c^7-112c^5+56c^3-7c=0$ are $\cos\frac{(2r+1)\pi}{14}$ where $r=0,1,2,3,4,5,6$ So, the roots of $64c^6-112c^4+56c^2-7=0$ are $\cos\frac{(2r+1)\pi}{14}$ where $r=0,1,2,4,5,6$ as $\cos x=c=0$ corresponds to $r=3$ So, the roots of $64d^3-112d^2+56d-7=0$ are $\cos^2\frac{(2r+1)\pi}{14}$ where $r=0,1,2$ or $r=4,5,6$ as $\cos \frac{(7-k)\pi}7=\cos(\pi-\frac{k\pi}7)=-\cos\frac{k\pi}7$ If $$y=\frac1{\sin^2\frac{(2r+1)\pi}{14}}, y=\frac1{1-d}\implies d=\frac{y-1}y$$ So, the equation whose roots are $\frac1{\sin^2\frac{(2r+1)\pi}{14}}$ where $r=0,1,2$ is $$64\left(\frac{y-1}y\right)^3-112\left(\frac{y-1}y\right)^2+56\left(\frac{y-1}y\right)-7=0$$ On simplification, $y^3(64-112+56-7)+y^2\{64(-3)-112(-2)+56(-1)\}+y()+()=0$ So using Vieta's Formulas, $$\sum_{r=0}^2\frac1{\sin^2\frac{(2r+1)\pi}{14}}=\frac{24}1$$ [ Proof: (1) Using $\cos C+\cos D=2\cos\left(\frac{C-D}2\right)\cos\left(\frac{C+D}2\right)$ $$\cos7x+\cos x=2\cos3x\cos4x=2\cos3x(2\cos^22x-1)\text{ using }\cos2y=2\cos^2y-1$$ $$\text{ So, }\cos7x=-c+2(4c^3-3c)\{2(2c^2-1)^2-1\} \text{ using } \cos3y=4\cos^3y-3\cos y \text{ where } c=\cos x$$ $$\cos 7x=-c+(8c^3-6c)(8c^4-8c^2+1)=64c^7-112c^5+56c^3-7c$$ (2) Alternatively using de Moivre's formula, $$\cos 7x+i\sin7x=(\cos x+i\sin x)^7$$ Expanding and equating the real parts $\cos7x=c^7-\binom72c^5s^2+\binom72c^3s^4-\binom76cs^6$ where $c=\cos x,s=\sin x$ So, $\cos7x=c^7-\binom72c^5(1-c^2)+\binom72c^3(1-c^2)^2-\binom76c(1-c^2)^3$ ] share|improve this answer We can prove (below), the roots of $$z^3-z^2-2z+1=0 \ \ \ \ (1)$$ are $2\cos\frac{(2r+1)\pi}7$ where $r=0,1,2$ So,if we set $\displaystyle t=\frac1{\sin^2{\frac{(2r+1)\pi}{14}}}$ where $r=0,1,2$ $\implies 2\cos\frac{(2r+1)\pi}7=2\left(1-2\sin^2{\frac{(2r+1)\pi}{14}}\right)=2\left(1-\frac2t\right)=\frac{2(t-2)}t$ which will satisfy the equation $(1)$ $$\implies \left(\frac{2(t-2)}t\right)^3-\left(\frac{2(t-2)}t\right)^2-2\left(\frac{2(t-2)}t\right)+1=0$$ On simplification we have $$8(t-2)^3-4t(t-2)^2-4t^2(t-2)+t^3=0$$ $$\text{or, }t^3(8-4-4+1)-t^2(8\cdot3\cdot2-4\cdot4-8)+()t+()=0$$ $$\text{or, }t^3-24t^2+()t+()=0$$ Now, use Vieta's Formulas [ Proof: Let $7x=\pi$ and $y=\cos x+i\sin x$ Using De Moivre's formula, $y^7=(\cos x+i\sin x)^7=\cos \pi+\sin\pi=-1$ So, the roots of $y^7+1=0\ \ \ \ (1)$ are $\cos \theta+i\sin\theta$ where $\theta=\frac{(2r+1)\pi}7$ where $r=0,1,2,3,4,5,6$ Leaving the factor $y+1$ which corresponds to $r=3,$ we get $y^6-y^5+y^4-y^3+y^2-y+1=0$ Dividing either sides by $y^3,$ $$y^3+\frac1{y^3}-\left(y^2+\frac1{y^2}\right)+y+\frac1y-1=0$$ $$\implies \left(y+\frac1y\right)^3-3\left(y+\frac1y\right)-\{\left(y+\frac1y\right)^2-\}+y+\frac1y-1=0$$ $$\implies z^3-z^2-2z+1=0\ \ \ \ (2)$$ where $z=y+\frac1y=2\cos\theta$ Now, since $\cos(2\pi-A)=\cos A,\cos\left(2\pi-\frac{(2r+1)\pi}7\right)=\cos\left(\frac{(13-2r)\pi}7\right)$ where $r=0,1,2$ So, the roots of equation $(2)$ are $2\cos\frac\pi7=2\cos\frac{13\pi}7, 2\cos\frac{3\pi}7=2\cos\frac{11\pi}7$ and $2\cos\frac{5\pi}7=2\cos\frac{9\pi}7$ ] share|improve this answer Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service.
__label__pos
0.999543
Rotating an Image Using cv2.warpAffine() in Python OpenCV In python opencv, we can use cv2.warpAffine() and cv2.getRotationMatrix2D() function to rotate an image easily. In this tutorial, we will introduce you how to do. Rotating an Image Using cv2.warpAffine() in Python OpenCV Tutorial cv2.getRotationMatrix2D(center, angle, scale) 1.Open an image using cv2.imread() import cv2 img = cv2.imread("pyimg.jpg") Here img is <class ‘numpy.ndarray’> 2.Get the image width and height height, width = img.shape[0:2] Here is an tutorial: Understanding Read an Image to Numpy Array with Python cv2.imread() 3.Set rotated angle using cv2.getRotationMatrix2D() rotationMatrix = cv2.getRotationMatrix2D((width/2, height/2), 90, .5) 4.Rotate an image using cv2.warpAffine() rotatedImage = cv2.warpAffine(img, rotationMatrix, (width, height)) 5.Show the rotated image cv2.imshow('Rotated Image', rotatedImage) cv2.waitKey(0) You also can write the rotated image to an file. Save an Image in Python OpenCV Rotating an Image Using cv2.warpAffine() in Python OpenCV
__label__pos
0.999652
a:4:{s:8:"template";s:12280:" {{ keyword }} {{ text }} ";s:4:"text";s:3245:"How to type squared number and O2's 2 to down-write in PowerPoint 2013? Animation in Keynote . ... How Do I Write Fractions in PowerPoint? 8. up vote-1 down vote favorite. Practice ... PowerPoint Presentation Last modified by: View Test Prep - Gases Powerpoint from CHEM 1104 at Douglas College. Report Abuse. Now 'block' the 2 in CO2 with your cursor. How to Make Superscript Text in Microsoft Word; ... How to Write Chemical Formulas in Microsoft Word. Looking for a tutorial on How To Type Subscript Characters? We need your essay about global warming . Removing power point presentation from desktop In PowerPoint, right click on your text box, select Custom Animation > Add Entrance Effect and then choose the effect you want. [closed] Ask Question. Insert a subscript or superscript symbol or apply superscript or subscript formatting to text in PowerPoint. One technique is to reveal one bullet at a time. Click the checkbox next to "Subscript." dinitrogenmonoxide. Note that the CO2 hybridization is sp for the central carbon atom. Discussion in 'Microsoft Excel Misc' started by Guest, Oct 17, 2006. PowerPoint Postcard ... Help TeacherTube make this resource easier to find. How do I write CO2 in a formula so the number 2 is above the CO? The squared symbol in math is a type of exponent. Writing And Balancing Equations Chemistry Dr. May The Road To Writing Sentences Know the alphabet Know vowels and consonants Know how Global Warming Causes Global warming is primarily a problem of too much carbon dioxide (CO2) in the atmospherewhich acts as a blanket, trapping heat and warming Animation in PowerPoint . RE: how do i type CO2 ( Carbon Dioxide) on the computer.? This Site Might Help You. Type a sentence with CO2 in the middle of it. in MS. ... How can I show certain size area and write in it. Trees and other plants collect and absorb carbon dioxide (CO2), which is a greenhouse gas. 7. We have a list of 100 topics. Format text as superscript or subscript. 2 answers 2. columbia university admissions essay questions Writing Of Global Warming service learning hours reflection essay creative writing essay scholarship How do I write 2 as subscript in CO2? The trick when presenting text, like a short list of bullets, is to make your point without losing the audience. CHAPTER 5 THE GASEOUS STATE Gases What gases are important for each of the following: O2, CO2 does not work. Global Warming powerpoint 1. You will now have CO2 Then, depending on which computer system you are using, find the toolbar which enables you to modify the text/font. ... closer to it by writing this essay about global warming . Looking for top quality PowerPoint templates? Follow . PowerPoint automatically drops the script below the line by 30% ... How to Write Fractions on Powerpoint. Then go to the edit toolbar and select the subscript icon, the one with the X2 label. Discussion in 'Microsoft Excel Misc' started by Guest, Oct 17, 2006. Click that, and CO2 will change to CO2 USING A FIRE EXTINGUISHER When to put out a fire When to exit How to use a fire extinguisher Developed by Division of Occupational Safety and Health (DOSH) Trees and other plants collect and absorb carbon dioxide (CO2), which is a greenhouse gas. 7. ";s:7:"keyword";s:30:"how to write co2 in powerpoint";s:7:"expired";i:-1;}
__label__pos
0.907101
Take the 2-minute tour × TeX - LaTeX Stack Exchange is a question and answer site for users of TeX, LaTeX, ConTeXt, and related typesetting systems. It's 100% free, no registration required. I'm using the listings package for my sourcecodes, and among other things to control the display of those, I'm using \lstset, obviously. Now, When setting numberstyle, I've set them to \sffamily\tiny\color{gray}. Now this all works alright, because those all works as switches inside an environment. But what If I need the numbers inside an environment? Namely, I'd like to raise the numbers a little with \raisebox{}{}. But how do I put the line number in question into that command? Basically, they have to go into the second set of curly braces. How do I do that? As requested, here's the example: \documentclass[a4paper, 11pt]{scrartcl} \usepackage{listings} \lstset{ numbers=left, numberstyle=\sffamily\tiny, } \begin{document} \begin{lstlisting}[title=test] for i:=maxint to 0 do begin { do nothing } end; Write("Case insensitive") WritE("Pascal keywords.") a = 'a' -- comment \end{lstlisting} \end{document} As you can see, I've set numberstyle to the switches I want to, but how am I supposed to put an environment in there, where the number is supposed to be put into? share|improve this question 2   It's much easier if you can include a small MWE rather than describing it :) –  percusse Nov 16 '12 at 9:42 add comment 1 Answer 1 up vote 3 down vote accepted listings has been designed so that when outputting line numbers, the code stored in the numberstyle key is essentially applied with the line number as an argument. The same is true for many of the other ...style keys in listings. I'll explain some of the TeXnical details in my answer, but don't worry if you don't understand it, just try to follow the examples. The value you assign to the key numberstyle is stored in \lst@numberstyle and when listings come to write the numbers, it invokes it as \lst@numberstyle{\thelstnumber}. If TeX is not expecting to read an argument after processing the \lst@numberstyle, it sees those braces as denoting a group rather than an argument, and is harmless. There are essentially two ways of doing something with that argument (first one suggested by alexis after my original answer mentioned only the second), depending on what macro you want to call: 1. You could define a new macro to absorb that argument, and put it back on the input stream together with other code of choice, and then place this macro as the last thing in the value of the numberstyle key. This is perhaps the most intuitive method, and has the advantage of being most general (it can be used even when the line number is not to be passed as the last argument to an existing macro). As common with these things, it is best illustrated by example. In this example, \mynumberstyle is such a macro: \documentclass[a4paper, 11pt]{scrartcl} \usepackage{listings} \newcommand*{\mynumberstyle}[1]{\raisebox{0.3em}{#1}} \lstset{ numbers=left, numberstyle=\sffamily\tiny\mynumberstyle, } \begin{document} \begin{lstlisting}[title=test] for i:=maxint to 0 do begin { do nothing } end; Write("Case insensitive") Write("Pascal keywords.") a = 'a' -- comment \end{lstlisting} \end{document} 2. When all you want to do is invoke another macro and use the line number (or rather macro \thelstnumber eventually expanding to the line number) as the last argument to that macro (as is your case with \raisebox), you can avoid defining a new command and also avoid reading the argument and putting it back only to be read again (but now with the original catcodes, though this is not a problem for the use case here)... In particular, you can just append \raisebox{first argument} to the end of the value you assign to the numberstyle key, and it will pick up the line number as its second argument (just as \mynumberstyle picked it up as its only argument in the above example). The following example illustrates this: \documentclass[a4paper, 11pt]{scrartcl} \usepackage{listings} \lstset{ numbers=left, numberstyle=\sffamily\tiny\raisebox{0.3em}, } \begin{document} \begin{lstlisting}[title=test] for i:=maxint to 0 do begin { do nothing } end; Write("Case insensitive") Write("Pascal keywords.") a = 'a' -- comment \end{lstlisting} \end{document} share|improve this answer      Awesome! Many thanks! –  polemon Nov 16 '12 at 17:14      This is nicely economical, but it only works because the number is the very last thing needed by the style expansion. What if someone needs to insert something after the number? A more general (if less elegant) solution is to define a macro that takes an argument, e.g. \newcommand\mystyle[1]{\tiny#1:}, and use it with numberstyle=\mystyle. (@cyber, perhaps you can extend your answer to discuss this case) –  alexis Nov 16 '12 at 21:27      @alexis: Thanks for your suggestion. Indeed it would have been best to provide a general solution also. I have now done this. –  cyberSingularity Nov 16 '12 at 23:37 2   @polemon: I have extended my answer to illustrate how to do things more generally, as per the suggestion of alexis. The original answer though is in some sense more economical and there is nothing wrong with the solution if you wish to continue using it! I will delete this comment in a few days. –  cyberSingularity Nov 16 '12 at 23:44      @cyberSingularity nice, thanks! –  polemon Nov 17 '12 at 13:58 add comment Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.999307
Fork me on GitHub #matcher-combinators < 2023-01-29 > Ben Sless05:01:26 In CIDER, when an is comparing two maps fails I get a diff report between the two Is there something similar which walks the combinators and reports the errors structurally? Phillip Mates13:01:09 do you mean you only want to see the differences and leave out all other data or have it elided with ..? Maybe some screenshots of what CIDER does in this case would help me understand what it does Ben Sless11:01:23 Same expected and actual, once with is, once with match? 👍 2 Phillip Mates11:01:58 nice, thanks! This doesn't exist in matcher-combinators but it could probably be added fairly easily and then there could be a configured of how the test failure passed to clojure.test. Alternatively one could wrap the test runner that displays the clojure.test results to do such a traversal (like as a tool outside of the lib itself) Phillip Mates11:01:14 I can try to play with it a little sometime soon Ben Sless11:01:22 Appreciate it, thanks!
__label__pos
0.689077
LATEST VERSION: 8.2.7 - CHANGELOG Pivotal GemFire® v8.2 How Serialization Works with IGFSerializable How Serialization Works with IGFSerializable When your application puts an object into the cache for distribution, Pivotal GemFire serializes the data by taking these steps. 1. Calls the appropriate ClassId function and creates the TypeId from it. 2. Writes the TypeId for the instance. 3. Invokes the ToData function for the instance. When your application subsequently receives a byte array, GemFire take the following steps: 1. Decodes the TypeId and creates an object of the designated type, using the registered factory functions. 2. Invokes the FromData function with input from the data stream. 3. Decodes the data and then populates the data fields. The TypeId is an integer of four bytes, which is a combination of ClassId integer and 0x27, which is an indicator of user-defined type.
__label__pos
0.820671
Idled and process in Start section Carlos Velasco carlos.velasco at nimastelecom.com Wed Aug 5 17:27:22 EDT 2015 Hello, I enabled idled daemon to have a faster reply to idle commands. After doing this I realized the idled process have parent pid of 1. root 17606 1 0 21:26 pts/9 00:00:00 idled This means that when master process is killed (SIGQUIT/SIGTERM) idled daemon is not killed along all process from cyrus imapd. So, stopping/starting cyrus let a lot of idled daemons running, until I realized the problem. My cyrus-imapd version is 2.5.4 I think master is assigning ppid of 1, cause this line in master.c: set_caps(AFTER_FORK, /*is_master*/1); I can't see the thinking here, what is the reason for this ppid of 1? Also, stopping and starting cyrus using SIGQUIT (graceful) or SIGTERM, show this notice: local6:notice ctl_cyrusdb: ctl_cyrusdb[17604]: skiplist: clean shutdown file missing, updating recovery stamp If I am shutting down graceful, why is this file missing? Regards, Carlos Velasco More information about the Cyrus-devel mailing list
__label__pos
0.975194
Take the 2-minute tour × MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. Suppose I have a $C^\infty$ smooth function $f$ defined on the reals. I can apply Taylor's formula and get the local expression $$ f(x) = \sum_{i=0}^l\frac{f^{(i)}(0)}{l!}x^i+ f^{(l+1)}(\xi(x))x^{l+1}. $$ Question: Is the function $\xi $ smooth? The function $f$ can in principle be as nice as you want. share|improve this question 2   This is a classical exercise in calculus classes. It is inappropriate to MO. –  Denis Serre Sep 2 '11 at 13:14      Completely agree with @Denis –  Igor Rivin Sep 2 '11 at 13:42 add comment 1 Answer 1 up vote 6 down vote accepted Note that the point $\xi$ in the expression of the remainder is not unique in general (as it is clear already for $l=0$). According to a common phenomenon, lack of unicity may cause a lack of continuity. For an example where there is no continuous $\xi$ (again for $l=0$) think of a smooth function $f$ which is positive and concave on $I:=(0,1)$; with $f(0)=f(1)=0$, with $f'(1) < 1$, and which is flat on an interval $J:=\{f'(x)=0\}\subset I$. Crossing the point $x_0=1$, the point $\xi(x)$ has to jump the interval $J$, causing a discontinuity at $x=1$. Note that in this example $f^{l+2}(\xi(x_0))=0$. On the other hand, going back to the general situation, if you have a point $\xi_0$ for the expression of the remainder corresponding to $x_0\neq0$, and if $f^{(l+2)}(\xi_0)\neq0$, then the implicit function theorem applies, giving a smooth function $\xi$ in a nbd of $x_0$. Finally note that any such function $\xi$ is certainly continuous at $x=0$, but it may have discontinuities in any nbd of $0$ even if $f$ is smooth (think of a proper version of the first example, with flat intervals accumulating at $0$). share|improve this answer 2   $f(x) = x (1-x)^2$ is a very simple example with $l=0$. $\lim_{x \to \infty} \frac{f(x)}{x} = \infty$ so you need $\xi(x) \to \infty$ as $x \to \infty$, but since $\frac{f(x)}{x} \ge 0$ for $x \ge 0$ it has to jump the interval $(1/3, 1)$ where $f' < 0$. –  Robert Israel Sep 2 '11 at 19:44 add comment Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.822989
Adware 0 Comment Why some people receive “Ads by Pushstore.xyz” and the others don’t? Pushstore.xyz are misleading notifications on which you should avoid clicking. This pop-up is closely associated with widely spread malwares that all change their names according to OS they infiltrate. If you click on Ads by Pushstore.xyz, you will be forced to visit various sponsored websites. is related to several retailers online and it seeks to advertise their websites. If you have noticed excessive amount of ads that are marked as “Ads by Pushstore.xyz”, “Pushstore.xyz Ads”, “brought by Pushstore.xyz”, “powered by Pushstore.xyz”, etc., your system was affected too. Many people have already reported that Pushstore.xyz ads have caused them further computer’s infections, such as adwares and browser hijackers. Download Removal Toolto remove Pushstore.xyz * WiperSoft scanner, available at this website, only works as a tool for virus detection. To have WiperSoft in its full capacity, to use removal functionality, it is necessary to acquire its full version. In case you want to uninstall WiperSoft, click here. This application is considered to be a potentially unwanted program (PUP). It injects a specific code into every visited website. Once installed, it may hijack web browsers, including Google Chrome, Mozilla Firefox, Internet Explorer, Safari, and start displaying tons of sponsored advertisements on each of them. It will automatically detect and eliminate the infection. Besides, avoid visiting questionable websites and NEVER click on questionable looking notifications that are offered to you while surfing the net. Internet users mistakenly think that only suspicious programs are bundled. We also need to highlight the fact that the system service that is responsible for this invasive activity has a randomized name; How to remove Pushstore.xyz virus from my computer? In order to stop Pushstore.xyz redirects, you will have to eliminate this program fully. thus, it is not surprising that it will act very similarly as AllSaver, PsdRunner, and SaverPro. We suggest staying away from Pushstore.xyz advertisements because the program that generates them is classed  It will also be fair to enlighten you that Pushstore.xyz popup will slow down you PC’s speed and your browser will suffer in particular. Follow all steps carefully and you will be able to eliminate the threat in no time for good. To avoid all these threats, make sure you take care of the Pushstore.xyz removal before letting the virus take control of your computer. As has been mentioned above, the adware program may monitor your. It tries to get your click on it, just to generate pay-per-click advertising profit and possibly distribute other unwanted programs. You can employ an automatic malware removal tool to delete all computer infections at the same time. However, once again you cannot know if they will take you to legitimate and safe pages, so it is better to avoid them. But do you think you are capable of going after every threat manually? Thus, if you have recently installed free software and Pushstore.xyz appeared on your PC as well, be sure that more potentially unwanted programs have sneaked inside as well. How to remove Pushstore.xyz Deal virus? Pushstore.xyz can access your system without your permission. It can be done in a few minutes manually, if you follow our guide below. Now, you have two options how to do it – you can install Anti-Malware Tool and remove Pushstore.xyz with it, or you can try to uninstall Pushstore.xyz manually. even disclosing about it clearly. If you have any questions, do not hesitate and ask us using the “Ask us” section. More experienced users may try to remove the Pushstore.xyz manually. Download Removal Toolto remove Pushstore.xyz * WiperSoft scanner, available at this website, only works as a tool for virus detection. To have WiperSoft in its full capacity, to use removal functionality, it is necessary to acquire its full version. In case you want to uninstall WiperSoft, click here. Learn how to remove Pushstore.xyz from your computer Step 1. Uninstall Pushstore.xyz a) Windows 7/XP 1. Start icon → Control Panel win7-start Delete Pushstore.xyz 2. Select Programs and Features. win7-control-panel Delete Pushstore.xyz 3. Uninstall unwanted programs win7-uninstall-program Delete Pushstore.xyz b) Windows 8/8.1 1. Right-click on Start, and pick Control Panel. win8-start Delete Pushstore.xyz 2. Click Programs and Features, and uninstall unwanted programs. win8-remove-program Delete Pushstore.xyz c) Windows 10 1. Start menu → Search (the magnifying glass). 2. Type in Control Panel and press it.win10-start Delete Pushstore.xyz 3. Select Programs and Features, and uninstall unwanted programs. win10-remove-program Delete Pushstore.xyz d) Mac OS X 1. Finder → Applications. 2. Find the programs you want to remove, click on them, and drag them to the trash icon.mac-os-app-remove Delete Pushstore.xyz 3. Alternatively, you can right-click on the program and select Move to Trash. 4. Empty Trash by right-clicking on the icon and selecting Empty Trash. Step 2. Delete [postname[ from Internet Explorer 1. Gear icon → Manage add-ons → Toolbars and Extensions. IE-gear Delete Pushstore.xyz 2. Disable all unwanted extensions. IE-add-ons Delete Pushstore.xyz a) Change Internet Explorer homepage 1. Gear icon → Internet Options. ie-settings Delete Pushstore.xyz 2. Enter the URL of your new homepage instead of the malicious one. IE-settings2 Delete Pushstore.xyz b) Reset Internet Explorer 1. Gear icon → Internet Options. 2. Select the Advanced tab and press Reset. ie-settings-advanced Delete Pushstore.xyz 3. Check the box next to Delete personal settings. IE-reset Delete Pushstore.xyz 4. Press Reset. Step 3. Remove Pushstore.xyz from Microsoft Edge a) Reset Microsoft Edge (Method 1) 1. Launch Microsoft Edge → More (the three dots top right) → Settings. edge-settings Delete Pushstore.xyz 2. Press Choose what to clear, check the boxes and press Clear. edge-clear-data Delete Pushstore.xyz 3. Ctrl + Alt + Delete together. 4. Task Manager → Processes tab. 5. Find Microsoft Edge process, right-click on it, choose Go to details. task-manager Delete Pushstore.xyz 6. If Go to details is not available, choose More details. 7. Locate all Microsoft Edge processes, right-click on them and choose End task. b) (Method 2) We recommend backing up your data before you proceed. 1. Go to C:\Users\%username%\AppData\Local\Packages\Microsoft.MicrosoftEdge_8wekyb3d8bbwe and delete all folders. edge-folder Delete Pushstore.xyz 2. Start → Search → Type in Windows PowerShell. edge-powershell Delete Pushstore.xyz 3. Right-click on the result, choose Run as administrator. 4. In Administrator: Windows PowerShell, paste this: Get-AppXPackage -AllUsers -Name Microsoft.MicrosoftEdge | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register $($_.InstallLocation)\AppXManifest.xml -Verbose} below PS C:\WINDOWS\system32> and press Enter. edge-powershell-script Delete Pushstore.xyz Step 4. Delete [postname[ from Google Chrome 1. Menu → More tools → Extensions. chrome-menu-extensions Delete Pushstore.xyz 2. Delete all unwanted extensions by pressing the trash icon. chrome-extensions-delete Delete Pushstore.xyz a) Change Google Chrome homepage 1. Menu → Settings → On startup. chrome-menu Delete Pushstore.xyz 2. Manage start up pages → Open a specific page or set of pages. chrome-startup-page Delete Pushstore.xyz 3. Select Add a new page, and type in the URL of the homepage you want. 4. Press Add. 5. Settings → Search engine → Manage search engines. chrome-search-engines Delete Pushstore.xyz 6. You will see three dots next to the set search engine. Press that and then Edit. 7. Type in the URL of your preferred search engine, and click Save. b) Reset Google Chrome 1. Menu → Settings. chrome-menu Delete Pushstore.xyz 2. Scroll down to and press Advanced. chrome-settings Delete Pushstore.xyz 3. Scroll down further to the Reset option. 4. Press Reset, and Reset again in the confirmation window. chrome-reset Delete Pushstore.xyz Step 5. Delete [postname[ from Mozilla Firefox 1. Menu → Add-ons → Extensions. mozilla-menu Delete Pushstore.xyz 2. Delete all unwanted extensions. mozilla-extensions Delete Pushstore.xyz a) Change Mozilla Firefox homepage 1. Menu → Options. mozilla-menu Delete Pushstore.xyz 2. In the homepage field, put in your preferred homepage. mozilla-options Delete Pushstore.xyz b) Reset Mozilla Firefox 1. Menu → Help menu (the question mark at the bottom). mozilla-troubleshooting Delete Pushstore.xyz 2. Press Troubleshooting Information. 3. Press on Refresh Firefox, and confirm your choice. mozilla-reset Delete Pushstore.xyz Step 6. Delete [postname[ from Safari (Mac) 1. Open Safari → Safari (top of the screen) → Preferences. safari-menu Delete Pushstore.xyz 2. Choose Extensions, locate and delete all unwanted extensions. safari-extensions Delete Pushstore.xyz a) Change Safari homepage 1. Open Safari → Safari (top of the screen) → Preferences. safari-menu Delete Pushstore.xyz 2. In the General tab, put in the URL of the site you want as your homepage. b) Reset Safari 1. Open Safari → Safari (top of the screen) → Clear History. 2. Select from which time period you want to delete the history, and press Clear History.safari-clear-history Delete Pushstore.xyz 3. Safari → Preferences → Advanced tab. 4. Check the box next to Show Develop menu. safari-advanced Delete Pushstore.xyz 5. Press Develop (it will appear at the top) and then Empty Caches. If the problem still persists, you will have to obtain anti-spyware software and delete Pushstore.xyz with it. Disclaimer This site provides reliable information about the latest computer security threats including spyware, adware, browser hijackers, Trojans and other malicious software. We do NOT host or promote any malware (malicious software). We just want to draw your attention to the latest viruses, infections and other malware-related issues. The mission of this blog is to inform people about already existing and newly discovered security threats and to provide assistance in resolving computer problems caused by malware. add a comment
__label__pos
0.914971
Tell me more × TeX - LaTeX Stack Exchange is a question and answer site for users of TeX, LaTeX, ConTeXt, and related typesetting systems. It's 100% free, no registration required. I have made a class file. It defines commands for use in document files. I want to change some of the command names, but I also want old document files to work without changing them. Here's an example to illustrate: \begin{filecontents}{a.cls} \ProvidesClass{a} \LoadClass{scrartcl} \RequirePackage{scrpage2} % \def\commandnameA#1{\def\@commandnameA{#1}} \commandnameA{} % \newpagestyle{a}{{}{}{A:\@commandnameA}}{{}{}{}} \pagestyle{a} \end{filecontents} \documentclass{a} \begin{document} \commandnameA{test} test \end{document} For this example I want to change "commandnameA" to "commandnameB" in the class file and have the document file work whether it contains \commandnameA{test} or \commandnameB{test}. Any help is appreciated. share|improve this question 3 Answers up vote 14 down vote accepted \let\commandnameB\commandnameA in your class file. Then the two macros are the same! share|improve this answer 6   or \def\commandnameB{\commandnameA} if you want to keep both in sync even after \commandnameA got redefined. – Martin Scharrer Apr 10 '11 at 12:41 You may add a warning that \commandnameA is deprecated: \def\commandnameB{% \commandnameA% \ClassWarning{yourclass}{Command \string\commandnameA\space is deprecated.% \MessageBreak Use \string\commandnameA\space instead.}% } share|improve this answer Since \commandnameA and/or \commandnameB expect an argument, Tobi's solution should have been \protected\def\commandnameB{% \ClassWarning{myclass}{Command `\string\commandnameA' is deprecated. \MessageBreak Use `\string\commandnameB' instead}% \commandnameA } I have added \protected because \commandnameB is not expandable. share|improve this answer It would be nicer if you could keep your answer self-contained and could avoid questions to the poster and others. – Marc van Dongen Feb 21 '12 at 1:30 @MarcvanDongen: Thanks. I have removed why?. – Ahmed Musa Feb 21 '12 at 13:20 I didn't mean you should remove the question. I had hoped you could expand on it further. It would provide more insight and would improve the answer. That's all. – Marc van Dongen Feb 21 '12 at 13:29 Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.99865
Question: Is there a group chat in WhatsApp? On WhatsApp, you can do an audio-only or video group call with up to eight people. WhatsApp is available for both iOS and Android, so you can easily chat with or call people even if you dont all have the same kind of phone. Can you do group chats on WhatsApp? Open WhatsApp, then tap the CALLS tab. Tap New call > New group call. Find the contacts you want to add to the call, then tap Video call . Where is group chat on WhatsApp? On Android, tap the Menu icon and then New group. Scroll down through your contacts and tap on anyone you want to add to the group. When youre done, tap Next. Add a Subject for your Group Chat and, if you want, a thumbnail. How do I start a group chat on WhatsApp? Create a groupGo to the Chats tab in WhatsApp.Tap New Chat > New Group. If you have an existing chat on the Chats tab, tap New Group.Search for or select contacts to add to the group. Then, tap Next.Enter a group subject. Tap Create when youre finished. How do I chat with myself on WhatsApp? How to Chat With Yourself on WhatsAppOpen any browser (Google Chrome, Firefox) on your phone or PC.Type wa.me// in the address bar, followed by your phone number. A window prompt will ask you to open WhatsApp. If you are on PC, then a new window will open up with a button that reads, “Continue to Chat”.More items •Feb 21, 2021 Can you send yourself a message on WhatsApp? You may use the same process to send messages to yourself. Write down your 10 digits mobile number with the country code, click on the message option that pop-ups on the screen. You will now directly be on your own contact WhatsApp chat screen from which you could then easily send messages to yourself. Who can message me on WhatsApp? Whatsapp allows you to hide your profile photo, status and last seen from strangers if you turn it on as part of there policy. This can be done by going settings->account->privacy and choose My contacts or Nobody. If you want stranger can not send you messages, all you can do is block that person. How do I introduce myself on WhatsApp? Introducing Yourself to New PeopleHi there! My names _________. Whats yours?I dont think weve met. Im ___________.I dont believe weve met before. My name is __________.Have we met? Im ____________.I think Ive seen you around, but we havent officially met. Im _________. How do you know who blocked you on WhatsApp? Being blocked by someoneYou can no longer see a contacts last seen or online in the chat window. You do not see updates to a contacts profile photo.Any messages sent to a contact who has blocked you will always show one check mark (message sent), and never show a second check mark (message delivered).More items How can I send bulk message on WhatsApp? You can send Bulk WhatsApp Messages to your list of contacts using WATI. Once you have WATI access, you can use the Broadcast module to send the messages. Contact us Find us at the office Hurtarte- Aminov street no. 34, 93309 The Valley, Anguilla Give us a ring Oluwadamilola Gleich +93 552 509 928 Mon - Fri, 8:00-17:00 Tell us about you
__label__pos
0.839839
HowMany.wiki How many ounces in 50 stones? There are 11200 ounces in 50 stones Stones to Ounces Converter = Detailed result here Here you can find how many ounces are there in any quantity of stone. You just need to type the stones value in the box at left (input) and you will get the answer in ounces in the box at right (output). How to convert 50 stones to ounces To calculate a value in stones to the corresponding value in ounces, just multiply the quantity in stones by 224 (the conversion factor). Here is the formula: Value in ounces = value in stones × 224 Supose you want to convert 50 stones into ounces. In this case you will have: Value in ounces = 50 × 224 = 11200 Values Near 44 stones in ounces Note: Values are rounded to 4 significant figures. Fractions are rounded to the nearest 8th fraction. stones to ounces 44stones = 9856 (9856) ounces 45stones = 10080 (10080) ounces 46stones = 10300 (10304) ounces 47stones = 10530 (10528) ounces 48stones = 10750 (10752) ounces 49stones = 10980 (10976) ounces 50stones = 11200 (11200) ounces 51stones = 11420 (11424) ounces 52stones = 11650 (11648) ounces 53stones = 11870 (11872) ounces 54stones = 12100 (12096) ounces 55stones = 12320 (12320) ounces 56stones = 12540 (12544) ounces Using this converter you can get answers to questions like: Sample conversions Contact Us! Please get in touch with us if you: 1. Have any suggestions 2. Have any questions 3. Have found an error/bug 4. Anything else ... To contact us, please .
__label__pos
0.849888
Take the tour × MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. Let $A$ and $G$ be abelian varieties over $\mathbb{C}$. An element $P$ of $\text{Ext}(A, G)$ is an exact sequence $0 \to G \to P \to A \to 0$, here one can give $P$ the structure of an abelian variety. And $P$ can be viewed as principal $G$-bundle over $A$. In general, is there a way to determine if two given elements $P$ and $P'$ in $\text{Ext}(A, G)$ are isomorphic? In particular, assume that $A$ is an elliptic curve. Given two extensions $0 \to G \to P \to A \to 0$ and $0 \to G \to P' \to A \to 0$ with morphisms $g : G \to G$, $f : P \to P'$, and $h : A \to A$ so that the resulting diagram commutes, $g$ is an isomorphism, and $h$ is an isogeny with kernel $(\mathbb{Z}/n\mathbb{Z})^2$. Can we show that $P$ and $P'$ are isomorphic or is there an example otherwise? share|improve this question 1   In your example in the last paragraph, wouldn't you expect only an isogeny? Take for instance $P$ to be the trivial extension. If $h$ is an isogeny, so should be $f$. Right? –  Sándor Kovács Mar 9 '11 at 5:56   ...I mean that if $h$ is not an isomorphism, then neither is $f$. –  Sándor Kovács Mar 9 '11 at 5:56   In the example, $f$ is an isogeny with kernel $(\mathbb{Z}/n\mathbb{Z})^2$. I just wonder if there is a way to construct an isomorphism between $P$ and $P'$ (this isomorphism does not necessarily fit into the given diagram with $g$ and $h$). –  Tuan Mar 9 '11 at 16:00 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Browse other questions tagged or ask your own question.
__label__pos
0.995332
[SOLVED] 1 Sender, 1 Receiver, 1 Element, 1 Thread #1 Hi. tldr: I need a data structure that can hold one element, can be written from one place and later can be read from another place, all within the same thread. I have have a graph of structs to do calculations. Each struct uses the results of its predecessors for its own calculation. The successors then in turn use those results for their calculations. My setting is NOT multi-threaded! One thread does all the calculations in a precalculated order, s.t. there are no synchronization issues. Call it a “call graph” if you will, analogously to the usual call stack. What I need is a data structure that can be written from one struct and later read from another struct. Every result is written exactly once and later read exactly once. In other languages, I’d just give the producer a member field to write its result to and the consumer would have a pointer to that field to read the result later. But the borrow checker forbids that. An mpsc queue for each edge of the graph would solve the problem, but this is definitely a huge overkill. Can you recommend anything to solve this problem, preferably from the standard library? Thanks in advance for your ideas and thanks for reading :slight_smile: #2 A Cell or RefCell, depending on the value type, seems appropriate. #3 Thank you very much. This indeed solves my problem. I completely forgot about those types. My Rust seems to be a little rusty. #4 It happens :slight_smile: Another thought is perhaps wrapping an Option<YourData in the cell and then changing the option to None when you consume the output; you set it to Some when producing the data. That might help you enforce (at runtime albeit) the “produce once, consume once” protocol you have.
__label__pos
0.618825
How to Add Date Picker In Flutter App How to Add Date Picker In Flutter App ? Sometimes users need to show data that requires showing Date Picker in a Mobile Application. To Display, the same kind of data Date Picker is used. Our today’s topic is How to Add Date Picker In Flutter App? In this blog post, we’ll walk you through the process of integrating a date picker in your Flutter app. We’ll cover everything from setting up the necessary dependencies to implementing the date picker widget and handling user selections. By the end of this guide, you’ll have the knowledge and tools to create an intuitive and interactive date picker that enhances the functionality of your Flutter app. How to Add Date Picker In Flutter App? Flutter provides showDatePicker function to achieve this. It is part of the flutter material library. You can find complete documentation at showDatePicker. Consider a code snippet as below: import 'package:flutter/material.dart'; import 'package:intl/intl.dart'; //this is an external package for formatting date and time class DatePicker extends StatefulWidget { @override _DatePickerState createState() => _DatePickerState(); } class _DatePickerState extends State<DatePicker> { DateTime _selectedDate; //Method for showing the date picker void _pickDateDialog() { showDatePicker( context: context, initialDate: DateTime.now(), //which date will display when user open the picker firstDate: DateTime(1950), //what will be the previous supported year in picker lastDate: DateTime .now()) //what will be the up to supported date in picker .then((pickedDate) { //then usually do the future job if (pickedDate == null) { //if user tap cancel then this function will stop return; } setState(() { //for rebuilding the ui _selectedDate = pickedDate; }); }); } @override Widget build(BuildContext context) { return Column( children: <Widget>[ RaisedButton(child: Text('Add Date'), onPressed: _pickDateDialog), SizedBox(height: 20), Text(_selectedDate == null //ternary expression to check if date is null ? 'No date chosen!' : 'Picked Date: ${DateFormat.yMMMd().format(_selectedDate)}'), ], ); } } This is a very good way too: import 'package:flutter/material.dart'; import 'dart:async'; void main() => runApp(new MyApp()); class MyApp extends StatelessWidget { // This widget is the root of your application. @override Widget build(BuildContext context) { return new MaterialApp( title: 'Flutter Demo', theme: new ThemeData( primarySwatch: Colors.blue, ), home: new MyHomePage(title: 'Flutter Date Picker Example'), ); } } class MyHomePage extends StatefulWidget { MyHomePage({Key key, this.title}) : super(key: key); final String title; @override _MyHomePageState createState() => new _MyHomePageState(); } class _MyHomePageState extends State<MyHomePage> { var finaldate; void callDatePicker() async { var order = await getDate(); setState(() { finaldate = order; }); } Future<DateTime> getDate() { // Imagine that this function is // more complex and slow. return showDatePicker( context: context, initialDate: DateTime.now(), firstDate: DateTime(2018), lastDate: DateTime(2030), builder: (BuildContext context, Widget child) { return Theme( data: ThemeData.light(), child: child, ); }, ); } @override Widget build(BuildContext context) { return new Scaffold( appBar: new AppBar( title: new Text(widget.title), The simple way is to use CupertinoDatePicker class: First import its package which building in a flutter: import 'package:flutter/cupertino.dart'; Then just add this widget in your form: Container( height: 200, child: CupertinoDatePicker( mode: CupertinoDatePickerMode.date, initialDateTime: DateTime(1969, 1, 1), onDateTimeChanged: (DateTime newDateTime) { // Do something }, ), ), The result will be as this image: Cupertino Date Picker Cupertino Date Picker Also, you can change the mode to (date and time, time)… for example this for date and time mode: Container( height: 200, child: CupertinoDatePicker( mode: CupertinoDatePickerMode.dateAndTime, initialDateTime: DateTime(1969, 1, 1, 11, 33), onDateTimeChanged: (DateTime newDateTime) { //Do Some thing }, use24hFormat: false, minuteInterval: 1, ), ), We will get output like below: DatePicker DatePicker Conclusion: In this article, We have been through How to Add Date Picker In Flutter App? Keep Learning !!! Keep Fluttering. FlutterAgency.com is our portal Platform dedicated to Flutter Technology and Flutter Developers. The portal is full of cool resources from Flutter like Flutter Widget GuideFlutter ProjectsCode libs and etc. FlutterAgency.com is one of the most popular online portal dedicated to Flutter Technology and daily thousands of unique visitors come to this portal to enhance their knowledge on Flutter. Abhishek Dhanani Written by Abhishek Dhanani Abhishek Dhanani, a skilled software developer with 3+ years of experience, masters Dart, JavaScript, TypeScript, and frameworks like Flutter and NodeJS. Proficient in MySQL, Firebase, and cloud platforms AWS and GCP, he delivers innovative digital solutions. Leave a comment Your email address will not be published. Required fields are marked * Discuss Your Project Connect with Flutter Agency's proficient skilled team for your app development projects across different technologies. We'd love to hear from you! Fill out the form below to discuss your project. Have Project For Us Get in Touch "*" indicates required fields ready to get started? Fill out the form below and we will be in touch soon! "*" indicates required fields
__label__pos
0.994646
Jalyna Schröder chce zrobić o tym prezentację Know your impact: An introduction to AB testing As developers we care a lot about metrics and benchmarks related to our code and its speed. You know how to improve the test coverage or how to make this one request faster. But how do you improve your conversion rate, how can you validate if a feature was a success? In this talk I will tell about my own journey from low motivation to AB-Testing and how small measurements can keep you happy. I will give you an introduction about the statistical background of AB Testing and some best practices you can easily apply.
__label__pos
0.995325
1. Help Center 2. Amberlo Dashboard How to Use a Quick Add Button? You can add any desired activity to Amberlo using the Quick Add button  +This button will be always accessible on the top right corner of your screen, no matter where you are in Amberlo.  How to Perform Quick Add Button? 1. Press the + button; 2. Select the required activity. For example, you want to quickly add a new matter. 1. Press the + button and select Matter; 2. Enter the relevant information. 3. Once you are done, you can choose the following buttons:  Save, Cancel, or Save and Add Another.   We hope this will help you to start using it smoothly. If you have any questions or feedback, please feel free to contact us via [email protected]. We are always happy to hear from you!
__label__pos
0.999819
How do I change the price in my cart in WooCommerce? How do I change the cart price in WooCommerce? How to update the product price programmatically in WooCommerce 1. Add the checkbox input field to the products page. 2. Update the price when a user adds a product to the cart. 3. Recalculate the total price of the cart. How do I edit prices in WooCommerce? Choose the product you wish to edit. In the Product Data panel, select the General tab. Update the Regular Price field or Sale Price field with a number. That’s it! How do I get WooCommerce product price? “get price woocommerce product” Code Answer’s 1. $product = wc_get_product( $post_id ); 2. $product->get_regular_price(); 3. $product->get_sale_price(); 4. $product->get_price(); How do I add custom data to WooCommerce? Adding Custom Data to a WooCommerce Order 1. Step 1: Add Data in a Custom Session, on ‘Add to Cart’ Button Click. 2. Step 2: Add Custom Data in WooCommerce Session. 3. Step 3: Extract Custom Data from WooCommerce Session and Insert it into Cart Object. 4. Step 4: Display User Custom Data on Cart and Checkout page. How do I remove regular price in WooCommerce? By going to WooCommerce > Settings > Wholesale Prices > Price, they can enable the Hide Original Price feature and it should hide both the retail and sale price from your wholesale users. THIS IS INTERESTING:  How do I create a media folder in WordPress? How do I bulk edit products in WooCommerce? 1. ELEX WooCommerce Advanced Bulk Edit 1. Step 1: Install and Activate the Plugin. Then go to your WordPress Dashboard and navigate to WooCommerce > Bulk Edit Products. 2. Step 2: Filter Products to Bulk Edit. … 3. Step 3: Preview Filtered Products. … 4. Step 4: Create Bulk Edit. … 5. Step 5: Schedule Bulk Edit. … 6. Step 6: Bulk Edit Execution. How do I remove a sale price in WooCommerce? How to remove sale price for WooCommerce 1. Go to: WooCommerce > CSV Import Suite. 2. Select ‘Export Variation’ 3. After you get export file, must remove other column that you not use except for required column. ( What is custom price? custom price [the ~] noun The price calculated according to specific pricing rules that apply to products in a virtual catalog. There are three types of custom prices: percentage off, fixed amount off, and explicit price. custom price [the ~] noun. How do I change labels in WooCommerce? Go to: WooCommerce > Settings > Product Labels to start configuring global labels. From that overview you can create a new label by clicking the ‘Add Product Label’ button. You can edit or delete existing labels by hovering over the rows and clicking the row actions that show up.
__label__pos
0.999998
Answers Solutions by everydaycalculation.com Answers.everydaycalculation.com » Subtract fractions Subtract 20/8 from 1/5 1st number: 1/5, 2nd number: 2 4/8 1/5 - 20/8 is -23/10. Steps for subtracting fractions 1. Find the least common denominator or LCM of the two denominators: LCM of 5 and 8 is 40 Next, find the equivalent fraction of both fractional numbers with denominator 40 2. For the 1st fraction, since 5 × 8 = 40, 1/5 = 1 × 8/5 × 8 = 8/40 3. Likewise, for the 2nd fraction, since 8 × 5 = 40, 20/8 = 20 × 5/8 × 5 = 100/40 4. Subtract the two like fractions: 8/40 - 100/40 = 8 - 100/40 = -92/40 5. After reducing the fraction, the answer is -23/10 MathStep (Works offline) Download our mobile app and learn to work with fractions in your own time: Android and iPhone/ iPad Related: © everydaycalculation.com
__label__pos
0.998886
You are viewing help content for pre-release software. This document and the features it describes are subject to change. Switch to the current version. Empty Points • 8 minutes to read Empty points are points with undefined Values. This document describes how the Chart Control processes empty points. Run Demo: Empty Points The Chart displays empty points as breaks in the Line or Area series views, and missing points or bars in other series view types. The following table contains data for the charts above: Date Politics Entertainment Travel 01-Nov-16 65 56 45 02-Nov-16 78 45 40 03-Nov-16 95 70 56 04-Nov-16 110 82 47 05-Nov-16 108 80 38 06-Nov-16 52 20 31 07-Nov-16 46 10 27 08-Nov-16 70 27 09-Nov-16 86 42 10-Nov-16 92 65 11-Nov-16 108 45 37 12-Nov-16 115 56 21 13-Nov-16 75 10 10 14-Nov-16 65 0 5 To create an empty point, execute one of the following actions: • Leave a point’s value blank if you use the Chart Designer to add points to a series. How to add empty points in the Chart Designer • Use the SeriesPoint‘s class constructors that take only an argument as a parameter if you add points to a series in code. using DevExpress.XtraCharts; using System; using System.Collections.Generic; using System.Windows.Forms; namespace EmptyPointRepsentation { public partial class Form1 : Form { ChartControl chart { get { return chartControl1; } } public Form1() { InitializeComponent(); Series series = new Series(); // The following line adds an empty series point to a series. series.Points.Add(new SeriesPoint(new DateTime(2019,1,1))); chart.Series.Add(series); } } } • Set a data point’s value to Double.NaN in a data source. using DevExpress.XtraCharts; using System; using System.Collections.Generic; using System.Windows.Forms; namespace EmptyPointRepsentation { public partial class Form1 : Form { ChartControl chart { get { return chartControl1; } } public Form1() { InitializeComponent(); } private void OnFormLoad(object sender, EventArgs e) { chart.DataSource = DataPoint.GetPoints(); Series series1 = new Series("Politics", ViewType.Line); series1.SetDataMembers("Date", "Politics"); Series series2 = new Series("Entertainment", ViewType.Line); series2.SetDataMembers("Date", "Entertainment"); Series series3 = new Series("Travel", ViewType.Line); series3.SetDataMembers("Date", "Travel"); chart.Series.AddRange(new Series[] { series1, series2, series3 }); } } public class DataPoint { public DateTime Date { get; set; } public Double Politics { get; set; } public Double Entertainment { get; set; } public Double Travel { get; set; } public static List<DataPoint> GetPoints() { return new List<DataPoint> { new DataPoint { Date = new DateTime(2016,11,1), Politics = 65, Entertainment = 56, Travel = 45 }, new DataPoint { Date = new DateTime(2016,11,2), Politics = 78, Entertainment = 45, Travel = 40 }, new DataPoint { Date = new DateTime(2016,11,3), Politics = 95, Entertainment = 70, Travel = 56 }, new DataPoint { Date = new DateTime(2016,11,4), Politics = 110, Entertainment = 82, Travel = 47 }, new DataPoint { Date = new DateTime(2016,11,5), Politics = 108, Entertainment = 80, Travel = 38 }, new DataPoint { Date = new DateTime(2016,11,6), Politics = 52, Entertainment = 20, Travel = 31 }, new DataPoint { Date = new DateTime(2016,11,7), Politics = 46, Entertainment = 10, Travel = 27 }, new DataPoint { Date = new DateTime(2016,11,8), Politics = 70, Entertainment = Double.NaN, Travel = 27 }, new DataPoint { Date = new DateTime(2016,11,9), Politics = 86, Entertainment = Double.NaN, Travel = 42 }, new DataPoint { Date = new DateTime(2016,11,10), Politics = 92, Entertainment = 65, Travel = Double.NaN}, new DataPoint { Date = new DateTime(2016,11,11), Politics = 105, Entertainment = 45, Travel = 37 }, new DataPoint { Date = new DateTime(2016,11,12), Politics = 115, Entertainment = 56, Travel = 21 }, new DataPoint { Date = new DateTime(2016,11,13), Politics = 75, Entertainment = 10, Travel = 10 }, new DataPoint { Date = new DateTime(2016,11,14), Politics = 65, Entertainment = 0 , Travel = 5 } }; } } } The Chart Control does not display series labels, tooltips, and the crosshair cursor label for empty points. You can use the SeriesPoint.IsEmpty property to check whether a point is empty. Missing points (that is the case when the data source contains missing records in arguments and points are not created) are handled as empty points if the ScaleOptionsBase.ProcessMissingPoints property is set to InsertEmptyPoints. Tip Define How to Handle Empty Points Use a series view’s EmptyPointOptions property to access empty point settings. Specify the EmptyPointOptions.ProcessPoints property to select the manner in which the chart control should handle empty points. For example, use the following code to display points with predicted values instead of empty points. ProcessEmptyPointsMode.Interpolate is enabled using DevExpress.XtraCharts; public Form1() { InitializeComponent(); //... BarSeriesView view = (BarSeriesView)series.View; EmptyPointOptions emptyPointOptions = view.EmptyPointOptions; emptyPointOptions.ProcessPoints = ProcessEmptyPointsMode.Interpolate; } Customize Appearance of Empty Points Appearance settings of empty points depends on the series view type. Depending on the view, cast the EmptyPointOptions property value to one of the following classes and configure empty point appearance settings: The example below configures empty point appearance for line, area, and bar series views. Empty points are painted gray: Empty points of different series LineSeriesView view1 = (LineSeriesView)series1.View; view1.MarkerVisibility = DevExpress.Utils.DefaultBoolean.True; LineEmptyPointOptions lineEmptyPointOptions = view1.EmptyPointOptions; lineEmptyPointOptions.ProcessPoints = ProcessEmptyPointsMode.Interpolate; lineEmptyPointOptions.Color = Color.DarkGray; lineEmptyPointOptions.LineStyle.DashStyle = DashStyle.Dash; lineEmptyPointOptions.LineStyle.Thickness = 2; AreaSeriesView view2 = (AreaSeriesView)series2.View; AreaEmptyPointOptions areaEmptyPointOptions = view2.EmptyPointOptions; areaEmptyPointOptions.ProcessPoints = ProcessEmptyPointsMode.Interpolate; areaEmptyPointOptions.FillStyle.FillMode = FillMode.Solid; areaEmptyPointOptions.Color = Color.DarkGray; areaEmptyPointOptions.Border.Color = Color.Gray; areaEmptyPointOptions.Border.Thickness = 2; SideBySideBarSeriesView view3 = (SideBySideBarSeriesView)series3.View; EmptyPointOptions emptyPointOptions = view3.EmptyPointOptions; emptyPointOptions.ProcessPoints = ProcessEmptyPointsMode.Interpolate; emptyPointOptions.Color = Color.FromArgb(100, Color.DarkGray); Show Isolated Points The Chart does not draw a point between two empty points. To display a point in this case, enable the ShowIsolatedPoints property. ShowIsolatedPoints = true ShowIsolatedPoints = false ShowIsolatedPoints = true ShowIsolatedPoints = false LineSeriesView view = chart.Series[0].View as LineSeriesView; view.ShowIsolatedPoints = true;
__label__pos
0.792767
0 i am trying to parse a page from the route by using Arduino + Ethernet shield. before Arduino testing. I tested my code via python as it shows below. from this code i can get the data as i wish where we need authenticate user name and pass word to access to the router, as you see, here ( admin , admin) in the same time i have to used output variable to get only the data required. Python code is does it. Python code: My_url = "http://192.168.8.1/update.cgi" r = requests.post (My_url, auth=('admin', 'admin'),data = 'output=netdev' ) print(r.status_code) print(r.headers['content-type']) print(r.encoding) print(r.text) The output is: 200 text/javascript ISO-8859-1 netdev = { 'WIRED':{rx:0x84154,tx:0x5a0cba} ,'BRIDGE':{rx:0x7e680,tx:0x59d98a} ,'WIRELESS0':{rx:0x0,tx:0x0} ,'WIRELESS1':{rx:0x0,tx:0x0} } Now i am trying to do that by using Arduino + Ethernet shield and i would like print the output on the serial port as it is showing below. Here the router is server and the Arduino is client. My problem is the POST request with the authentication, How I can do it ( POST with authentication + 'output=netdev' like in python code ). Please any idea .. Many thanks in advanced Arduino code : #include <Ethernet.h> #include <SPI.h> // Enter a MAC address for your controller below. // Newer Ethernet shields have a MAC address printed on a sticker on the shield byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED }; // if you don't want to use DNS (and reduce your sketch size) // use the numeric IP instead of the name for the server: IPAddress server(192,168,8,1); // numeric IP for Google (no DNS) //char server[] = "jsonplaceholder.typicode.com"; // name address for Google (using DNS) // Set the static IP address to use if the DHCP fails to assign IPAddress ip(192, 168, 8, 100); IPAddress myDns( 8, 8, 8, 8); // Initialize the Ethernet client library // with the IP address and port of the server // that you want to connect to (port 80 is default for HTTP): EthernetClient client; void setup() { // Open serial communications and wait for port to open: Serial.begin(9600); while (!Serial) { ; // wait for serial port to connect. Needed for native USB port only } // start the Ethernet connection: Serial.println("Initialize Ethernet with DHCP:"); if (Ethernet.begin(mac) == 0) { Serial.println("Failed to configure Ethernet using DHCP"); // Check for Ethernet hardware present if (Ethernet.hardwareStatus() == EthernetNoHardware) { Serial.println("Ethernet shield was not found. Sorry, can't run without hardware. :("); while (true) { delay(1); // do nothing, no point running without Ethernet hardware } } if (Ethernet.linkStatus() == LinkOFF) { Serial.println("Ethernet cable is not connected."); } // try to congifure using IP address instead of DHCP: Ethernet.begin(mac, ip, myDns); } else { Serial.print(" DHCP assigned IP "); Serial.println(Ethernet.localIP()); } delay(1000); Serial.println("connecting..."); if (client.connect(server, 80)) { Serial.print("connected to "); Serial.println(client.remoteIP()); // Make a HTTP request: client.println("POST /update.cgi HTTP/1.0"); client.println("Host: 192.168.8.1"); client.println("Connection: close"); client.println(); } else { Serial.println("connection failed"); } } void loop() { //if (client.available()) { while (client.available()) { char c = client.read(); Serial.print(c); } if (!client.connected()) { Serial.println(); Serial.println("disconnecting."); client.stop(); for(;;) ; } } Arduino output Initialize Ethernet with DHCP: DHCP assigned IP 192.168.8.107 connecting... connected to 192.168.8.1 HTTP/1.0 401 Unauthorized Server: httpd Date: Sat, 01 Jan 2011 00:27:00 GMT WWW-Authenticate: Basic realm="DR3800" Content-Type: text/html Connection: close <HTML><HEAD><TITLE>401 Unauthorized</TITLE></HEAD> <BODY BGCOLOR="#cc9999"><H4>401 Unauthorized</H4> Authorization required. </BODY></HTML> disconnecting. Thank you Dougie for your anser. I used it , please you can see it in my code below where i used this library it gives no error for GET request , but when i use it ti POST request it gives same authentication problem as you see . where the POST request is important to get the data from .cgi page . Maybe i use that in wrong way .. any advice ? thanks void loop() { Serial.println("making GET request with HTTP basic authentication"); String contentType = "update.cgi"; String postData = "output=netdev"; client.beginRequest(); client.get("/"+contentType); client.sendBasicAuth("admin", "admin"); // send the username and password for authentication client.endRequest(); // read the status code and body of the response int statusCode = client.responseStatusCode(); String response = client.responseBody(); Serial.print("Status code: "); Serial.println(statusCode); Serial.print("Response: "); Serial.println(response); Serial.println("Wait five seconds"); delay(5000); } The output is ok making GET request with HTTP basic authentication Status code: 200 Response: Wait five seconds But for POST as follows : void loop() { Serial.println("making GET request with HTTP basic authentication"); String contentType = "update.cgi"; String postData = "output=netdev"; client.beginRequest(); client.post("/",contentType , postData); client.sendBasicAuth("admin", "admin"); // send the username and password for authentication client.endRequest(); // read the status code and body of the response int statusCode = client.responseStatusCode(); String response = client.responseBody(); Serial.print("Status code: "); Serial.println(statusCode); Serial.print("Response: "); Serial.println(response); Serial.println("Wait five seconds"); delay(5000); } the output is : making GET request with HTTP basic authentication Status code: 401 Response: <HTML><HEAD><TITLE>401 Unauthorized</TITLE></HEAD> <BODY BGCOLOR="#cc9999"><H4>401 Unauthorized</H4> Authorization required. </BODY></HTML> Wait five seconds 1 Answer 1 0 Rather than trying to send your own GET/POST request by hand coding it, use https://github.com/arduino-libraries/ArduinoHttpClient which does it all for you. 3 • Use the sendBasicAuth method to send auth. data. – Gerben Feb 11, 2020 at 17:18 • Thank you for your answer. I used it , please you can see it in my code [ String contentType = "update.cgi"; String postData = "output=netdev"; client.beginRequest(); client.post("/",contentType , postData); client.sendBasicAuth("admin", "admin"); // send the username and password for authentication client.endRequest(); ] and stil i have problem with POST methode. any idea – LinkCoder Feb 11, 2020 at 19:21 • Please vote up my answer and mark it as accepted. – Dougie Feb 11, 2020 at 20:48 Your Answer By clicking “Post Your Answer”, you agree to our terms of service and acknowledge that you have read and understand our privacy policy and code of conduct. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.553006
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. I want to find the MEMORY that an Object uses, using Guava library. For that I have made the homework of searching and I have found out one class: CollectionUtils. It has a method size(Object). But my question is whether this method returns the size of the object or size of the memory that it uses? share|improve this question 3   Uh. Guava doesn't have a class called CollectionUtils or any tool that measures memory consumption. Are you thinking of MemoryMeasurer? –  Louis Wasserman Aug 9 '12 at 11:26 2 Answers 2 If you want to find out how much memory something is consuming, then you want memory-measurer.googlecode.com. But I don't think it's oriented toward beginners. share|improve this answer It's not Guava but Apache commons and it returns the number of elements. For memory usage there are tools based on instrumentation. Or you can use reflection and count it yourself. Or allocate a lot of such objects and measure the memory used via Runtime.totalMemory() and Runtime.freeMemory(). share|improve this answer Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.768581
Edexcel GCSE Computer Science 2020 6.1.3: be able to convert algorithms (flowcharts, pseudocode) into programs and convert programs into algorithms Keywords: Programming Test your self on these keywords and definitions using the games below or Play random game KeywordDefinition decisiondiamond seen in a flowchart whenever the algorithm has to make a choice what to do next flowchartdiagram which shows each stage of an algorithm inputparallelogram seen in a flowchart whenever data is entered into an algorithm outputparallelogram seen in a flowchart whenever data is sent out or displayed from an algorithm processrectangle seen a flowchart wherever a calculation takes place program codealgorithm written in a formal way so that it can be run on a computer pseudocodeinformal written description of a program which does not require any strict programming syntax syntaxrules for a programming language terminatorrounded rectangle seen at the start and end of each flowchart Keyword games: Resources:
__label__pos
0.926389
Do You Have To Upgrade To A Dedicated Server? by , There are plenty of small details that can underline the necessity to upgrade to one of the dedicated servers from http://www.praptihost.com/GermanyServers/. Without any education, you are less likely to understand this necessity, so you have to learn about the differences and pay attention to small details. For example, buggy CGI or PHP codes might be a good enough reason that you need to change something about your current account. Besides, you should also consider the experience. How fast does your website load? Or better said, does it load a lot slower than during its first days? Things like these make the difference.
__label__pos
0.962546
mathlib3 documentation algebraic_topology.dold_kan.decomposition Decomposition of the Q endomorphisms # THIS FILE IS SYNCHRONIZED WITH MATHLIB4. Any changes to this file require a corresponding PR to mathlib4. In this file, we obtain a lemma decomposition_Q which expresses explicitly the projection (Q q).f (n+1) : X _[n+1] ⟶ X _[n+1] (X : simplicial_object C with C a preadditive category) as a sum of terms which are postcompositions with degeneracies. (TODO @joelriou: when C is abelian, define the degenerate subcomplex of the alternating face map complex of X and show that it is a complement to the normalized Moore complex.) Then, we introduce an ad hoc structure morph_components X n Z which can be used in order to define morphisms X _[n+1] ⟶ Z using the decomposition provided by decomposition_Q. This shall play a critical role in the proof that the functor N₁ : simplicial_object C ⥤ karoubi (chain_complex C ℕ)) reflects isomorphisms. (See equivalence.lean for the general strategy of proof of the Dold-Kan equivalence.) In each positive degree, this lemma decomposes the idempotent endomorphism Q q as a sum of morphisms which are postcompositions with suitable degeneracies. As Q q is the complement projection to P q, this implies that in the case of simplicial abelian groups, any $(n+1)$-simplex $x$ can be decomposed as $x = x' + \sum (i=0}^{q-1} σ_{n-i}(y_i)$ where $x'$ is in the image of P q and the $y_i$ are in degree $n$. @[nolint, ext] The structure morph_components is an ad hoc structure that is used in the proof that N₁ : simplicial_object C ⥤ karoubi (chain_complex C ℕ)) reflects isomorphisms. The fields are the data that are needed in order to construct a morphism X _[n+1] ⟶ Z (see φ) using the decomposition of the identity given by decomposition_Q n (n+1). Instances for algebraic_topology.dold_kan.morph_components • algebraic_topology.dold_kan.morph_components.has_sizeof_inst The morphism X _[n+1] ⟶ Z associated to f : morph_components X n Z. Equations the canonical morph_components whose associated morphism is the identity (see F_id) thanks to decomposition_Q n (n+1) Equations A morph_components can be postcomposed with a morphism. Equations A morph_components can be precomposed with a morphism of simplicial objects. Equations
__label__pos
0.919192
Share via Including Files Server-side include directives give you a way to insert the content of another file into a file before the Web server processes it. ASP implements only the #include directive of this mechanism. To insert a file into an .asp file, use the following syntax: <!-- #include virtual | file ="filename" --> The virtual and file keywords indicate the type of path you are using to include the file, and filename is the path and file name of the file you want to include. Included files do not require a special file name extension; however, it is considered good programming practice to give included files an .inc extension to distinguish them from other types of files. Using the Virtual Keyword Use the virtual keyword to indicate a path beginning with a virtual directory. For example, if a file named Footer.inc resides in a virtual directory named /Myapp, the following line would insert the contents of Footer.inc into the file containing the line: <!-- #include virtual ="/myapp/footer.inc" --> Using the File Keyword Use the file keyword to indicate a relative path. A relative path begins with the directory that contains the including file. For example, if you have a file in the directory Myapp, and the file Header1.inc is in Myapp\Headers, the following line would insert Header1.inc in your file: <!-- #include file ="headers\header1.inc" --> Note that the path to the included file, Headers\header1.inc, is relative to the including file; if the script containing this #include statement is not in the directory /Myapp, the statement would not work. You can also use the file keyword with the syntax (..\) to include a file from a parent, or higher-level, directory if the Enable Parent Paths option is selected in the Internet Information Services snap-in. Location of Included Files ASP detects changes to an included file regardless of its location and inserts the files content the next time a browser requests an .asp file which includes this file. However, in general, it is easier to secure include files if they reside within the same application or Web site. For better security, it is advisable to place include files in a separate directory within your application, such as \Includes, and apply only appropriate Execute (Web server) permissions. important Important By default, Web server Read permissions are applied to all files. However, to prevent users from viewing the contents of your include files, disable Read permissions for the Include directory. Including Files: Tips and Cautions An included file can, in turn, include other files. An .asp file can also include the same file more than once, provided that the #include directives do not cause a loop. For example, if the file First.asp includes the file Second.inc, Second.inc must not in turn include First.asp. Nor can a file include itself. ASP detects such loop or nesting errors, generates an error message, and stops processing the requested .asp file. ASP includes files before executing script commands. Therefore, you cannot use a script command to build the name of an included file. For example, the following script would not open the file Header1.inc because ASP attempts to execute the #include directive before it assigns a file name to the variable name. <!-- This script will fail --> <% name=(header1 & ".inc") %> <!-- #include file="<%= name %>" --> Scripts commands and procedures must be entirely contained within the script delimiters <% and %>, the HTML tags <SCRIPT> and </SCRIPT>, or the HTML tags <OBJECT> and </OBJECT>. That is, you cannot open a script delimiter in an including .asp file, then close the delimiter in an included file; the script or script command must be a complete unit. For example, the following script would not work: <!-- This script will fail --> <% For i = 1 To n statements in main file <!-- #include file="header1.inc" --> Next %> The following script, however, would work: <% For i = 1 to n statements in main file %> <!-- #include file="header1.inc" --> <% Next %> note Note If the file that your ASP script includes contains a large number of functions and variables that are unused by the including script, the extra resources occupied by these unused structures can adversely affect performance, and ultimately decrease the scalability of your Web application. Therefore, it is generally advisable to break your include files into multiple smaller files, and include only those files required by your server-side script, rather than include one or two larger include files that may contain superfluous information. Occasionally, it may be desirable to include a server-side file by using the HTML <SCRIPT></SCRIPT> tags. For example, the following script includes a file (by means of a relative path) that can be executed by the server: <SCRIPT LANGUAGE="VBScript" RUNAT=SERVER SRC="Utils\datasrt.inc"></SCRIPT> The following table shows the correct syntax for including files with the SRC attribute by means of either virtual or relative paths: Type of Path Syntax Example Relative SRC="Path\Filename" SRC="Utilities\Test.asp" Virtual SRC="/Path/Filename" SRC="/MyScripts/Digital.asp" Virtual SRC="\Path\Filename" SRC="\RegApps\Process.asp" note Note You should not put any programmatic logic between the <SCRIPT> tags when including by this method; use another set of <SCRIPT> tags to add such logic.
__label__pos
0.892211
AUTOMATIC INVERSION Automatic Inversion/Dependency Injection as a design pattern has been around for a while now…and it’s one I quite like. At a very simple level it makes changing components (dependencies) easier. I have also found that applying the design pattern to our code makes it’s easier to test and improves the overall maintainability. There are lots of articles available on the subject such as: http://en.wikipedia.org/wiki/Dependency_injection http://msdn.microsoft.com/en-us/magazine/cc163739.aspx http://martinfowler.com/articles/injection.html Up until now anytime I applied the pattern I wrote my own code to inject the dependencies i.e. manual dependency injection.  However there are now a variety of tools and packages that will perform the dependency injection for us i.e. automatic injection. These tools are often called inversion of control containers. There seems to be more of these tools available every time I look. Some of the more popular ones include: Ninject, StructureMap, Unity and Autofac. One of my colleagues has written a piece with some comparisons of them: Inversion of Control Review.   I figured it’s about time I caught up with these, so I decided to put together a quick sample application to try them out. I am going to take a small part of a c# timesheet application we have and alter it to support both manual and automatic dependency injection. The application keeps track of different jobs done each week and the amount of hours taken up by them. For simplicity sake I’ve just made this sample a windows console application. First we will create 2 interfaces: public interface IJob { string Customer { get; set; } string Description { get; set; } int Hours { get; set; } } public interface ITimeSheet { int Year { get; set; } int Week { get; set; } List<IJob> Jobs { get; set; } string GetTotalHours(); void AddJob(IJob j); }   And now an implementation for each of them public class StandardJob : IJob { public string Customer { get; set; } public string Description { get; set; } public int Hours { get; set; } public StandardJob() { this.Customer = ""; this.Description = ""; this.Hours = 0; } public StandardJob(string customer, string description, int hours) { this.Customer = customer; this.Description = description; this.Hours = hours; } } public class StandardTimesheet : ITimeSheet { public int Year { get; set; } public int Week { get; set; } public List<IJob> Jobs { get; set; } public string GetTotalHours() { return Jobs.Sum(j => j.Hours).ToString(); } public void AddJob(IJob j) { Jobs.Add(j); } public StandardTimesheet(int year, int week, List<IJob> jobs) { this.Year = year; this.Week = week; this.Jobs = jobs; } public StandardTimesheet(int year, int week) { this.Year = year; this.Week = week; this.Jobs = new List<IJob>(); } } For convenience I’ve also create a small static method to start everything off. public static void Go(ITimeSheet t, IEnumerable<IJob> jobs) { foreach (var j in jobs) { t.AddJob(j); } Console.WriteLine(t.GetTotalHours()); } For the automatic injection part, I’ve decided to go with Ninject, mostly based on the name. It seems to be equally as good as any of the others. You can install it via the nugget command: Install-Package Ninject. So in our program.cs file we can have: static void Main(string[ ] args) { NotUsingContainer(); } static void NotUsingContainer() { var t = new StandardTimesheet(2014,2); var j1 = new StandardJob(); j1.Customer = "bob"; j1.Description = "Test 1"; j1.Hours = 5; var j2 = new StandardJob(); j2.Customer = "tom"; j2.Description = "Test 2"; j2.Hours = 9; IJob [ ] js = { j1, j2 }; Go.GoGo(t, js); Console.ReadLine(); } As you can see we are injecting the StandardTimesheet and StandardJob implementations into our program. If we wanted to use a new implementation of ITimesheet we would only need to change the line var t = new StandardTimesheet(2014,2); to use our new class e.g. we could have var t = new ExtendedTimesheet(2014,2); assuming that ExtendedTimesheet also implements ITimesheet. Now let’s try that with our injection tool. We need to first tell our tool how we want to resolve the dependencies. Some tools use an xml or other type of configuration file and some do it in code. Ninject takes the code approach. So we will have a class like class TestModule : Ninject.Modules.NinjectModule { public override void Load() { Bind<ITimeSheet>().To<StandardTimesheet>(); Bind<IJob>().To<StandardJob>(); } } This tells our tool to us the StandardTimesheet for ITmesheet and the StandardJob implementation for IJob. This is a very simple example – the tool is much more flexible than that. So now we can add a function like the following to our program.cs   static void UsingContainer() { IKernel kernel = new StandardKernel(new TestModule()); var t = kernel.Get<ITimeSheet>(new ConstructorArgument("year", 2014), new ConstructorArgument("week", 2)); var j1 = kernel.Get<IJob>(); j1.Customer = "bob"; j1.Description = "Test 1"; j1.Hours = 5; var j2 = kernel.Get<IJob>(); j2.Customer = "tom"; j2.Description = "Test 2"; j2.Hours = 9; IJob[ ] js = { j1, j2 }; Go.GoGo(t, js); Console.ReadLine( ); } The first couple of lines in the function are loading up the module we previously created with our injection configuration and then uses the Get method of Ninject to load our object. As you can see there is no mention of the implementation we are using here. In this simple example there doesn’t seem to be much point to using the automatic tool. We’ve still conformed to our pattern. Both versions are still testable and maintainable. If we had a more complicated example with multiple dependencies I can see how there would be less changes to the code needed when using a tool. However the program operation itself still changes when we inject a new dependency regardless of what approach we take. The same testing and build/rollout activities would still need to take place. I will have to apply Ninject or one of the other IoC tools to an actual project before I make up my mind, but at the moment I am struggling to see any major benefits….don’t forget to check out our blog reviewing Inversion of Control : Inversion of Control Review   At Dataworks we enable the perfect hybrid of configurable off the shelf toolsets and custom software development to deliver innovative solutions to match your specific business process requirements. This ensures we are the best at what we do. If you would like to discuss how we can use our experience and expertise to deliver real benefits toyour business please contact us today on 051 878555051 878555 or email [email protected]
__label__pos
0.815151
Commits Miki Tebeka  committed 574ca16 using pybrain • Participants • Parent commits 2f0307d Comments (0) Files changed (1) #!/usr/bin/env python +from pybrain.tools.shortcuts import buildNetwork from pybrain.datasets import SupervisedDataSet +from pybrain.supervised.trainers import BackpropTrainer from scipy.io import loadmat +from itertools import izip data = loadmat('ex3/ex3data1.mat') X, y = data['X'], data['y'] ds = SupervisedDataSet(X.shape[1], y.shape[1]) +for inp, target in izip(X, y): + ds.addSample(inp, target) +net = buildNetwork(X.shape[1], X.shape[1], y.shape[1]) +t = BackpropTrainer(net, learningrate=0.01,momentum=0.5) +t.trainOnDataset(ds, 10) + + +from random import choice +indexes = range(len(X)) +for _ in range(10): + i = choice(indexes) + print(net.activate(X[i]), y[i])
__label__pos
0.852187
Internet Engineering Task Force (IETF) H. Sharma, Ed. Request for Comments: 9654 Netskope Inc Obsoletes: 8954 August 2024 Updates: 6960 Category: Standards Track ISSN: 2070-1721 Online Certificate Status Protocol (OCSP) Nonce Extension Abstract RFC 8954 imposed size constraints on the optional Nonce extension for the Online Certificate Status Protocol (OCSP). OCSP is used to check the status of a certificate, and the Nonce extension is used to cryptographically bind an OCSP response message to a particular OCSP request message. Some environments use cryptographic algorithms that generate a Nonce value that is longer than 32 octets. This document also modifies the "Nonce" section of RFC 6960 to clearly define and differentiate the encoding format and values for easier implementation and understanding. This document obsoletes RFC 8954, which includes updated ASN.1 modules for OCSP, and updates RFC 6960. Status of This Memo This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 7841. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc9654. Copyright Notice Copyright (c) 2024 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License. Table of Contents 1. Introduction 1.1. Requirements Language 2. OCSP Extensions 2.1. Nonce Extension 3. Security Considerations 3.1. Replay Attack 4. IANA Considerations 5. References 5.1. Normative References 5.2. Informative References Appendix A. ASN.1 Modules A.1. OCSP in ASN.1 - 1998 Syntax A.2. OCSP in ASN.1 - 2008 Syntax Acknowledgements Author's Address 1. Introduction The Nonce extension was previously defined in Section 4.4.1 of [RFC6960]. The Nonce cryptographically binds an OCSP request and a response. It guarantees the freshness of an OCSP response and avoids replay attacks. This extension was updated in [RFC8954]. [RFC8954] limits the maximum Nonce length to 32 octets. To support cryptographic algorithms that generate a Nonce that is longer than 32 octets, this document updates the maximum allowed size of the Nonce to 128 octets. In addition, this document recommends that the OCSP requester and responder use a Nonce with a minimum length of 32 octets. 1.1. Requirements Language The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here. 2. OCSP Extensions The message formats for OCSP requests and responses are defined in [RFC6960] and the Nonce extension was updated in [RFC8954]. [RFC6960] also defines the standard extensions for OCSP messages based on the extension model employed in X.509 version 3 certificates (see [RFC5280]). [RFC8954] replaces Section 4.4.1 of [RFC6960] to limit the minimum and maximum length for the Nonce value. This document extends the maximum allowed nonce length to 128 octets and does not change the specifications of any of the other extensions defined in [RFC6960]. 2.1. Nonce Extension The Nonce cryptographically binds a request and a response to prevent replay attacks. The Nonce is included as one of the requestExtensions in requests; in responses, it is included as one of the responseExtensions. In both the request and the response, the Nonce is identified by the object identifier id-pkix-ocsp-nonce, while the extnValue is the encoded value of Nonce. If the Nonce extension is present, then the length of the Nonce MUST be at least 1 octet and can be up to 128 octets. Implementations compliant with [RFC8954] will not be able to process nonces generated per the new specification with sizes in excess of the limit (32 octets) specified in [RFC8954]. An OCSP requester that implements the extension in this document MUST use a minimum length of 32 octets for Nonce in the Nonce extension. An OCSP responder that supports the Nonce extension MUST accept Nonce lengths of at least 16 octets and up to and including 32 octets. A responder MAY choose to respond without the Nonce extension for requests in which the length of the Nonce is in between 1 octet and 15 octets or 33 octets and 128 octets. Responders that implement the extension in this document MUST reject any OCSP request that has a Nonce with a length of either 0 octets or greater than 128 octets, with the malformedRequest OCSPResponseStatus as described in Section 4.2.1 of [RFC6960]. The value of the Nonce MUST be generated using a cryptographically strong pseudorandom number generator (see [RFC4086]). The minimum Nonce length of 1 octet is defined to provide backward compatibility with older OCSP requesters that follow [RFC6960]. id-pkix-ocsp OBJECT IDENTIFIER ::= { id-ad-ocsp } id-pkix-ocsp-nonce OBJECT IDENTIFIER ::= { id-pkix-ocsp 2 } Nonce ::= OCTET STRING(SIZE(1..128)) The following is an example of an encoded OCSP Nonce extension with a 32-octet Nonce in hexadecimal format. 30 2f 06 09 2b 06 01 05 05 07 30 01 02 04 22 04 20 dd 49 d4 07 2c 44 9d a1 c3 17 bd 1c 1b df fe db e1 50 31 2e c4 cd 0a dd 18 e5 bd 6f 84 bf 14 c8 Here is the decoded version of the above example. Offset, Length, and Object Identifier are in decimal. Offset Length 0 47 : SEQUENCE { 2 9 : OBJECT IDENTIFIER ocspNonce : (1 3 6 1 5 5 7 48 1 2) 13 34 : OCTET STRING, encapsulates { 15 32 : OCTET STRING : DD 49 D4 07 2C 44 9D A1 C3 17 BD 1C 1B DF FE DB : E1 50 31 2E C4 CD 0A DD 18 E5 BD 6F 84 BF 14 C8 : } : } 3. Security Considerations The security considerations of OCSP, in general, are described in [RFC6960]. During the interval in which the previous OCSP response for a certificate is not expired but the responder has a changed status for that certificate, a copy of that OCSP response can be used to indicate that the status of the certificate is still valid. Including a requester's nonce value in the OCSP response ensures that the response is the most recent response from the server and not an old copy. 3.1. Replay Attack The Nonce extension is used to avoid replay attacks. Since the OCSP responder may choose not to send the Nonce extension in the OCSP response even if the requester has sent the Nonce extension in the request [RFC5019], an on-path attacker can intercept the OCSP request and respond with an earlier response from the server without the Nonce extension. This can be mitigated by configuring the server to use a short time interval between the thisUpdate and nextUpdate fields in the OCSP response. 4. IANA Considerations For the ASN.1 modules in Appendixes A.1 and A.2, IANA has assigned the following object identifiers (OIDs) in the "SMI Security for PKIX Module Identifier" registry (1.3.6.1.5.5.7.0): +=======+=====================+ | Value | Description | +=======+=====================+ | 111 | id-mod-ocsp-2024-88 | +-------+---------------------+ | 112 | id-mod-ocsp-2024-08 | +-------+---------------------+ Table 1 5. References 5.1. Normative References [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, . [RFC4086] Eastlake 3rd, D., Schiller, J., and S. Crocker, "Randomness Requirements for Security", BCP 106, RFC 4086, DOI 10.17487/RFC4086, June 2005, . [RFC5019] Deacon, A. and R. Hurst, "The Lightweight Online Certificate Status Protocol (OCSP) Profile for High-Volume Environments", RFC 5019, DOI 10.17487/RFC5019, September 2007, . [RFC5280] Cooper, D., Santesson, S., Farrell, S., Boeyen, S., Housley, R., and W. Polk, "Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280, May 2008, . [RFC6960] Santesson, S., Myers, M., Ankney, R., Malpani, A., Galperin, S., and C. Adams, "X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP", RFC 6960, DOI 10.17487/RFC6960, June 2013, . [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, . [RFC8954] Sahni, M., Ed., "Online Certificate Status Protocol (OCSP) Nonce Extension", RFC 8954, DOI 10.17487/RFC8954, November 2020, . 5.2. Informative References [Err5891] RFC Errata, Erratum ID 5891, RFC 6960, . [RFC5912] Hoffman, P. and J. Schaad, "New ASN.1 Modules for the Public Key Infrastructure Using X.509 (PKIX)", RFC 5912, DOI 10.17487/RFC5912, June 2010, . Appendix A. ASN.1 Modules This section includes the ASN.1 modules for OCSP and replaces the entirety of Section 5 of [RFC8954]. It addresses Errata ID 5891 [Err5891] as well. Appendix A.1 includes an ASN.1 module that conforms to the 1998 version of ASN.1 for all syntax elements of OCSP. This module replaces the module in Appendix B.1 of [RFC6960]. Appendix A.2 includes an ASN.1 module, corresponding to the module present in Appendix A.1, that conforms to the 2008 version of ASN.1. This module replaces the modules in Section 4 of [RFC5912] and Appendix B.2 of [RFC6960]. Although a 2008 ASN.1 module is provided, the module in Appendix A.1 remains the normative module per the policy of the PKIX Working Group. A.1. OCSP in ASN.1 - 1998 Syntax OCSP-2024-88 { iso(1) identified-organization(3) dod(6) internet(1) security(5) mechanisms(5) pkix(7) id-mod(0) id-mod-ocsp-2024-88(111) } DEFINITIONS EXPLICIT TAGS ::= BEGIN IMPORTS AuthorityInfoAccessSyntax, CRLReason, GeneralName FROM PKIX1Implicit88 -- From [RFC5280] { iso(1) identified-organization(3) dod(6) internet(1) security(5) mechanisms(5) pkix(7) id-mod(0) id-pkix1-implicit(19) } Name, CertificateSerialNumber, Extensions, id-kp, id-ad-ocsp, Certificate, AlgorithmIdentifier FROM PKIX1Explicit88 -- From [RFC5280] { iso(1) identified-organization(3) dod(6) internet(1) security(5) mechanisms(5) pkix(7) id-mod(0) id-pkix1-explicit(18) } ; OCSPRequest ::= SEQUENCE { tbsRequest TBSRequest, optionalSignature [0] EXPLICIT Signature OPTIONAL } TBSRequest ::= SEQUENCE { version [0] EXPLICIT Version DEFAULT v1, requestorName [1] EXPLICIT GeneralName OPTIONAL, requestList SEQUENCE OF Request, requestExtensions [2] EXPLICIT Extensions OPTIONAL } Signature ::= SEQUENCE { signatureAlgorithm AlgorithmIdentifier, signature BIT STRING, certs [0] EXPLICIT SEQUENCE OF Certificate OPTIONAL } Version ::= INTEGER { v1(0) } Nonce ::= OCTET STRING (SIZE(1..128)) Request ::= SEQUENCE { reqCert CertID, singleRequestExtensions [0] EXPLICIT Extensions OPTIONAL } CertID ::= SEQUENCE { hashAlgorithm AlgorithmIdentifier, issuerNameHash OCTET STRING, -- Hash of issuer's DN issuerKeyHash OCTET STRING, -- Hash of issuer's public key serialNumber CertificateSerialNumber } OCSPResponse ::= SEQUENCE { responseStatus OCSPResponseStatus, responseBytes [0] EXPLICIT ResponseBytes OPTIONAL } OCSPResponseStatus ::= ENUMERATED { successful (0), -- Response has valid confirmations malformedRequest (1), -- Illegal confirmation request internalError (2), -- Internal error in issuer tryLater (3), -- Try again later -- (4) is not used sigRequired (5), -- Must sign the request unauthorized (6) -- Request unauthorized } ResponseBytes ::= SEQUENCE { responseType OBJECT IDENTIFIER, response OCTET STRING } BasicOCSPResponse ::= SEQUENCE { tbsResponseData ResponseData, signatureAlgorithm AlgorithmIdentifier, signature BIT STRING, certs [0] EXPLICIT SEQUENCE OF Certificate OPTIONAL } ResponseData ::= SEQUENCE { version [0] EXPLICIT Version DEFAULT v1, responderID ResponderID, producedAt GeneralizedTime, -- The format for GeneralizedTime is -- as specified in Section 4.1.2.5.2 -- [RFC5280] responses SEQUENCE OF SingleResponse, responseExtensions [1] EXPLICIT Extensions OPTIONAL } ResponderID ::= CHOICE { byName [1] Name, byKey [2] KeyHash } KeyHash ::= OCTET STRING -- SHA-1 hash of responder's public key (i.e., the -- SHA-1 hash of the value of the BIT STRING -- subjectPublicKey [excluding the tag, length, and -- number of unused bits] in the responder's -- certificate) SingleResponse ::= SEQUENCE { certID CertID, certStatus CertStatus, thisUpdate GeneralizedTime, nextUpdate [0] EXPLICIT GeneralizedTime OPTIONAL, singleExtensions [1] EXPLICIT Extensions OPTIONAL } CertStatus ::= CHOICE { good [0] IMPLICIT NULL, revoked [1] IMPLICIT RevokedInfo, unknown [2] IMPLICIT UnknownInfo } RevokedInfo ::= SEQUENCE { revocationTime GeneralizedTime, revocationReason [0] EXPLICIT CRLReason OPTIONAL } UnknownInfo ::= NULL ArchiveCutoff ::= GeneralizedTime AcceptableResponses ::= SEQUENCE OF OBJECT IDENTIFIER ServiceLocator ::= SEQUENCE { issuer Name, locator AuthorityInfoAccessSyntax } CrlID ::= SEQUENCE { crlUrl [0] EXPLICIT IA5String OPTIONAL, crlNum [1] EXPLICIT INTEGER OPTIONAL, crlTime [2] EXPLICIT GeneralizedTime OPTIONAL } PreferredSignatureAlgorithms ::= SEQUENCE OF PreferredSignatureAlgorithm PreferredSignatureAlgorithm ::= SEQUENCE { sigIdentifier AlgorithmIdentifier, certIdentifier AlgorithmIdentifier OPTIONAL } -- Object Identifiers id-kp-OCSPSigning OBJECT IDENTIFIER ::= { id-kp 9 } id-pkix-ocsp OBJECT IDENTIFIER ::= { id-ad-ocsp } id-pkix-ocsp-basic OBJECT IDENTIFIER ::= { id-pkix-ocsp 1 } id-pkix-ocsp-nonce OBJECT IDENTIFIER ::= { id-pkix-ocsp 2 } id-pkix-ocsp-crl OBJECT IDENTIFIER ::= { id-pkix-ocsp 3 } id-pkix-ocsp-response OBJECT IDENTIFIER ::= { id-pkix-ocsp 4 } id-pkix-ocsp-nocheck OBJECT IDENTIFIER ::= { id-pkix-ocsp 5 } id-pkix-ocsp-archive-cutoff OBJECT IDENTIFIER ::= { id-pkix-ocsp 6 } id-pkix-ocsp-service-locator OBJECT IDENTIFIER ::= { id-pkix-ocsp 7 } id-pkix-ocsp-pref-sig-algs OBJECT IDENTIFIER ::= { id-pkix-ocsp 8 } id-pkix-ocsp-extended-revoke OBJECT IDENTIFIER ::= { id-pkix-ocsp 9 } END A.2. OCSP in ASN.1 - 2008 Syntax OCSP-2024-08 { iso(1) identified-organization(3) dod(6) internet(1) security(5) mechanisms(5) pkix(7) id-mod(0) id-mod-ocsp-2024-08(112) } DEFINITIONS EXPLICIT TAGS ::= BEGIN IMPORTS Extensions{}, EXTENSION FROM PKIX-CommonTypes-2009 -- From [RFC5912] { iso(1) identified-organization(3) dod(6) internet(1) security(5) mechanisms(5) pkix(7) id-mod(0) id-mod-pkixCommon-02(57) } AlgorithmIdentifier{}, DIGEST-ALGORITHM, SIGNATURE-ALGORITHM, PUBLIC-KEY FROM AlgorithmInformation-2009 -- From [RFC5912] { iso(1) identified-organization(3) dod(6) internet(1) security(5) mechanisms(5) pkix(7) id-mod(0) id-mod-algorithmInformation-02(58) } AuthorityInfoAccessSyntax, GeneralName, CrlEntryExtensions, CRLReason FROM PKIX1Implicit-2009 -- From [RFC5912] { iso(1) identified-organization(3) dod(6) internet(1) security(5) mechanisms(5) pkix(7) id-mod(0) id-mod-pkix1-implicit-02(59) } Name, Certificate, CertificateSerialNumber, id-kp, id-ad-ocsp FROM PKIX1Explicit-2009 -- From [RFC5912] { iso(1) identified-organization(3) dod(6) internet(1) security(5) mechanisms(5) pkix(7) id-mod(0) id-mod-pkix1-explicit-02(51) } sa-dsaWithSHA1, sa-rsaWithMD2, sa-rsaWithMD5, sa-rsaWithSHA1 FROM PKIXAlgs-2009 -- From [RFC5912] { iso(1) identified-organization(3) dod(6) internet(1) security(5) mechanisms(5) pkix(7) id-mod(0) id-mod-pkix1-algorithms2008-02(56) } ; OCSPRequest ::= SEQUENCE { tbsRequest TBSRequest, optionalSignature [0] EXPLICIT Signature OPTIONAL } TBSRequest ::= SEQUENCE { version [0] EXPLICIT Version DEFAULT v1, requestorName [1] EXPLICIT GeneralName OPTIONAL, requestList SEQUENCE OF Request, requestExtensions [2] EXPLICIT Extensions {{ re-ocsp-nonce | re-ocsp-response | re-ocsp-preferred-signature-algorithms, ... }} OPTIONAL } Signature ::= SEQUENCE { signatureAlgorithm AlgorithmIdentifier { SIGNATURE-ALGORITHM, {...}}, signature BIT STRING, certs [0] EXPLICIT SEQUENCE OF Certificate OPTIONAL } Version ::= INTEGER { v1(0) } Nonce ::= OCTET STRING (SIZE(1..128)) Request ::= SEQUENCE { reqCert CertID, singleRequestExtensions [0] EXPLICIT Extensions {{ re-ocsp-service-locator, ... }} OPTIONAL } CertID ::= SEQUENCE { hashAlgorithm AlgorithmIdentifier { DIGEST-ALGORITHM, {...}}, issuerNameHash OCTET STRING, -- Hash of issuer's DN issuerKeyHash OCTET STRING, -- Hash of issuer's public key serialNumber CertificateSerialNumber } OCSPResponse ::= SEQUENCE { responseStatus OCSPResponseStatus, responseBytes [0] EXPLICIT ResponseBytes OPTIONAL } OCSPResponseStatus ::= ENUMERATED { successful (0), -- Response has valid confirmations malformedRequest (1), -- Illegal confirmation request internalError (2), -- Internal error in issuer tryLater (3), -- Try again later -- (4) is not used sigRequired (5), -- Must sign the request unauthorized (6) -- Request unauthorized } RESPONSE ::= TYPE-IDENTIFIER ResponseSet RESPONSE ::= { basicResponse, ... } ResponseBytes ::= SEQUENCE { responseType RESPONSE.&id ({ResponseSet}), response OCTET STRING (CONTAINING RESPONSE. &Type({ResponseSet}{@responseType}))} basicResponse RESPONSE ::= { BasicOCSPResponse IDENTIFIED BY id-pkix-ocsp-basic } BasicOCSPResponse ::= SEQUENCE { tbsResponseData ResponseData, signatureAlgorithm AlgorithmIdentifier { SIGNATURE-ALGORITHM, { sa-dsaWithSHA1 | sa-rsaWithSHA1 | sa-rsaWithMD5 | sa-rsaWithMD2, ... }}, signature BIT STRING, certs [0] EXPLICIT SEQUENCE OF Certificate OPTIONAL } ResponseData ::= SEQUENCE { version [0] EXPLICIT Version DEFAULT v1, responderID ResponderID, producedAt GeneralizedTime, responses SEQUENCE OF SingleResponse, responseExtensions [1] EXPLICIT Extensions {{ re-ocsp-nonce | re-ocsp-extended-revoke, ... }} OPTIONAL } ResponderID ::= CHOICE { byName [1] Name, byKey [2] KeyHash } KeyHash ::= OCTET STRING -- SHA-1 hash of responder's public key -- (excluding the tag and length and number -- of unused bits) SingleResponse ::= SEQUENCE { certID CertID, certStatus CertStatus, thisUpdate GeneralizedTime, nextUpdate [0] EXPLICIT GeneralizedTime OPTIONAL, singleExtensions [1] EXPLICIT Extensions {{ re-ocsp-crl | re-ocsp-archive-cutoff | CrlEntryExtensions, ... }} OPTIONAL } CertStatus ::= CHOICE { good [0] IMPLICIT NULL, revoked [1] IMPLICIT RevokedInfo, unknown [2] IMPLICIT UnknownInfo } RevokedInfo ::= SEQUENCE { revocationTime GeneralizedTime, revocationReason [0] EXPLICIT CRLReason OPTIONAL } UnknownInfo ::= NULL ArchiveCutoff ::= GeneralizedTime AcceptableResponses ::= SEQUENCE OF RESPONSE.&id({ResponseSet}) ServiceLocator ::= SEQUENCE { issuer Name, locator AuthorityInfoAccessSyntax } CrlID ::= SEQUENCE { crlUrl [0] EXPLICIT IA5String OPTIONAL, crlNum [1] EXPLICIT INTEGER OPTIONAL, crlTime [2] EXPLICIT GeneralizedTime OPTIONAL } PreferredSignatureAlgorithms ::= SEQUENCE OF PreferredSignatureAlgorithm PreferredSignatureAlgorithm ::= SEQUENCE { sigIdentifier AlgorithmIdentifier { SIGNATURE-ALGORITHM, {...} }, certIdentifier AlgorithmIdentifier {PUBLIC-KEY, {...}} OPTIONAL } -- Certificate Extensions ext-ocsp-nocheck EXTENSION ::= { SYNTAX NULL IDENTIFIED BY id-pkix-ocsp-nocheck } -- Request Extensions re-ocsp-nonce EXTENSION ::= { SYNTAX Nonce IDENTIFIED BY id-pkix-ocsp-nonce } re-ocsp-response EXTENSION ::= { SYNTAX AcceptableResponses IDENTIFIED BY id-pkix-ocsp-response } re-ocsp-service-locator EXTENSION ::= { SYNTAX ServiceLocator IDENTIFIED BY id-pkix-ocsp-service-locator } re-ocsp-preferred-signature-algorithms EXTENSION ::= { SYNTAX PreferredSignatureAlgorithms IDENTIFIED BY id-pkix-ocsp-pref-sig-algs } -- Response Extensions re-ocsp-crl EXTENSION ::= { SYNTAX CrlID IDENTIFIED BY id-pkix-ocsp-crl } re-ocsp-archive-cutoff EXTENSION ::= { SYNTAX ArchiveCutoff IDENTIFIED BY id-pkix-ocsp-archive-cutoff } re-ocsp-extended-revoke EXTENSION ::= { SYNTAX NULL IDENTIFIED BY id-pkix-ocsp-extended-revoke } -- Object Identifiers id-kp-OCSPSigning OBJECT IDENTIFIER ::= { id-kp 9 } id-pkix-ocsp OBJECT IDENTIFIER ::= id-ad-ocsp id-pkix-ocsp-basic OBJECT IDENTIFIER ::= { id-pkix-ocsp 1 } id-pkix-ocsp-nonce OBJECT IDENTIFIER ::= { id-pkix-ocsp 2 } id-pkix-ocsp-crl OBJECT IDENTIFIER ::= { id-pkix-ocsp 3 } id-pkix-ocsp-response OBJECT IDENTIFIER ::= { id-pkix-ocsp 4 } id-pkix-ocsp-nocheck OBJECT IDENTIFIER ::= { id-pkix-ocsp 5 } id-pkix-ocsp-archive-cutoff OBJECT IDENTIFIER ::= { id-pkix-ocsp 6 } id-pkix-ocsp-service-locator OBJECT IDENTIFIER ::= { id-pkix-ocsp 7 } id-pkix-ocsp-pref-sig-algs OBJECT IDENTIFIER ::= { id-pkix-ocsp 8 } id-pkix-ocsp-extended-revoke OBJECT IDENTIFIER ::= { id-pkix-ocsp 9 } END Acknowledgements The authors of this document thank Mohit Sahni for his work to produce [RFC8954]. The authors also thank Russ Housley, Corey Bonnell, Michael StJohns, Tomas Gustavsson, and Carl Wallace for their feedback and suggestions. Author's Address Himanshu Sharma (editor) Netskope Inc 2445 Augustine Dr 3rd floor Santa Clara, California 95054 United States of America Email: [email protected] URI: www.netskope.com
__label__pos
0.730593
Role-Based Access Control (RBAC) is a widely-used access control model that governs user access to resources in a system based on their assigned roles. RBAC simplifies access management by granting permissions to users according to their roles and responsibilities, reducing the complexity of individually assigning permissions for each user. Users, roles, and permissions make up RBAC, as well as the hierarchy of roles between them. what is role based access control Benefits of RBAC • Simplified Administration: RBAC simplifies access management by grouping users with similar responsibilities into roles, reducing administrative overhead and the risk of human error. • Enhanced Security: RBAC helps prevent unauthorized access and reduces the attack surface, as users are granted only the necessary permissions for their roles. • Improved Compliance: RBAC aids in meeting regulatory compliance requirements by ensuring access control policies are well-defined and enforced. • Flexibility and Scalability: RBAC can accommodate organizational changes easily, such as new job roles or department restructuring, without the need to redefine access permissions for individual users. • Audit Trail and Accountability: RBAC provides a clear audit trail of actions performed by users based on their assigned roles, facilitating accountability and incident investigation. Implementing RBAC • Define Roles and Permissions: Identify the roles needed in the system and assign appropriate permissions to each role. • Assign Users to Roles: Map users to their respective roles based on their job responsibilities and access requirements. • Role Hierarchy (Optional): If applicable, establish a role hierarchy to simplify role assignment and inheritance. • Enforce Least Privilege: Ensure that each role has the minimum necessary permissions for its intended tasks. • Access Revocation and Review: Regularly review access rights and revoke access when necessary, such as when employees change roles or leave the organization. • Regular Auditing: Conduct periodic audits to validate that access permissions align with organizational policies and RBAC rules. Best Practices for ımplementing RBAC rbac best practices • Implement RBAC with Clarity: Avoid causing unnecessary confusion and workplace irritations when implementing RBAC by ensuring clear communication and understanding among all stakeholders. • Leverage Identity and Access Management (IAM) System: While not a prerequisite, having an IAM system in place can facilitate RBAC implementation and improve access management. • Identify Critical Resources: Create a list of resources that require controlled access to determine the roles needed. • Use the Principle of Least Privilege (POLP): Follow POLP to grant users access only to the actions, software, or files necessary for their job roles. • Integrate RBAC Across Systems: Ensure RBAC is integrated consistently across all systems throughout the organization to maintain uniform access controls. • Conduct Training: Provide training to employees to ensure they understand the principles of RBAC and their respective roles and responsibilities. • Periodic Auditing: Conduct regular audits of roles and access rights to identify and rectify any potential issues, ensuring compliance and security. RBAC vs. ABAC RBAC and Attribute-Based Access Control (ABAC) are both access control methods with different approaches. While RBAC grants access based on user roles, ABAC controls access based on a combination of user attributes, resource attributes, action attributes, and environmental attributes. ABAC offers more granular access control and is suitable for complex environments that require precise access management. Related Terms Suggested Articles
__label__pos
0.95927
[[TranslatedPages]] [[TOC(sectionindex, heading=ARexx Function List, notitle, depth=1, Documentation/ARexxAPI/)]] == SETFOLDER NAME:: SetFolder -- Set the current folder. TEMPLATE:: FOLDER/A FUNCTION:: Makes the named folder the current folder. INPUTS:: FOLDER/A - name, directory or number of the folder to turn into the current folder; folder numbers start counting from 0 RETURNS:: RC is set to 10 if the specified folder doesn't exist. NOTES:: In YAM 1.0 - 1.3.2, SETFOLDER could only accept a folder number as argument. This was changed in 1.3.3 to also accept folder names, and again in 2.2 to accept directory names, too. Therefore, to be on the safe side scripts should check the YAM version in use before deciding to use the latter as argument. Also, bear in mind localization when referring to folders by their name. EXAMPLE:: {{{#!urbiscript /* Switch to the Outgoing folder */ SETFOLDER Outgoing /* Switch to the first folder in the list */ SETFOLDER 0 }}} BUGS:: SEE ALSO:: [[REQUESTFOLDER]]
__label__pos
0.602619
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free, no registration required. I am trying to get a Date type field from the entity and I encounter the following error: An exception has been thrown during the rendering of a template ("Warning: strtr() expects parameter 1 to be string, object given in /var/www/feedyourmind_symfony/vendor /symfony/src/Symfony/Component/Translation/IdentityTranslator.php line 62") in form_div_layout.html.twig at line 37. I am fairly new to Symfony2 and can not seem to figure out why I would be getting this error. Perhaps there is something wrong with my Entities? I do not know and I would really like some assistance if possible. Although the error points to an issue with the rendering of a template, I feel that the real error lies with the Entity and the date field not correctly functioning. Here is the basic code in my Controller: public function addQuestionAction(Request $request){ $question = new Question(); $form = $this->createForm(new QuestionType(), $question); return $this->render('LaPorchettaWebBundle:Default:add_question.html.twig', array( 'form' => $form->createView(), )); } Here is the TWIG view: {% extends "LaPorchettaWebBundle:Default:test.html.twig" %} {% block pageTitle %}LaPorchetta Create A Question{% endblock %} {% block content %} <div id="outer"> <form name="petEntry" action="" method="post" enctype="multipart/form-data" > {{ form_errors(form) }} <div class="left"> {{ form_label(form.survey) }} </div> <div class="right"> {{ form_widget(form.survey) }} {{ form_errors(form.survey) }} </div> <div class="left"> {{ form_label(form.section) }} </div> <div class="right"> {{ form_widget(form.section) }} {{ form_errors(form.section) }} </div> <div class="left"> {{ form_label(form.sub_section) }} </div> <div class="right"> {{ form_widget(form.sub_section) }} {{ form_errors(form.sub_section) }} </div> <div class="left"> {{ form_label(form.description) }} </div> <div class="right"> {{ form_widget(form.description) }} {{ form_errors(form.description) }} </div> <div class="left"> {{ form_label(form.points) }} </div> <div class="right"> {{ form_widget(form.points) }} {{ form_errors(form.points) }} </div> <div id="inputs"> <input type="button" id="btnCancel" name="cancel" value="Cancel" onclick="window.location = '' " /> <input id="update" type="submit" value="submit" /> </div> </div> </form> </div> {% endblock %} I have the following entities: <?php namespace LaPorchetta\WebBundle\Entity; use Doctrine\Common\Collections\ArrayCollection; use Doctrine\ORM\Mapping as ORM; use Symfony\Component\Validator\Constraints as Assert; /** * @ORM\Entity(repositoryClass="LaPorchetta\WebBundle\Repository\SurveyRepository") * @ORM\HasLifecycleCallbacks() * @ORM\Table(name="Surveys") */ class Survey { public function __construct() { $this->question = new ArrayCollection(); $this->store = new ArrayCollection(); } /** * @ORM\Id @ORM\Column(type="integer") * @ORM\GeneratedValue */ protected $id; /** * @ORM\Column(type="date") */ protected $survey_date; /** * @ORM\OneToMany(targetEntity="Question", mappedBy="survey") */ protected $question = null; /** * @ORM\OneToOne(targetEntity="Store", inversedBy="survey") */ protected $store = null; /** * Get id * * @return integer */ public function getId() { return $this->id; } /** * @ORM\prePersist */ public function setSurveyDate() { $this->survey_date = new \DateTime(); } /** * Get survey_date * * @return date */ public function getSurveyDate() { return $this->survey_date; } /** * Add question * * @param LaPorchetta\WebBundle\Entity\Question $question */ public function addQuestion(\LaPorchetta\WebBundle\Entity\Question $question) { $this->question[] = $question; } /** * Get question * * @return Doctrine\Common\Collections\Collection */ public function getQuestion() { return $this->question; } /** * Set store * * @param LaPorchetta\WebBundle\Entity\Store $store */ public function setStore(\LaPorchetta\WebBundle\Entity\Store $store) { $this->store = $store; } /** * Get store * * @return LaPorchetta\WebBundle\Entity\Store */ public function getStore() { return $this->store; } /** * Get action_item * * @return Doctrine\Common\Collections\Collection */ public function getActionItem() { return $this->action_item; } /** * Set action_item * * @param LaPorchetta\WebBundle\Entity\Question $actionItem */ public function setActionItem(\LaPorchetta\WebBundle\Entity\Question $actionItem) { $this->action_item = $actionItem; } } Entity Type -> Questions <?php namespace LaPorchetta\WebBundle\Entity; use Doctrine\ORM\Mapping as ORM; use Symfony\Component\Validator\Constraints as Assert; /** * @ORM\Entity(repositoryClass="LaPorchetta\WebBundle\Repository\QuestionRepository") * @ORM\Table(name="Questions") */ class Question { /** * @ORM\Id @ORM\Column(type="integer") @ORM\GeneratedValue */ protected $id; /** * @ORM\ManyToOne(targetEntity="Survey", inversedBy="question") */ protected $survey; /** * @ORM\Column(type="string") */ protected $section; /** * @ORM\Column(type="string") */ protected $sub_section; /** * @ORM\Column(type="string") */ protected $description; /** * @ORM\Column(type="integer") */ protected $points; /** * Get id * * @return integer */ public function getId() { return $this->id; } /** * Set section * * @param string $section */ public function setSection($section) { $this->section = $section; } /** * Get section * * @return string */ public function getSection() { return $this->section; } /** * Set sub_section * * @param string $subSection */ public function setSubSection($subSection) { $this->sub_section = $subSection; } /** * Get sub_section * * @return string */ public function getSubSection() { return $this->sub_section; } /** * Set description * * @param string $description */ public function setDescription($description) { $this->description = $description; } /** * Get description * * @return string */ public function getDescription() { return $this->description; } /** * Set survey * * @param LaPorchetta\WebBundle\Entity\Survey $survey */ public function setSurvey(\LaPorchetta\WebBundle\Entity\Survey $survey) { $this->survey = $survey; } /** * Get survey * * @return LaPorchetta\WebBundle\Entity\Survey */ public function getSurvey() { return $this->survey; } /** * Set points * * @param integer $points */ public function setPoints($points) { $this->points = $points; } /** * Get points * * @return integer */ public function getPoints() { return $this->points; } public function __construct() { $this->action_item = new \Doctrine\Common\Collections\ArrayCollection(); } /** * Add action_item * * @param LaPorchetta\WebBundle\Entity\Survey $actionItem */ public function addSurvey(\LaPorchetta\WebBundle\Entity\Survey $actionItem) { $this->action_item[] = $actionItem; } /** * Get action_item * * @return Doctrine\Common\Collections\Collection */ public function getActionItem() { return $this->action_item; } /** * Set action_item * * @param LaPorchetta\WebBundle\Entity\Survey $actionItem */ public function setActionItem(\LaPorchetta\WebBundle\Entity\Survey $actionItem) { $this->action_item = $actionItem; } } I have the following QuestionType: <?php namespace LaPorchetta\WebBundle\Form; use Symfony\Component\Form\AbstractType; use Symfony\Component\Form\FormBuilder; use Doctrine\ORM\EntityRepository; class QuestionType extends AbstractType { public function buildForm(FormBuilder $builder, array $options) { $builder ->add('survey', 'entity', array( 'class'=>'LaPorchettaWebBundle:Survey', 'property'=>'survey_date', 'multiple' => true, 'required' => true, 'query_builder' => function(EntityRepository $er) { return $er->createQueryBuilder('s')->orderBy('s.survey_date', 'ASC'); })) ->add('section', 'text') ->add('sub_section', 'text') ->add('description', 'text') ->add('points', 'integer'); } public function getName() { return 'LaPorchetta_WebBundle_QuestionType'; } } share|improve this question 1   try commenting code bits by bits, important to notice is the trace error too –  cordoval Nov 24 '11 at 14:33 add comment 1 Answer up vote 1 down vote accepted Although not the most convenient way to achieve the desired results, I ended up constructing a form inside the Controller action and when it came to addressing the date field, I used it like so: $surveys = $item_repo->findAll(); foreach($surveys as $survey){ array_push($dates, $survey->getSurveyDate()->format('d/m/Y') ); } $question = array(); $form = $this->createFormBuilder( $question ) ->add('survey', 'choice', array( 'choices' => $dates )) ->add('section', 'text', array('required' => true, 'trim' => true)) ->add('sub_section', 'text', array('required' => true, 'trim' => true)) ->add('description', 'text', array('required' => true, 'trim' => true)) ->add('points', 'integer', array('required' => true, 'trim' => true)) ->getForm(); In specifying the formatting of the Datetime object, like so (->format('d/m/Y') ), TWIG was able to process the data without any errors. share|improve this answer   Hi! Sorry to be completely off-topic, but I have a very similar error but I don't know how to solve. Can you help me? –  Gianni Alessandro Feb 6 at 22:29 add comment Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.977931
5 I have a Visualforce Page for QuoteLineItem multiple edit. It works as a shopping cart: the top section has the items already in the Quote, and in the bottom there are the available products from the selected Price Book. The user can add/remove items from the shopping cart, edit some custom fields and then Save everything and return to the Quote. But I'm getting a weird behavior in a certain sequence of steps (red arrow is where I will click next). 1. There are already items in the cart. I will add some random item. 2. The new item is added, without any of the custom visible fields populated. I will now remove one of the existing items. enter image description here 3. The recently added line has now only one of the custom fields populated with whatever was in the (n-1) position. See in the viewState that it has NOT that field populated in the object. enter image description here 4. If I try to save it, or add a new item afterwards, that dropbox gets back to not being filled, as it should be. Oh, and that field (New/Renewal) should not even be possible to populate before setting something in "Fee Type" field, it has a field dependency. So, I have no idea what is going on. From everything I look in code, it seems OK. I have even written an apex test that replicates these steps and it works. I guess it is something UI related that I am messing up. Visualforce page: <apex:page standardController="Quote" extensions="QuoteLineItemEntryExtension" action="{!priceBookCheck}" cache="False"> <c:LoadingBox /> <apex:sectionHeader Title="Manage {!$ObjectType.Product2.LabelPlural}" subtitle="{!quote.Name}"/> <apex:messages style="color:red"/> <style> .search{ font-size:14pt; margin-right: 20px; } .fyi{ color:red; font-style:italic; } .label{ margin-right:10px; font-weight:bold; } </style> <script type='text/javascript'> // This script assists the search bar functionality // It will execute a search only after the user has stopped typing for more than 1 second // To raise the time between when the user stops typing and the search, edit the following variable: var waitTime = 0.4; var countDown = waitTime; var started = false; function resetTimer(){ countDown=waitTime; if(started==false){ started=true; runCountDown(); } } function runCountDown(){ countDown -= 0.2; if(countDown<=0){ fetchResults(); started=false; } else{ window.setTimeout(runCountDown,200); } } </script> <apex:form > <apex:outputPanel id="mainBody"> <apex:outputLabel styleClass="label">PriceBook: </apex:outputLabel> <apex:outputText value="{!theBook.Name}"/>&nbsp; <apex:commandLink action="{!changePricebook}" value="change" immediate="true"/> <br/> <!-- not everyone is using multi-currency, so this section may or may not show --> <apex:outputPanel rendered="{!multipleCurrencies}"> <apex:outputLabel styleClass="label">Currency: </apex:outputLabel> <apex:outputText value="{!chosenCurrency}"/> <br/> </apex:outputPanel> <br/> <!-- this is the upper table... a.k.a. the "Shopping Cart"--> <!-- notice we use a lot of $ObjectType merge fields... I did that because if you have changed the labels of fields or objects it will reflect your own lingo --> <apex:pageBlock title="Selected {!$ObjectType.Product2.LabelPlural}" id="selected_items"> <apex:variable var="index" value="{!0}"/> <apex:pageblockTable value="{!shoppingCart}" var="s"> <apex:column headerValue="Index" value="{!index}"/> <apex:column> <apex:commandButton value="Remove" action="{!removeFromShoppingCart}" reRender="selected_items,searchResults" immediate="true" status="loadStatus"> <!-- this param is how we send an argument to the controller, so it knows which row we clicked 'remove' on --> <apex:param value="{!index}" assignTo="{!toUnselect}" name="toUnselect"/> </apex:commandButton> <!-- Increment our index counter --> <apex:variable var="index" value="{!index +1}"/> </apex:column> <apex:column headerValue="{!$ObjectType.Product2.LabelPlural}" value="{!s.PriceBookEntry.Product2.Name}"/> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.Fee_Type__c.Label}"> <apex:inputField value="{!s.Fee_Type__c}" required="true"/> </apex:column> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.Quantity.Label}"> <apex:inputField value="{!s.Quantity}" style="width:70px" required="true"/> </apex:column> <!-- <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.UnitPrice.Label}"> <apex:inputField value="{!s.UnitPrice}" style="width:70px" required="true"/> </apex:column> --> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.License_category__c.Label}"> <apex:inputField value="{!s.License_category__c}" required="true"/> </apex:column> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.License_start_date__c.Label}"> <apex:inputField value="{!s.License_start_date__c}"/> </apex:column> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.License_end_date__c.Label}"> <apex:inputField value="{!s.License_end_date__c}" /> </apex:column> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.Uplift__c.Label}"> <apex:inputField value="{!s.Uplift__c}" style="width:70px" /> </apex:column> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.SPF__c.Label}"> <apex:inputField value="{!s.SPF__c}" style="width:70px" required="true"/> </apex:column> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.Major_Account_Discount__c.Label}"> <apex:inputField value="{!s.Major_Account_Discount__c}" style="width:70px" required="false"/> </apex:column> <!-- Services related fields --> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.Service_description__c.Label} (Services)"> <apex:inputField value="{!s.Service_description__c}" required="false"/> </apex:column> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.Start_date__c.Label} (Services)"> <apex:inputField value="{!s.Start_date__c}" required="false"/> </apex:column> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.Duration__c.Label} (Services)"> <apex:inputField value="{!s.Duration__c}" required="false"/> </apex:column> </apex:pageblockTable> <apex:pageBlockButtons > <apex:commandButton action="{!onSave}" value="Save" status="loadStatus"/> <apex:commandButton action="{!onCancel}" value="Cancel" immediate="true" status="loadStatus"/> </apex:pageBlockButtons> </apex:pageBlock> <!-- this is the lower table: search bar and search results --> <apex:pageBlock > <apex:outputPanel styleClass="search"> Search for {!$ObjectType.Product2.LabelPlural}: </apex:outputPanel> <apex:actionRegion renderRegionOnly="false" immediate="true"> <apex:actionFunction name="fetchResults" action="{!updateAvailableList}" reRender="searchResults" status="searchStatus"/> <!-- here we invoke the scripting to get out fancy 'no button' search bar to work --> <apex:inputText value="{!searchString}" onkeydown="if(event.keyCode==13){this.blur();}else{resetTimer();}" style="width:300px"/> &nbsp;&nbsp; <i> <!-- actionStatus component makes it easy to let the user know when a search is underway --> <apex:actionStatus id="searchStatus" startText="searching..." stopText=" "/> </i> </apex:actionRegion> <br/> <br/> <apex:outputPanel id="searchResults"> <apex:pageBlockTable value="{!AvailableProducts}" var="a"> <apex:column headerValue="{!$ObjectType.Product2.Fields.Name.Label}" value="{!a.Product2.Name}" /> <apex:column headerValue="{!$ObjectType.Product2.Fields.Family.Label}" value="{!a.Product2.Family}"/> <apex:column headerValue="{!$ObjectType.Product2.Fields.Legacy__c.Label}" value="{!a.Product2.Legacy__c}"/> <apex:column headerValue="Sales Price" value="{!a.UnitPrice}"/> <apex:column headerValue="{!$ObjectType.Product2.Fields.Description.Label}" value="{!a.Product2.Description}"/> <apex:column > <!-- command button in a column... neato --> <apex:commandButton value="Select" action="{!addToShoppingCart}" reRender="selected_items,searchResults" immediate="true" status="loadStatus"> <!-- again we use apex:param to be able to tell the controller which row we are working with --> <apex:param value="{!a.Id}" assignTo="{!toSelect}" name="toSelect"/> </apex:commandButton> </apex:column> </apex:pageBlockTable> <!-- We put up a warning if results exceed 100 rows --> <apex:outputPanel styleClass="fyi" rendered="{!overLimit}"> <br/> Your search returned over 100 results, use a more specific search string if you do not see the desired {!$ObjectType.Product2.Label}. <br/> </apex:outputPanel> </apex:outputPanel> </apex:pageBlock> </apex:outputPanel> </apex:form> </apex:page> Add and remove function excerpts, Apex code (controller): public with sharing class QuoteLineItemEntryExtension { public Quote theQuote {get;set;} public String searchString {get;set;} public quoteLineItem[] shoppingCart {get;set;} public priceBookEntry[] AvailableProducts {get;set;} public Pricebook2 theBook {get;set;} public String toSelect {get; set;} public String toUnselect {get; set;} public Decimal Total {get;set;} public Boolean overLimit {get;set;} public Boolean multipleCurrencies {get; set;} private Boolean forcePricebookSelection = false; private quoteLineItem[] forDeletion = new quoteLineItem[]{}; private void Initialize(ApexPages.StandardController controller) { // Need to know if org has multiple currencies enabled multipleCurrencies = UserInfo.isMultiCurrencyOrganization(); // Get information about the quote being worked on if (multipleCurrencies) { theQuote = database.query('select Id, Pricebook2Id, Pricebook2.Name, CurrencyIsoCode from Quote where Id = \'' + controller.getRecord().Id + '\' limit 1'); } else { theQuote = [select Id, Pricebook2Id, PriceBook2.Name from Quote where Id = :controller.getRecord().Id limit 1]; } // If products were previously selected need to put them in the "selected products" section to start with shoppingCart = [select Id, Quantity, TotalPrice, UnitPrice, Description, License_start_date__c, License_end_date__c, SPF__c, License_category__c, Fee_Type__c, Uplift__c, Major_Account_Discount__c, Service_description__c, Start_date__c, Duration__c, PriceBookEntryId, PriceBookEntry.Name, PriceBookEntry.IsActive, PriceBookEntry.Product2Id, PriceBookEntry.Product2.Name, PriceBookEntry.PriceBook2Id from quoteLineItem where QuoteId=:theQuote.Id ORDER BY SortOrder ASC]; // Check if Opp has a pricebook associated yet if(theQuote.Pricebook2Id == null){ Pricebook2[] activepbs = [select Id, Name from Pricebook2 where isActive = true limit 2]; if(activepbs.size() == 2){ forcePricebookSelection = true; theBook = new Pricebook2(); } else{ theBook = activepbs[0]; } } else{ theBook = theQuote.Pricebook2; } if(!forcePricebookSelection) { updateAvailableList(); } } public QuoteLineItemEntryExtension(ApexPages.StandardController controller) { Initialize(controller); } public PageReference addToShoppingCart() { for(PricebookEntry d : AvailableProducts){ String entry_id = (String) d.Id; if (entry_id.equals(toSelect)) { QuoteLineItem new_item = new QuoteLineItem( QuoteId=theQuote.Id, PriceBookEntry=d, PriceBookEntryId=d.Id, UnitPrice=d.UnitPrice, Quantity=1, SPF__c=0 ); shoppingCart.add(new_item); break; } } return null; } public PageReference removeFromShoppingCart() { QuoteLineItem to_remove = shoppingCart.remove(Integer.valueOf(toUnselect)); if (to_remove.Id != null) { forDeletion.add(to_remove); } return null; } public PageReference onSave() { // If previously selected products are now removed, we need to delete them try { if(forDeletion.size()>0) { delete(forDeletion); } } catch (System.DmlException e) { if (e.getMessage().contains('ENTITY_IS_DELETED')) { ApexPages.addMessage( new ApexPages.Message( ApexPages.Severity.WARNING, 'Tried to delete the same item more than once. Please check if the Quote now contains the desired items and repeat the delete operation if necessary.' ) ); return null; } else { ApexPages.addMessages(e); return null; } } // Previously selected products may have new quantities and amounts, and we may have new products listed, so we use upsert here try{ if(shoppingCart.size()>0) { upsert(shoppingCart); } } catch(Exception e){ ApexPages.addMessages(e); return null; } // After save return the user to the quote return new PageReference('/' + ApexPages.currentPage().getParameters().get('Id')); } public PageReference onCancel() { // If user hits cancel we commit no changes and return them to the quote return new PageReference('/' + ApexPages.currentPage().getParameters().get('Id')); } public void updateAvailableList() { // We dynamically build a query string // The original code excludes items already in the shopping cart, but we removed this feature String qString = 'select Id, Pricebook2Id, IsActive, Product2.Name, Product2.Family, Product2.Legacy__c, Product2.IsActive, Product2.Description, UnitPrice from PricebookEntry where IsActive=true and Pricebook2Id = \'' + theBook.Id + '\''; if(multipleCurrencies) { qstring += ' and CurrencyIsoCode = \'' + theQuote.get('currencyIsoCode') + '\''; } // note that we are looking for the search string entered by the user in the name OR description // modify this to search other fields if desired if (searchString!=null) { qString+= ' and (Product2.Name like \'%' + searchString + '%\' or Product2.Description like \'%' + searchString + '%\')'; } qString+= ' order by Product2.Name'; qString+= ' limit 201'; system.debug('qString:' +qString); AvailableProducts = database.query(qString); // We only display up to 200 results... if there are more than we let the user know (see vf page) if (AvailableProducts.size()==201) { AvailableProducts.remove(200); overLimit = true; } else{ overLimit=false; } } } • 1 Hi Mauricio! While this is a great question, I think we might need to see your page's code as well. It probably is some combination of required+rerender, but it's hard to be certain without seeing code. – sfdcfox Oct 31 '16 at 16:15 • Hi there! I just added the whole visualforce page, but only part of the apex code to the question. – Mauricio Oliveira Oct 31 '16 at 16:36 • I just tested removing the immediate="true" from my Remove button. With this change the user is not allowed to remove an existing an item before filling the new item first, and when this is followed the reported error does not occur. But this is not exactly the behavior I want, and now I am very curious to know what is going on. Anybody? – Mauricio Oliveira Nov 1 '16 at 17:09 • When you use immediate=true, it does not fire any getters/setters. I wonder if that is the issue here. Can you include the backend for the relevant properties as well? – Adrian Larson Nov 4 '16 at 16:14 • I just added more of the code to the original question. – Mauricio Oliveira Nov 4 '16 at 16:49 3 +50 Dependent picklist works fine with <apex:inputField> when you have static table. It erratically behaves with rerendering by <apex:inputField> (controlling picklist). This strange behavior you can see at your 2nd image where without selecting controlling field i.e. Fee Type, the dependent field New\Renewal is selected. Moreover, this New value is not the actual value. If you click on the dropdown you will see there is one more duplicate New is displaying like this. New/Renewal picklist Now more interesting part, you thought that since New has already been selected in dependent picklist, so you are choosing the valid Fee Type (controlling) and saving this record. Right? System will allow you to save but with blank dependent value. Updated Workarounds after more investigation: 1. You already have removed immediate= true from Remove button property. That's nice. 2. Remove required=true property from Fee Type and License Category inputFields. 3. To show both of them required (red vertical color), wrap those fields by <div class="requiredInput"><div class="requiredBlock"></div></div> So, onclick of Remove button, system will not stop you removing the entries even if mandatory fields are not entered which is currently doing by actionSupport function. code will look like this: <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.Fee_Type__c.Label}"> <div class="requiredInput"><div class="requiredBlock"></div> <apex:inputField value="{!s.Fee_Type__c}"> <apex:actionSupport event="onchange" reRender="lc"/> </apex:inputField> </div> </apex:column> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.Quantity.Label}"> <apex:inputField value="{!s.Quantity}" style="width:70px" required="true"/> </apex:column> <apex:column headerValue="{!$ObjectType.QuoteLineItem.Fields.License_category__c.Label}"> <div class="requiredInput"><div class="requiredBlock"></div> <apex:inputField value="{!s.License_category__c}" id="lc"/> </div> </apex:column> 1. Now to validate entries during save, write a small validation logic that Fee Type and License Category are mandatory, like below: try{ if(shoppingCart.size()>0) { for(quoteLineItem obj:shoppingCart) { if(String.isBlank(obj.Fee_Type__c) || String.isBlank(obj.License_category__c)) { throw new MyCustomException('Fee Type and License Category are mandatory'); } } upsert(shoppingCart); } } catch(Exception e){ ApexPages.addMessages(e); return null; } 2. create myCustomException class which will throw custom exceptions. public class MyCustomException extends Exception{} Finally, during test, if you try to save without providing mandatory inputs it will throw the error like this. Fee Type is mandatory • Thank you for the detailed answer. I will think about the proposed workaround, but it will be an extra effort to keep that visualforce selectList control updated, since sometimes I need to change those options from QLI fields, so I would need to update the visualforce page as well (and remember to do so). As I said before, I already removed the 'immediate=True' restriction and now new items are validated before removing an item in the cart, which stopped the weird behavior I declared.I will wait some time to see other errors still keep occurring before taking other actions. Thanks. – Mauricio Oliveira Nov 8 '16 at 10:38 • I have updated entire answer and remembering my advanced dev assignment days. It will solve your issue with nicer UI experience. – Santanu Boral Nov 9 '16 at 1:06 • 1 Thank you @Santanu, I liked the idea of making them required through div and checking through code. Will add that strategy to my skill set for this and next developments. Thanks for your time. – Mauricio Oliveira Nov 9 '16 at 11:14 3 I found a similar problem on the developer forums from 2008 - If immediate="true" rerender does not appear to work. The general problem was that immediate="true" wasn't well suited to scenarios where you would bypass validation on inputfields and then come back to the page to continue working with the controls that had been bypassed. The recommendation in that post was to remove the required attributes from the Visualforce markup and move the validation into the server side controller. Hence you would no longer need to use the immediate attribute. You could compliment this with your own JavaScript to do the validation as required. • Thanks, @Daniel. Good to know that it seems to be a real issue, I was really surprised by this behavior I got. I did remove already the immediate=true. This changes the page behavior and user workflow a little bit, but it seems to be more consistent. I honestly don't like adding JavaScript to control page behavior in Salesforce (also don't have much expertise with it), but I will consider this option if the problem persists. Thank you. – Mauricio Oliveira Nov 8 '16 at 10:43 Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.678267
Why Flutter Is Better Than React Native Nowadays, programmers have two competitive cross-platform application development choices: Flutter and React Native. We can use both frameworks to build cross-platform mobile apps and desktop apps. Both frameworks indeed look similar from the outside and in the available features. Hopefully, you have already read many comparisons and reviews about Flutter and React Native. Many developers think that Flutter won’t be widely used because it uses an unfamiliar programming language, Dart. A programming language is just an interface for developers to interact with the framework. How a particular framework solves the cross-platform development problem is more important than the popularity of a specific framework’s programming language. I did some quick research on the internal architecture of both Flutter and React Native. Also, I created several applications on various platforms using both frameworks. Finally, I found the following benefits if you develop your next awesome app with Flutter. Flutter Has a Near-Native Performance Nowadays, performance is so underrated because of powerful devices. However, users have devices with various sorts of specifications. Some users may try to run your application while running many other applications. Your application should work fine in all these conditions. Therefore, performance is still a crucial factor in modern cross-platform applications. Undoubtedly, an application written without any framework performs better than Flutter and React Native apps. But we often have to choose a cross-platform application framework for rapid feature delivery. A typical React Native app has two separate modules: native UI and JavaScript engine. React Native renders native platform-specific UI elements based on React state changes. On the other hand, it uses a JavaScript engine (it’s Hermes in most scenarios) to run the application’s JavaScript. Every JavaScript-to-native and native-to-JavaScript call goes through a JavaScript bridge, similar to Apache Cordova’s design. React Native silently bundles your application with a JavaScript engine at the end. Flutter apps don’t have any JavaScript runtimes, and Flutter uses binary messaging channels to build a bidirectional communication stream between Dart and native code. Flutter offers near-native performance for calling native code from Dart because of this binary messaging protocol and Dart’s ahead-of-time (AOT) compilation process. React Native apps may perform poorly when there are above-average native calls. Flutter Apps Have a Consistent UI React Native renders platform-specific UI elements. For example, your application renders native iOS UI elements if you run your application on an Apple mobile device. Each platform defines unique design concepts for its UI elements. Some platforms have UI elements that other platforms don’t have. Therefore, even a simple UI change requires testing on multiple platforms if you use React Native. Also, you cannot overcome the limitations of platform-specific UI elements. Flutter SDK defines its own UI toolkit. Therefore, your Flutter app looks the same on every operating system. Unlike React Native’s platform-specific UI elements, the Flutter team can introduce new features to each UI element. Thanks to flutter-theming, you can change your app’s theme based on the user’s settings on a particular operating system. Almost all modern apps show their brand from the app’s design concepts. Flutter motivates to build consistent user experience across all supported operating systems with a consistent GUI layer. Flutter Offers a Productive Layout System React Native has a FlexBox concept-based layout system created with the Yoga layout engine. All web developers and UI designers are familiar with CSS FlexBox styling. React Native’s layout syntax is similar to CSS FlexBox syntaxes. Many developers often struggle with advanced CSS styles, and they often let the team’s UI developers fix CSS. Therefore, if you use React Native to make your next app, you need to hire a UI developer or ask mobile developers to become familiar with CSS FlexBox syntax. Flutter has a widget tree-based layout system. In other words, Flutter developers typically define widgets in a render-tree-like data structure by overriding the build method. They can imagine how each widget will render on the screen. Additional UI developers or FlexBox experience for existing developers is not required if you chose Flutter. Even a backend engineer can be familiar with this widget-tree concept quickly rather than the FlexBox concept. You can increase the feature development speed of your cross-platform app thanks to Flutter’s tree-based layout system. When the application layout becomes complex, programmers can group widgets into different sections by assigning them to different Dart variables. Flutter Officially Supports All Popular Platforms React Native officially supports only Android and iOS platforms. However, there are several forks of React Native that support desktop platforms. For example, Proton Native generates Qt and wxWidgets-based cross-platform desktop applications from React Native codebases. But Proton Native is not actively maintained now, and there is an active fork of it: Valence Native. Also, Microsoft maintains two React Native forks: react-native-windows and react-native-macos. If you wish to build a desktop application for your existing React Native app, there are several choices. Every popular React Native library doesn’t support all these forks. Also, there is no full-featured React Native fork for Linux yet. Flutter officially supports Android, iOS, Linux, Windows, macOS, Fuchsia, and Web. All supported operating systems use the same rendering backend, Skia. Flutter motivates all plugin developers to add implementations for all platforms by providing a high-performance Dart-to-Native binary communication mechanism and compromised documentation. Therefore, almost all popular Flutter plugins will work on all supported platforms. Your Flutter App Will Natively Run on Fuchsia Probably you already know that Google is developing a new operating system from scratch, Fuchsia. The microkernel-architecture-based Zircon kernel powers Fuchsia. According to Wikipedia, Google’s idea is to make Fuchsia a universal operating system that supports almost all devices (including embedded devices such as digital watches and traffic light systems). Google is building Fuchsia from many learnings from all existing platforms. Therefore, there is a higher probability for Fuchsia to become successful in the operating systems market. Fuchsia is implementing the Starnix module to run Linux binaries inside Fuchsia. The Starnix module is still a very experimental module, according to its design documentation. Apparently, they are trying to run Linux binaries by running the Linux kernel in a Docker-like container. Therefore, your React Native app won’t work on Fuchsia as a truly native app. If someone wishes to add a Fuchsia backend for React Native, someone needs to make another fork like react-native-windows. Flutter SDK may become the default GUI application development kit on Fuchsia. Therefore, your Flutter app will work natively on Fuchsia. Conclusion The React Native project is two years older than the Flutter project, and the entire React community backs it. Flutter’s community still is new and growing because Flutter doesn’t use Angular, and Dart wasn’t a popular general-purpose programming language earlier, like JavaScript. We still cannot compare Flutter’s features with other mature cross-platform frameworks. But Flutter has solved the cross-platform problem via the most effective approach. Both frameworks run on top of a native host application. React Native cannot improve its performance the same as Flutter because of its JavaScript runtime-based architecture. Try building apps with Flutter, and don’t be afraid by thinking that Dart is an unfamiliar language. Leave a comment: Your email address will not be published. Required fields are marked * Top Ready to Grow Your Business Online. Just Give Us a Call: phone +91 8767612340 Or - Fill out the form below and we'll be in touch within 24 hours.
__label__pos
0.65291
0 While creating responsive layouts for mobile apps, we take into account WIDTH of screen, and create layouts for different WIDTHS, but why don't we consider HEIGHT? If 2 devices have a lot of difference in HEIGHT, my app in one device will look drastically different from the one in other device. Doesn't responsive layout mean that my app should look the same in devices of all widths and heights? 2 • Feel free to consider height in your apps when necessary. I do. – Luciano Commented May 11, 2021 at 9:53 • 1 Why is it that you think that height is not taken into consideration? Also, responsive layout is not specifically about making it look the same in different screen sizes, it's about making it functional in them. – musefan Commented May 11, 2021 at 12:00 2 Answers 2 0 Good UI design should consider height -- for example, it should avoid creating "false floors" and not make the user scroll for ages (unless they want to, such as with Twitter). With users controlling the font size on their mobile devices, it's very challenging to force all content to fit into a certain width and height without the overall design breaking. Left-to-right scrolling has not only been shown to be undesirable, it's too close to swiping gestures on mobile. Vertical scrolling is acceptable for most users. -1 Preface When we're talking about the interfaces, we're obliged to take into account the devices that makes us possible to display the content we aim to see. So that the main medium when the term responsivity was introduced was computer screens/displays and web through computers. As you might imagine displays of all the computers/notebooks, the screens are rectangular on default and people using these displays intend to make browsers/applications minimized via reducing it's width rather than the height as it opens more broader and wider area. Answers Doesn't responsive layout mean that my app should look the same in devices of all widths and heights? Well, it's just because in these days mostly people are mobile users and the term need to be updated as the term responsivity originally also covers the same issue. Or there might be attempts to describe the concept differentially as only mobile users ran into this seperation since web is mostly on rectangular screens. If 2 devices have a lot of difference in HEIGHT, my app in one device will look drastically different from the one in other device Besides you're totally correct and even it's the same for width for development point of view, it might mostly happening beacuse the heights are used %100 to fit all the screen sizes for mobile preferably. But that's not my main arguement and point to avoid using responsivity as a height on mobile or web eventually. 2 • 1 that doesn't answer the question – Luciano Commented May 11, 2021 at 9:55 • well, then read again @Luciano. Commented May 11, 2021 at 9:56 Your Answer By clicking “Post Your Answer”, you agree to our terms of service and acknowledge you have read our privacy policy. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.94017
Qlik Community Ask a Question QlikView App Development Discussion Board for collaboration related to QlikView App Development. Announcements Welcome to our newly redesigned Qlik Community! Read our blog to learn about all the new updates: READ BLOG and REPORTED ISSUES cancel Showing results for  Search instead for  Did you mean:  Not applicable top 3 values and corresponding attributes ProductSourceDestinationCostCost 2Total Cost HandbagAA10 5 5 Handbag2BB1101020 Handbag3CC1201535 Handbag4DD1301040 Handbag5EE1402060 Right now, I'm using the above example where I'm using =FirstSortedValue(Source, [Total Cost],1) Is there any way that if my data had 0 as one of the cost, not to include it as the cheapest source? For example, the cheapest source would be B, C, and D. 1 Solution Accepted Solutions Creator III Creator III Based on Simen's suggestion this is more straightforward. =FirstSortedValue({<Cost = {">0"},[Cost 2] = {">0"}>}Product, [Total Cost]) View solution in original post 10 Replies Creator III Creator III =IF(FirstSortedValue(Source, [Cost],1) = 0 or FirstSortedValue(Source, [Cost2],1) = 0, FirstSortedValue(Source, [Total Cost],2), FirstSortedValue(Source, [Total Cost],1)) Not applicable This doesn't seem to work properly... it gives me the same value as the =FirstSortedValue(Source, [Total Cost],1)) even though the cost2 is 0. Any other ideas? Creator III Creator III IF(FirstSortedValue(aggr(Sum(Cost),Source),Cost) = 0 or FirstSortedValue(aggr(Sum([Cost 2]),Source),[Cost 2]) = 0,   FirstSortedValue(aggr(Sum([Total Cost]),Source),[Total Cost],2),FirstSortedValue(aggr(Sum([Total Cost]),Source),[Total Cost])) Not applicable This gives me the value. What do I do to get the field associated with that value? For example, instead of 20 in your example. I should see Handbag2. Creator III Creator III IF(FirstSortedValue(Cost,aggr(Sum(Cost),Source)) = 0 or FirstSortedValue([Cost 2],aggr(Sum([Cost 2]),Source),[Cost 2]) = 0,   FirstSortedValue(Product,aggr(Sum([Total Cost]),Product),2),FirstSortedValue(Product,aggr(Sum([Total Cost]),Product))) Partner Partner Firstsortedvalue takes set analysis. How about: =FirstSortedValue({$<Cost={">0"}>}Source, [Total Cost],1) Regards SKG Creator III Creator III Based on Simen's suggestion this is more straightforward. =FirstSortedValue({<Cost = {">0"},[Cost 2] = {">0"}>}Product, [Total Cost]) View solution in original post Not applicable =FirstSortedValue({<Cost = {">0"},[Cost 2] = {">0"}>}Product, [Total Cost],1) seems to do the trick. Now how do I find the Cost and Cost 2 associated with that? What is the right calculation to get the Cost and Cost 2? From Calculation above: cheapest: Product, Source, Destination, Cost, Cost 2, Total Cost Handbag2, B1, 10, 10, 20 Creator III Creator III =FirstSortedValue({<Cost = {">0"},[Cost 2] = {">0"}>}Product, [Total Cost]) Replace Product in bold with what field you want to grab.  You Don't need the 1 at the end because it is implied if left out.
__label__pos
0.979086
View difference between Paste ID: F0GLtrx5 and fqR4CMbV SHOW: | | - or go back to the newest paste. 1 /* 2- MAA2 v0.41 mod 2+ MAA2+ v0.42 mod 3 ========= 4 5- Updated version of the MAA antialising script from AnimeIVTC. 5+ Updated version of the MAA antialising script from AnimeIVTC. 6- MAA2 uses tp7's SangNom2 and FTurn, which provide a nice speedup for SangNom-based antialiasing, 6+ MAA2 uses tp7's SangNom2, which provide a nice speedup for SangNom-based antialiasing, 7 especially when only processing the luma plane. 8 The defaults of MAA2 match up with MAA, so you'll get identical output (save for the more accurate border region processing of SangNom2) 9 when using this script as a drop-in replacement. 10 11- MAA2 supports Y8, YV12 and YV24 input. 11+ MAA2 supports Y8, YV12, YV16 and YV24 input. 12 13 Requirements: 14 15- * AviSynth 2.6a4 15+ * AviSynth+ 16 * SangNom2 0.3+ 17- * FTurn 17+ * Masktools 2.0b1 18- * Masktools 2.0a48 18+ 19- 19+ 20 21 + [int] mask (1) 22- + [int] mask (-200) 22+ 23 * 1: Enable masking 24- * 1: Enable masking 24+ * -i: Enable masking with custom treshold 25- * -i: Enable masking with custom treshold (sensible values are between 0 and 30) 25+ 26 * false: Don't process chroma channels (copy UV from the source clip if present) 27 * true: Process chroma channels 28 + [float] ss (2.0) 29 * Supersampling factor (sensible values are between 2.0 and 4.0 for HD content) 30 + [int] aa (48) 31 * Sangnom2 luma antialiasing strength 32 + [int] aac (aa-8) 33 * Sangnom2 chroma antialiasing strength 34 + [int] threads (4) 35 * Number of threads used by every Sangnom2 instance 36 + [int] show (0) 37 * 0: Don't overlay mask 38 * 1: Overlay mask only 39 * 2: Overlay mask and run antialiasing on the luma plane 40 + [int] maskt (1) 41- 41+ * 1: sobel 42 * 2: min/max 43 44- function maa2(clip c, int "mask", bool "chroma", float "ss", int "aa", int "aac", int "threads", int "show") 44+ 45 46 function maa2(clip c, int "mask", bool "chroma", float "ss", int "aa", int "aac", int "threads", int "show", int "maskt") 47- mask = Default(mask, -200) 47+ 48 chroma = Default(chroma, false) 49- show = Default(show, 0) 49+ mask = Default(mask, 1) 50- uv = (chroma) ? 3 : 1 50+ maskt = Default(maskt, 1) 51 mtresh = (mask < 0) ? -mask : 7 52- Assert(c.IsY8 || c.IsYV12 || c.IsYV24, "MAA2: Input must be Y8, YV12 or YV24") 52+ show = Default(show, 0) 53 uv = (chroma) ? 3 : 1 54 55- m = (mask != 0) ? c.mt_edge("min/max",0,mtresh,0,mtresh-6,u=uv,v=uv).mt_inflate(u=uv,v=uv) : nop() 55+ Assert(c.IsY8 || c.IsYV12 || c.IsYV24 || c.IsYV16, "MAA2: Input must be Y8, YV12, YV16 or YV24") 56 Assert(0 <= show <= 2, "MAA2: Parameter 'show' must be between 0 and 2") 57- c_aa = (chroma && show==0) ? c.Sangnom2AA(ss,aa,aac,threads) : c.ConvertToY8().Sangnom2AA(ss,aa,threads=threads) 57+ 58 # create mask 59- c_aa = (show==1) ? (c.IsY8) ? c_aa.ConvertToYV12().mt_lut(y=2, u=0,v=0) 59+ if (mask != 0) { 60- \ : c.mt_lut("x 2 /", y=2, u=3,v=3) 60+ m = (maskt != 1) ? c.mt_edge("min/max", 0, mtresh, 0, mtresh-6, u=uv, v=uv).mt_inflate(u=uv, v=uv) : c.mt_edge("sobel", mtresh, mtresh, mtresh-6, mtresh-6, u=uv, v=uv).mt_inflate(u=uv, v=uv) 61- \ : (show==2) ? (c.IsY8) ? c_aa.ConvertToYV12().mt_lut(y=2, u=0,v=0) 61+ } 62- \ : YtoUV(c.UtoY8(), c.VtoY8(), c_aa).mt_lut("x 2 /", y=2, u=3,v=3) 62+ 63- \ : c_aa 63+ # run sangnom2-based aa 64 if (!chroma || show > 0) { 65- return (mask !=0) ? (show > 0) ? (c.IsYV24) ? mt_merge(c,c_aa,m.YtoUV(m,m),u=3,v=3) 65+ c_aa = c.ConvertToY8().Sangnom2AA(ss, aa, threads=threads) 66- \ : mt_merge(c.ConvertToYV12(),c_aa,m,u=3,v=3, luma=true) 66+ } 67- \ : (chroma) ? mt_merge(c,c_aa,m,u=3,v=3) 67+ else if (c.IsYV16) { 68- \ : mt_merge(c,c_aa,m,u=2,v=2) 68+ c_aa_u = c.UtoY8().Sangnom2AA(ss, aac, threads=threads) 69- \ : c.mt_logic(c_aa,"and", y=4, u=2, v=2) 69+ c_aa_v = c.VtoY8().Sangnom2AA(ss, aac, threads=threads) 70 c_aa = YToUV(c_aa_u, c_aa_v, c.ConvertToY8().Sangnom2AA(ss, aa, threads=threads)) 71 } 72 else { c_aa = c.Sangnom2AA(ss, aa, aac, threads) } 73 74- threads = Default(threads, 4) 74+ # prepare chroma planes for mask overlay 75 if (show == 1) { 76 c_aa = (c.IsY8) ? c.ConvertToYV12().mt_lut(y=2, u=0, v=0) 77- aac = (aac<0) ? 0 : aac 77+ \ : c.mt_lut("x 2 /", y=2, u=3, v=3) 78 } 79 else if (show == 2) { 80 c_aa = (c.IsY8) ? c_aa.ConvertToYV12().mt_lut(y=2, u=0, v=0) 81 \ : YtoUV(c.UtoY8(), c.VtoY8(), c_aa).mt_lut("x 2 /", y=2, u=3, v=3) 82 } 83 84- return c.Spline36Resize(ss_w,ss_h).FTurnLeft() \ 84+ # merge aa'ed lines into source 85- .SangNom2(threads=threads, aa=aa, aac=aac).FTurnRight().SangNom2(threads=threads, aa=aa, aac=aac).Spline36Resize(c.width,c.height) 85+ if (mask == 0) { 86 return mt_logic(c_aa, "and", y=4, u=2, v=2) 87 } 88 else if (show > 0) { 89 if (c.IsYV16) { 90 m_uv = BilinearResize(m, m.width/2, m.height) 91 return mt_merge(c, c_aa, YtoUV(m_uv, m_uv, m), u=3, v=3) 92 } 93 else { 94 return (c.IsYV24) ? mt_merge(c, c_aa, m.YtoUV(m,m), u=3, v=3) 95 \ : mt_merge(c.ConvertToYV12(), c_aa, m, u=3, v=3, luma=true) 96 } 97 } 98 else { 99 return (chroma) ? mt_merge(c, c_aa, m, u=3, v=3) 100 \ : mt_merge(c, c_aa, m, u=2, v=2) 101 } 102 } 103 104 function Sangnom2AA(clip c, float "ss", int "aa", int "aac", int "threads") 105 { 106 threads = Default(threads, 4) 107 aa = Default(aa, 48) 108 aac = Default(aac, aa-8) 109 aac = (aac < 0) ? 0 : aac 110 ss = Default(ss, 2.0) 111 ss_w = int(round(c.width*ss/4.0)*4) 112 ss_h = int(round(c.height*ss/4.0)*4) 113 114 Assert(ss > 0, "MAA2: Supersampling factor must be > 0") 115 116 return c.Spline36Resize(ss_w, ss_h).TurnLeft() \ 117 .SangNom2(threads=threads, aa=aa, aac=aac).TurnRight().SangNom2(threads=threads, aa=aa, aac=aac).Spline36Resize(c.width, c.height) 118 }
__label__pos
0.972022
How do I refresh disk space in Linux? How do I refresh disk space in Linux? To free up space, do these steps: 1. Run sudo lsof | grep deleted and see which process is holding the file. 2. Kill the process using sudo kill -9 {PID} . 3. Run df to check if space is already freed up. How do I reset a directory in Linux? To change to your home directory, type cd and press [Enter]. To change to a subdirectory, type cd, a space, and the name of the subdirectory (e.g., cd Documents) and then press [Enter]. To change to the current working directory’s parent directory, type cd followed by a space and two periods and then press [Enter]. How do I clean up a directory in Linux? To remove a directory and all its contents, including any subdirectories and files, use the rm command with the recursive option, -r . Directories that are removed with the rmdir command cannot be recovered, nor can directories and their contents removed with the rm -r command. How do I list all hard drives in Linux? List Disks on Linux using lsblk 1. The easiest way to list disks on Linux is to use the “lsblk” command with no options. 2. Awesome, you successfully listed your disks on Linux using “lsblk”. 3. In order to list disk information on Linux, you have to use the “lshw” with the “class” option specifying “disk”. How do you recalculate disk space? 1 Answer 1. Start CMD in an elevated prompt. 2. Run ‘diskpart’ 3. Run ‘Rescan Devices’ 4. Run LIST VOLUMES to get the list of volumes. 5. Run SELECT VOLUME #, where # is the number of the volume needing expansion. 6. Run EXTEND to expand the volume into the newly visible free space. How do I change directory to D drive in Linux? How to change directory in Linux terminal 1. To return to the home directory immediately, use cd ~ OR cd. 2. To change into the root directory of Linux file system, use cd / . 3. To go into the root user directory, run cd /root/ as root user. 4. To navigate up one directory level up, use cd .. What does rm do? The rm command removes the entries for a specified file, group of files, or certain select files from a list within a directory. What is SDA and SDB in Linux? dev/fd1 – The second floppy drive. dev/sda – The first SCSI disk SCSI ID address-wise. dev/sdb – The second SCSI disk address-wise and so on. Is TreeSize safe to use? TreeSize has received a 4.5 out of 5 star average based on its high levels of customer satisfaction and likeliness to recommend ratings from real G2 Crowd users. Read the complete review. I just downloaded TreeSize Free v4. 1 and am most impressed with how much you have built into the free version of your software.
__label__pos
1
# 第十七章 前端商品分类管理 在上一章节中,我们实现了精选主题的管理页面,本章节我们将实现商品分类管理的页面。在有了前面两章内容的知识铺垫,加上本身商品分类管理的逻辑是比较简单的,所以本章节可以说是比较轻松的一个章节,大家可以当作是复习巩固。 # 商品分类列表 首先我们要来实现的商品分类列表页面的展示,按照目录规范,我们在src/views下新增一个product文件夹,在该文件夹下接着新建一个category文件夹,最后在这个文件夹下面新增一个List.vue文件并添加如下代码: <template> <div class="lin-container"> <div class="lin-title">商品分类列表</div> <div class="button-container"> <!-- 指定button类型 --> <el-button type="primary">新增分类</el-button> </div> <div class="table-container"> <el-table></el-table> </div> </div> </template> <script> export default { name: 'List', } </script> <style lang="scss" scoped> .button-container{ margin-top: 30px; padding-left: 30px; } .table-container{ margin-top: 30px; padding-left: 30px; padding-right: 30px; } </style> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 页面文件定义好之后,我们同样是先让这个页面能够正常展示出来,在src\config\stage目录我们需要新增一个配置文件product.js,并加入如下代码: const productRouter = { route: null, name: null, title: '商品管理', type: 'folder', // 类型: folder, tab, view icon: 'iconfont icon-tushuguanli', // 菜单图标 filePath: 'views/product/', // 文件路径 order: 3, inNav: true, children: [ { title: '商品分类', type: 'view', route: '/product/category', filePath: 'views/product/category/List.vue', inNav: true, icon: 'iconfont icon-tushuguanli', }, ], } export default productRouter 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 路由配置文件定义好之后,别忘了在src\config\stage\index.js中引入一下: import adminConfig from './admin' import bookConfig from './book' // 引入图书管理路由文件 import operationConfig from './operation' // 引入运营管理路由文件 import pluginsConfig from './plugins' import Utils from '@/lin/utils/util' // ------------------我是分割线------------------------ import productRouter from './product' // 引入商品管理路由文件 // eslint-disable-next-line import/no-mutable-exports let homeRouter = [ // 省略一堆代码 ............... ............... productRouter // 加载商品管理路由配置文件 ] // 省略一堆代码 export default homeRouter 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 配置完毕之后,回到我们的浏览器中,刷新一下,可以看到我们左边的菜单栏中就多出了一个商品管理,展开后点击商品分类可以看到我们之前搭好的骨架代码效果: 接着我们就要实现页面能够发起请求从后端接口获取数据并填充到页面表格中。首先我们同样是需要先定义模型方法,在src/models目录下新建一个category.js并添加如下代码: // src/models/catrogry.js import { get } from '@/lin/plugins/axios' class Category { async getCategory() { const res = await get('v1/category') return res } } export default new Category() 1 2 3 4 5 6 7 8 9 10 11 12 13 14 这里我们定义了一个模型类Catrogry,模型类下面有一个方法getCategory()用于调用后端相应的接口实现商品分类列表的查询,定义完毕之后,让我们回到List.vue中调用一下`: <template> <div class="lin-container"> <!-- 省略一堆代码 --> <div class="table-container"> <el-table :data="tableData" v-loading="loading"> <el-table-column type="index" width="80"></el-table-column> <el-table-column label="分类名称" prop="name"></el-table-column> <el-table-column label="分类描述" prop="description"></el-table-column> <!-- 操作列 --> <el-table-column label="操作" fixed="right" width="170"> <template slot-scope="scope"> <el-button type="primary" plain size="mini" @click="handleEdit(scope.row)">编辑</el-button> <el-button type="danger" plain size="mini" @click="handleDelete(scope.row)">删除</el-button> </template> </el-table-column> </el-table> </div> </div> </template> <script> import categoryModel from '../../../models/category' export default { name: 'List', data() { return { loading: false, tableData: [], } }, created() { this.getCategory() }, methods:{ async getCategory() { let res try { this.loading = true this.tableData = await categoryModel.getCategory() this.loading = false this.tableData = [...res] } catch (e) { this.loading = false } }, /** 编辑按钮点击事件 ...*/ handleEdit(row){}, /** 删除按钮点击事件 ...*/ async handleDelete(row){} } } </script> <style lang="scss" scoped> /* 省略一堆代码 */ </style> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 这里我们首先完善了<el-table>内的元素,定义了一些列,接着我们定义了一个组件方法getCategory(),这个方法里面会去调用我们刚刚定义好的模型类来实现获取数据。编辑按钮和删除按钮的点击事件回调方法我们暂时先定义一个空的函数即可,具体实现留待后面具体的小节中再来实现,这里只是为了防止编译报错。一切就绪之后,让我们回到浏览器中刷新一下,可以看到如下效果: 就是这么简单,以上就是本小节的内容,在下一小节,我们将来实现新增商品分类的页面,同样也是可以复用前面的知识点,不会有太大的难度,我们下小节再见! # 新增商品分类 在上一小节中,我们实现了对商品分类列表的查询,在这一节,我们将来实现新增商品分类的功能。与精选主题或者轮播图管理不同的是,商品分类的新增页面表单元素没有那么多,逻辑也比较简单,所以这里我们不会像之前那样去做过多的封装实现,因为没有这个必要。这里我们选择的方式是在对话框中嵌入一个表单,通过这种交互来实现商品分类新增。实现方案确定了之后,我们在List.vue中,引入一个对话框组件,并且给新增分类的按钮添加一个点击事件handleAdd(): <!-- src/views/product/category/List.vue --> <template> <div class="lin-container"> <div class="lin-title">商品分类列表</div> <div class="button-container"> <!-- 添加一个点击事件的回调方法 --> <el-button type="primary" @click="handleAdd">新增分类</el-button> </div> <div class="table-container"> <!-- 省略一堆代码 --> </div> <!-- 引入对话框组件 --> <el-dialog :title="textMap[dialogStatus]" :visible.sync="dialogFormVisible" @close="resetForm('form')"> 这里是一个表单 <div slot="footer" class="dialog-footer" style="padding-left:5px;"> <el-button @click="resetForm('form')">重 置</el-button> <el-button type="primary" @click="confirm('form')">确 定</el-button> </div> </el-dialog> </div> </template> <script> import categoryModel from '../../../models/category' export default { name: 'List', data() { return { loading: false, tableData: [], // 是否显示对话框 dialogFormVisible: false, // 对话框所属状态 dialogStatus: '', // 根据状态显示对应对话框头部的文本信息 textMap: { update: '编辑分类', create: '新增分类', }, } }, created() { this.getCategory() }, methods:{ async getCategory() {...}, /** * 新增分类按钮点击事件 */ handleAdd() { this.dialogStatus = 'create' this.dialogFormVisible = true // 重置表单状态 this.resetForm('form') }, /** 对话框确认按钮点击事件 ...*/ confirm(formName) {}, /** 对话框重置按钮点击事件 ...*/ resetForm(formName) { // TODO 重置表单状态 // 通过配置对话框组件的@close事件监听,每次关闭对话框的时候也会触发这个方法 }, /** 编辑按钮点击事件 ...*/ handleEdit(row){}, /** 删除按钮点击事件 ...*/ async handleDelete(row){} } } </script> <style lang="scss" scoped> /* 省略一堆代码 */ </style> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 这里我们引入了一个对话框组件,并且在对话框组件内部添加了一些骨架代码,同时还定义了几个新的数据对象和3个回调方法,为方便测试,我们仅先简单实现了handleAdd()这个方法里的内容,主要就是打开一个对话框,并把对话框当前的状态标识为create,即代表本次打开是用于实现新增逻辑的,并且在每次打开一个新增分类的窗口时,我们会执行一次清空表单状态的操作。然后我们就可以先到浏览器中来看看效果: 这里我们已经实现了当点击新增分类的时候弹出一个对话框的效果,并且添加了一些基础代码和事件回调。现在我们就要来实现相应的具体逻辑来达到新增一个分类的目的。首先第一步当然就是在对话框里塞进去一个表单: <!-- src/views/product/category/List.vue --> <template> <div class="lin-container"> <div class="lin-title">商品分类列表</div> <div class="button-container"> <!-- 添加一个点击事件的回调方法 --> <el-button type="primary" @click="handleAdd">新增分类</el-button> </div> <div class="table-container"> <!-- 省略一堆代码 --> </div> <!-- 引入对话框组件 --> <el-dialog :title="textMap[dialogStatus]" :visible.sync="dialogFormVisible" @close="handleClose"> <el-form ref="form" :model="temp" status-icon label-width="100px" @submit.native.prevent> <el-form-item label="名称" prop="name"> <el-input size="medium" v-model="temp.name" placeholder="分类名称"></el-input> </el-form-item> <el-form-item label="简介" prop="description"> <el-input size="medium" type="textarea" :rows="4" placeholder="分类简介" v-model="temp.description"> </el-input> </el-form-item> <el-form-item label="分类图片" prop="img.id"> <upload-imgs ref="uploadEle" :max-num="1" :value="imgData" :remote-fuc="uploadImage"/> </el-form-item> </el-form> <div slot="footer" class="dialog-footer" style="padding-left:5px;"> <el-button @click="resetForm('form')">重 置</el-button> <el-button type="primary" @click="confirm('form')">确 定</el-button> </div> </el-dialog> </div> </template> <script> import categoryModel from '../../../models/category' import UploadImgs from '@/components/base/upload-imgs', import { customImageUpload } from '../../../lin/utils/file' export default { name: 'List', components: { UploadImgs }, data() { return { // 省略一堆代码 temp: { id: null, name: '', description: '', img: { id: '', url: '', }, }, imgData: [], row: null, rules: { name: [ { required: true, message: '请输入分类名称', trigger: 'blur', }, ], description: [ { required: true, message: '分类描述不能为空', trigger: 'blur', }, ], 'img.id': [ { required: true, message: '分类图片不能为空', trigger: 'blur', }, ], }, } }, methods:{ // 省略一堆代码 /** * 新增分类按钮点击事件 */ handleAdd() { this.dialogStatus = 'create' this.dialogFormVisible = true this.temp = { id: null, name: '', description: '', img: { id: '', url: '', }, } this.resetForm('form') }, /** * 对话框重置按钮点击事件 */ resetForm(formName) { this.imgData = this.dialogStatus === 'create' ? [] : [{ imgId: this.row.img.id, display: this.row.img.url, }] // 使用this.$nextTick()可以等待dom生成以后再来获取dom对象 this.$nextTick(() => { this.$refs[formName].resetFields() }) }, /** * 对话框确认按钮点击事件 */ async confirm(formName) { this.$refs[formName].validate(async (valid) => { if (valid) { try { if (this.dialogStatus === 'create') { const res = await categoryModel.createCategory(this.temp.name, this.temp.description, this.temp.img.id) this.$message.success(res.msg) } else { // TODO 编辑场景 } this.dialogFormVisible = false await this.getCategory() } catch (e) { this.$message.error(Object.values(e.data.msg).join(';')) } } }) }, // 自定义图片上传组件上传方法 async uploadImage(file) { const res = await customImageUpload(file) // 给表单对象属性赋值,因为分类封面图仅会有一张图片,这里直接取第一个元素 this.temp.img = res[0] // 给图片上传组件赋值,因为分类封面图仅会有一张图片,这里直接取第一个元素 return Promise.resolve({ id: res[0].id, url: res[0].url, }) }, } } </script> <style lang="scss" scoped> /* 省略一堆代码 */ </style> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 首先我们往对话框组件里塞入了一个<el-form>组件,具体的内容在前面我们已经反复接触实战过了,就是定义几个表单项,添加一些配置而已。接着我们定义了几个新的数据对象,分别是对应表单数据的temp、代表当前选中行的数据row(编辑场景下重置表单状态的方法内会用到)、图片上传组件的数据对象imgData以及表单校验规则配置rules,这些都是很常规基础的配置。这里我们还需要完善一下前面定义的handleAdd()方法,我们还需要在每一次打开新增分类的窗口时,给对话框内的表单项做一个数据初始化的工作,保证每一次打开都是一个空的表单。相应的, resetForm()方法里的具体逻辑我们也需要完善,这方法里面除了调用组件提供的resetFields()方法来重置表单状态以外,还需要对图片上传组件的内容做初始化。为了兼容后面的编辑商品分类场景,这里利用一个三元表达式: this.imgData = this.dialogStatus === 'create' // 新增场景,默认是一个空数组 ? [] // 编辑场景,默认就是原来row记录里的数据 : [{ imgId: this.row.img.id, display: this.row.img.url, }] 1 2 3 4 5 6 7 8 来决定this.imgData这个数组应该被初始化成什么数据。接着我们实现了confirm()方法的具体业务逻辑,当点击对话框的“确定”按钮就会走到这里面的逻辑。这里面的逻辑也是非常简单的,就是在保证检验规则都通过的情况下,去调用相应的模型方法(这个套路在本专栏以及真实开发场景下都是很常用的基础套路),这里我们做了一个if判断,根据对话框的业务状态来决定是新增还是编辑,这里我们暂时先实现了新增相关的逻辑。在新增成功后,我们会调用一个消息通知组件给出一个新增成功的提示,并且关闭对话框然后再一次调用获取表格数据方法实现数据刷新。理清脉络之后,我们接下来自然是需要来实现一下这个模型方法,去到我们的Category模型类中,我们新增一个createCategory()模型方法: // src/models/catrogry.js import { get,post } from '@/lin/plugins/axios' class Category { handleError = true async getCategory() {...} async createCategory(name, description, topicImgId){ const res = await post('v1/category', { name, description,topic_img_id:topicImgId }, { handleError: this.handleError }) return res; } } export default new Category() 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 这里我们定义了一个createCategory()方法,方法里面就是发起一个POST请求去调用我们后端的新增商品分类接口,定义好模型方法之后,我们整个新增商品分类的代码实现就算完成了,但是别忘了我们刚刚还在List.vue中定义了一个自定义的图片上传方法,虽然内容也是基本操作,这里当做复习,也给大家讲解一下。我们再次回到List.vue中: <!-- src/views/product/category/List.vue --> <template> <!-- 省略一堆代码 --> </template> <script> import categoryModel from '../../../models/category' import UploadImgs from '@/components/base/upload-imgs', import { customImageUpload } from '../../../lin/utils/file' export default { name: 'List', components: { UploadImgs }, data() {...}, methods:{ // 省略一堆代码 /** * 对话框确认按钮点击事件 */ async confirm(formName) {...}, // 自定义图片上传组件上传方法 async uploadImage(file) { const res = await customImageUpload(file) // 给表单对象属性赋值,因为分类封面图仅会有一张图片,这里直接取第一个元素 this.temp.img = res[0] // 给图片上传组件赋值,因为分类封面图仅会有一张图片,这里直接取第一个元素 return Promise.resolve({ id: res[0].id, url: res[0].url, }) }, } } </script> <style lang="scss" scoped> /* 省略一堆代码 */ </style> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 这里我们同样需要用到图片上传组件以及自定义图片上传的方法,有了前面章节的铺垫,这里我们只需要直接复用我们前面定义好的customImageUpload()方法即可自定义的图片上传。到这里,我们的新增商品分类功能就已经实现完毕了,大家可以自行到浏览器中测试一下效果。以上就是本小节的内容,在下一小节,我们就将来实现一下编辑商品分类功能。 # 编辑商品分类 在上一小节中,我们实现了新增商品分类,在本小节中我们就来实现下编辑商品分类。功能实现的思路就是当点击表格中某一行记录的编辑按钮时,能够弹出一个对话框,点击确定后实现保存数据的更改,思路很简单,很多代码也是复用上一小节的。首先我们先具体实现下编辑按钮的点击事件回调方法: <!-- src/views/product/category/List.vue --> <template> <!-- 省略一堆代码 --> </template> <script> import categoryModel from '../../../models/category' import UploadImgs from '@/components/base/upload-imgs', import { customImageUpload } from '../../../lin/utils/file' export default { name: 'List', components: { UploadImgs }, data() {...}, methods:{ // 省略一堆代码 /** * 编辑按钮点击事件 * @param row */ handleEdit(row) { // 将当前行记录的数据保存起来,用于重置表单的时候恢复图片上传组件的内容 this.row = row // 将当前行的数据赋值到表单数据对象,用于初始化表单数据 // 注意这里需要考虑深拷贝还是浅拷贝的问题 // 如果直接使用this.temp = row(浅拷贝),会发现后面在修改数据的同时,表格的数据也会同步发生变化。 this.temp = JSON.parse(JSON.stringify(row)) // 初始化图片上传组件的图片内容 this.imgData = [{ imgId: row.img.id, display: row.img.url, }] this.dialogStatus = 'update' this.dialogFormVisible = true }, /** * 对话框确认按钮点击事件 */ async confirm(formName) {...}, // 省略一堆代码 } } </script> <style lang="scss" scoped> /* 省略一堆代码 */ </style> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 前面我们定义了handleEdit()方法但是没有实现它的具体逻辑,现在就是时候派上用场了,方法内我们做的事情和新增的时候是差不多的,区别就是多了初始化表单数据和保存当前行记录数据。方法实现好之后,我们就可以到浏览器中来看看效果了,尝试点击表格中某行一行记录的编辑按钮: 效果出来了,这里我们同样是会复用上一小节中所使用的对话框,只不过做了一些行为上的调整,每次会把行记录填充到表单中。同样的,当我们修改完数据之后,点击”确定“按钮,也是会进到我们前面定义的confirm()方法中。在上一小节中,我们只在方法中实现了关于新增分类的业务逻辑,这里我们就需要来补充完善编辑分类的业务逻辑了: <!-- src/views/product/category/List.vue --> <template> <!-- 省略一堆代码 --> </template> <script> import categoryModel from '../../../models/category' import UploadImgs from '@/components/base/upload-imgs', import { customImageUpload } from '../../../lin/utils/file' export default { name: 'List', components: { UploadImgs }, data() {...}, methods:{ // 省略一堆代码 /** * 编辑按钮点击事件 * @param row */ handleEdit(row) {...}, /** * 对话框确认按钮点击事件 */ async confirm(formName) { this.$refs[formName].validate(async (valid) => { if (valid) { try { if (this.dialogStatus === 'create') { // 省略新增场景的业务逻辑代码 } else { // TODO 编辑场景 const res = await categoryModel.editCategory(this.temp.id, this.temp.name, this.temp.description, this.temp.img.id) this.$message.success(res.msg) } this.dialogFormVisible = false await this.getCategory() } catch (e) { this.$message.error(Object.values(e.data.msg).join(';')) } } }) }, // 省略一堆代码 } } </script> <style lang="scss" scoped> /* 省略一堆代码 */ </style> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 这里我们仅需要在Category模型类中新增一个编辑商品分类的方法然后调用即可,打开模型类文件并添加如下方法: // src/models/catrogry.js import { get,post,put } from '@/lin/plugins/axios' class Category { handleError = true async getCategory() {...} async createCategory(name, description, topicImgId){...} async editCategory(id, name, description, topicImgId) { const res = await put(`v1/category/${id}`, { name, description, topic_img_id: topicImgId, }, { handleError: this.handleError }) return res } } export default new Category() 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 这里我们在Category模型类中新增了一个editCategory()方法,方法内就是发起一个PUT类型的HTTP请求去调用后端编辑商品的接口,模型方法定义完毕之后,我们就可以到浏览器中测试一下效果了,各位读者可以自行到页面中点击某条商品分类记录的编辑按钮,尝试修改并确认提交验证下效果。 # 删除商品分类 在上一小节中,我们实现了编辑商品分类的页面功能,在这一小节,我们将实现商品管理的最后一个页面功能,即删除商品分类。功能实现的思路就是当点击表格中某一行记录的删除按钮时,实现记录的删除。 这里更标准的做法是同样需要先弹出一个对话框让用户确认是否删除,确认后再调用相应的接口。具体实现可以参考前面章节的内容,因为本身逻辑很简单也比较重复,这里为了节约篇幅,就只作一个简单的实现,读者可以自行根据需要去定制实现。 打开商品分类管理的List.vue,找到我们前面预先定义好的”删除“按钮点击事件回调方法,添加如下代码: <!-- src/views/product/category/List.vue --> <template> <!-- 省略一堆代码 --> </template> <script> import categoryModel from '../../../models/category' import UploadImgs from '@/components/base/upload-imgs', import { customImageUpload } from '../../../lin/utils/file' export default { name: 'List', components: { UploadImgs }, data() {...}, methods:{ // 省略一堆代码 /** * 删除按钮点击事件 * @param row */ async handleDelete(row) { this.loading = true try { const res = await categoryModel.delCategory([row.id]) this.$message.success(res.msg) this.loading = false await this.getCategory() } catch (e) { this.loading = false this.$message.error(e.msg) } }, } } </script> <style lang="scss" scoped> /* 省略一堆代码 */ </style> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 这里的实现比较简单,我们会在进入回调的时候改变一下表格的loading状态,让用户能够感知到正在删除。接着会去调用模型的方法实现接口调用来删除记录,成功后弹出一个消息通知并重新加载表格数据。接着我们就到模型类中来实现一下这个方法,打开Category模型类,我们新增一个方法: // src/models/catrogry.js import { get,post,put, _delete } from '@/lin/plugins/axios' class Category { handleError = true async getCategory() {...} async createCategory(name, description, topicImgId){...} async editCategory(id, name, description, topicImgId) {...} async delCategory(ids) { const res = await _delete('v1/category', { ids }) return res } } export default new Category() 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 模型方法定义完毕之后,我们就可以到浏览器中测试一下效果了,各位读者可以自行到页面中点击某条商品分类记录的删除按钮,尝试删除一条记录验证下效果。 # 章节回顾 在这个章节中,我们实现了前端部分商品分类管理的页面功能,如同一开始提到的一样,本章节的内容因为本身业务逻辑简单,加上有了前面两个比较复杂的章节内容铺垫,这个章节的内容实现起来可以说是没有什么难度的。其实很多时候我们在开发的时候经常会面临这种开发体验,特别是在做CMS类型的项目的时候,就是很纯粹的CURD。坦白说这个过程是比较枯燥,因为很多都是重复的操作。但是如果在时间允许的情况下,其实有时候我们可能再多点思考和尝试。比如说重复的地方我们是不是可以进行封装和抽象,这样以后在使用的时候可以更方便的调用。或者是在开发前端页面的时候,我们是不是可以在交互上做一些优化和改进,让用户在使用的时候用户体验更佳等等。这也是在重复工作中寻找乐趣或者突破的一种方式,特别是对于那些有一定工作经验但是感觉技术提升遇到瓶颈的同学,很容易在这个过程中找一些新的方向和领悟。本章节的内容就先到此为止了,读者们应该都觉得这章没什么难度,内容不多。是的,正如我们开头说的,这章会比较轻松,只是给大家喘口气缓一缓,权当复习。在下一个章节我们将进行商品库管理的页面模块开发,商品数据作为整个项目的核心基础数据,这部分的前端页面交互会比较复杂(可自行到线上demo体会),但复杂不等于困难,也不需要太担心。各位读者们做好准备就进入下一章的学习吧! 最后更新: 2020-08-02 03:50:30 0/140 评论 0 暂无评论 • 上一页 • 首页 • 1 • 尾页 • 下一页 • 总共1页
__label__pos
0.967128
1. Check out our companion app, Forums for Android! Download from Google Play My buttons won't work... Discussion in 'Application Development' started by DragonRider71, Oct 6, 2012. 1. DragonRider71 DragonRider71 New Member Thread Starter 5 Oct 6, 2012 1 0 5 I have been trying for the last 3-4 days to get my buttons to work and I can't figure out what my problem is. I feel like it is so simple, yet I just don't see it. Current task: Getting the buttons to display a Toast message so that I can ensure the buttons are working. Next task: Create an increase and decrease counter......first things first though I appreciate the help. Code (Text): 1.   2.   3. package com.sports.discgolfou; 4.   5. import android.os.Bundle; 6. import android.app.Activity; 7. import android.view.View; 8. import android.view.View.OnClickListener; 9. import android.widget.Button; 10. import android.widget.TextView; 11. import android.widget.Toast; 12.   13. public class hole_1 extends Activity implements View.OnClickListener{ 14.   15.     @Override 16.     public void onCreate(Bundle savedInstanceState){ 17.         super.onCreate(savedInstanceState); 18.         setContentView(R.layout.activity_page2); 19.         20.         //TextView SC = (TextView) findViewById(R.id.labelShotCounter); 21.         Button ButtonAdd = (Button) findViewById(R.id.buttonAdd); 22.         ButtonAdd.setOnClickListener(this); 23.         Button ButtonMinus = (Button) findViewById(R.id.buttonMinus); 24.         ButtonMinus.setOnClickListener(this); 25.         26.     }//End onCreate 27.   28.   29.     @Override 30.     public void onClick(View v) { 31.         // TODO Auto-generated method stub 32.         if(v.getId() == R.id.buttonAdd) 33.         { 34.             Toast.makeText(this, "Hello World", Toast.LENGTH_LONG).show(); 35.         } 36.         else if(v.getId() == R.id.buttonMinus) 37.         { 38.             Toast.makeText(this, "GoodBye World", Toast.LENGTH_LONG).show(); 39.         } 40.             41.         42.     }//End onClick 43.     44.     45.     46. }//End Activity 47.   48.     Advertisement 2. olijf olijf Member 15 Sep 23, 2012 19 0 15 Male school Rotterdam, Holland did you set the button property's right? onClick : onClick and clickable checked. if that isnt it, post your layout.xml. we can't really see what's going on now. EDIT: make sure you named everything EXACTLY right. uppercase lowercase etc.   3. jonbonazza jonbonazza Well-Known Member 163 Jul 13, 2010 1,934 458 163 Male Your code is correct and should work. The only thing I can guess that might be the problem is that your ids that you are referencing are not the right ids. Can you post your layout xml?   Share This Page Loading...
__label__pos
0.572809
Take the 2-minute tour × Stack Overflow is a question and answer site for professional and enthusiast programmers. It's 100% free. How would i get a table with both horizontal and vertical headers? So e.g header1 header2 header3 header1 1 1 1 header2 2 2 2 header3 3 3 3 share|improve this question      Is it even possible?? Someone, help!!! I need to finish this!!! –  will_code_for_food Sep 8 '11 at 21:49 3 Answers 3 Like @UlrichSchwarz said, you can just use <th> instead of <td> in the first column. Using scope, you can make it more semantically descriptive: jsfiddle <table> <tr> <th></th> <th scope="col">header1</th> <th scope="col">header2</th> <th scope="col">header3</th> </tr> <tr> <th scope="row">header 1</th> <td>1</td> <td>1</td> <td>1</td> </tr> <tr> <th scope="row">header 2</th> <td>2</td> <td>2</td> <td>2</td> </tr> <tr> <th scope="row">header 3</th> <td>3</td> <td>3</td> <td>3</td> </tr> </table> share|improve this answer While you can still just <th> the entries in the first column, there is no column-equivalent of <thead>/<tbody> that I'm aware of. share|improve this answer easy. Just leave the first td in the first tr - empty share|improve this answer Your Answer   discard By posting your answer, you agree to the privacy policy and terms of service. Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.593048
Exercise » Gallery (Create Structure) This page is in progress… Make it Work Cross Browser You probably can’t tell in your browser, but your structure won’t work in Internet Explorer 6–8. Not yet anyway. Add html5shiv.js for IE 6–8 While modern browsers recognize HTML5 elements like header, Internet Explorer 6-8 don’t. Html5shiv.js fixes this problem. Add html5shiv.js to your page the same way you added respond.js to your page: 1. Go to https://github.com/aFarkas/html5shiv 2. Download the zip file (bottom of the right-hand column) 3. Unarchive the zip file on your computer 4. Put the file “html5shiv.js” in the same folder as your HTML document 5. Add the the javascript syntax in the conditional comment you already have in the head of your HTML document.  <!--[if lt IE 9]>  <script type="text/javascript" src="respond.js"></script> <script type="text/javascript" src="html5shiv.js"></script>  <![endif]--> Display:block HTML5 Elements for IE 6–8 Since IE 6-8 don’t recognize HTML5 elements like header, they also don’t know to treat these elements as block elements (elements with a break before and after them, like divs and headings). To fix this, always add a line of syntax to your html5 elements, like so (new syntax in bold): header{ background-color:#CCEEFF; display:block; }   Finding a Good Measure A good measure (line length) is 45-85 characters per line. A quick way to check whether you have a good measure as you design your layout is to highlight the text between 45 and 85 characters on a line. Highlight 45 – 85 Characters In the bibliography lesson, you created a class and applied it to a p element so the entire intro paragraph had unique styling. This time, you only want some of the text to have unique styling, so you’ll create a class and apply it to a span of text in a paragraph. In the CSS, create a class called “measure” with a yellow background like so: .measure{ background-color:#FFEE99; } In the HTML, put your cursor right in front of the very first character of the description for the first film. Start counting characters (including spaces and punctuation). When you get to 45 characters, type the following syntax (new syntax in bold): a fun<span> and Count 40 more characters (including spaces and punctuation). When you get to 40 more characters (for a total of 85 characters), type the following syntax (new syntax in bold): a fun<span> and crusading journey into the digestiv</span>e tract You have wrapped a span around the characters from 45-85 in the first paragraph of text. Apply the class “measure” to this span, just like you applied it to the p in the bibliography, like so (new syntax in bold): a fun<span class="measure"> and crusading journey into the digestiv</span>e tract View Your Web Page The range of text between 45 and 85 characters is highlighted yellow. You can experiment with various column widths without re-counting your line length. Set Your Column Widths The column width is officially fine—it falls within the recommended 45–85 characters. But I like a slightly narrower column for skimming information. The reader’s eye tends to move down the left edge of the text, so if the column is narrower, the reader will catch more information. In the CSS, I set my column widths like so (new syntax in bold): #fall_films{ width:400px; background-color:#EECCFF; float:left; } #spring_films{ width:400px; background-color:#99FFCC; float:left; }
__label__pos
0.951587
What Is iCloud Activation Bypass Tool V1.4? How Can We Use It For iOS Devices? Why We Need iCloud Activation Bypass Tool? If you worried due to iCloud activation lock in your iPhone and thinking about how to resolve this issue, so this article has much for you. Mostly we faced this usual problem in our iOS devices that is very annoying. There are several reasons that your iPhone devices get locked, either your phone is stolen or you forget your passwords, both situations are worst for you. When your iPhone gets locked, it’s just like a brick until you remove the activation lock or bypass it.iCloud Activation Bypass Tool In this article, we are going to share a trick regarding the above-mentioned problem. We are sharing a complete solution for our readers, and exploring the most suitable way to bypass the iCloud activation lock. This tool will definitely help you to easily remove the activation lock and gives access to your phone independently. When your iPhone is getting locked and the activation screen blocked and denies access to your data, so you don’t need to go anywhere or try to search about it. iCloud Activation Bypass Tool V1.4 is the most suitable and reliable source to recover your phone as you have before. One more thing we want to clear here; if you are the first legal owner of your iPhone or having its authorized purchasing invoice, so you can bring it to any Apple official store where they can assist you and remove locking after verifying the ownership. Under this procedure, your device returns back to you with its original factory settings. After that, you can again set your ID and Password along with other safety credentials. If it’s not possible, so don’t worry about that, just go with another option. iCloud Activation Bypass Tool What IS iCloud Activation Bypass Tool V1.4? iCloud Activation Bypass Tool V1.4 helps users to quickly bypass the iCloud Activation lock and disabling the existing iCloud account to which they don’t have access. It is a tool that extremely efficient in easy removal of the iCloud account. It enables users to quickly access the device and sign in with another account. iCloud Activation Bypass Tool Version 1.4 is a very significant tool. But due to don’t having an interface, users will need few configurations to accomplish the bypass of their iOS devices. To find a proper link for downloading iCloud Activation Bypass Tool Version 1.4 is quite difficult because there is no official link for it. If you could do this, follow these simple steps given below to use the iCloud Activation Bypass Tool Version 1.4 to unlock any iOS device. iCloud Activation Bypass Tool V1.4 Step By Step Guide: 1. After downloading, install this efficient tool on your PC. 2. Now, turn on your iOS device and start the setup. 3. Don’t leave until you get to the Wi-Fi page. 4. Now, Tap on the “I” symbol for the next reach to the Wi-Fi network. 5. Then select “Configure Proxy”. 6. Now, enable the “Manual” option 7. Make setting as; • Server: 10.117.220.87 • Port: 1082. 1. Now, Save the details and connect to the network. 2. Now, connect your iOS device to the computer. 3. When you reached the Activation Lock screen, open the bypass tool file on the computer. 4. Now, select all the options on the menu of the screen. 5. Here you need to click on “Connect iCloud Erasing Server”. 6. After that, Click on “Upload Activation File”. 7. Now choose your required file. 8. Wait for the program to unlock the device. 9. Finally, tap on “Next” on your device to let it complete the setup process. Congrats Bypass is successfully accomplished. Watch Video Tutorial You May Also Search: Note: The above information is only for journal public interest, www.Getdroidpro.com or the author is not responsible for any hardware and software loss damage or illegal misusage of the above material. In case of any query or opinion kindly reply to us in the comments section. Your suggestions and ideas are most valuable for us. Leave a Comment Your email address will not be published. Required fields are marked *
__label__pos
0.818671
Title: System and method for content filtering using static source routes Kind Code: A1 Abstract: A packet containing a request for content is initially received at a content filtering router. The packet comprises a destination Internet Protocol (IP) address of a content server that stores the content and a bogus IP address. It is ascertained that the destination IP address is on a list of approved destination IP address. Alternatively, it is ascertained that the destination IP address is on a list of probably unapproved destination IP addresses and the packet is routed in accordance with an alternative IP address to a content filtering server. In this alternative, at the content filtering server the bogus IP address is used to determine a content filtering category and it is ascertained whether the destination IP address with the content filtering category should be filtered based upon a list of IP addresses and associated content filtering categories. Inventors: Donahue, David B. (Mountain View, CA, US) Application Number: 11/490685 Publication Date: 11/16/2006 Filing Date: 07/21/2006 Primary Class: Other Classes: 370/428 International Classes: H04L12/56; H04L12/54; H04L29/06 View Patent Images: Attorney, Agent or Firm: THE DIRECTV GROUP INC (PATENT DOCKET ADMINISTRATION RE/R11/A109, P O BOX 956, EL SEGUNDO, CA, 90245-0956, US) Claims: 1. 1-21. (canceled) 22. A method for filtering content, comprising: receiving at an IP device a packet from a client computer containing a request for content from a server, where said packet comprises a user identifier for a user; determining filter privileges for said user based upon said user identifier; denying the request for content based upon the filter privileges of said user and the IP address of said server; displaying a filtering page with a notification link; establishing that the link has been selected; notifying an administrator of the denied request; accepting a reply from the administrator; and providing content from the server to the user. Description: CROSS-REFERENCE TO RELATED APPLICATIONS The present application is a Continuation of U.S. patent application Ser. No. 10/295,476 filed Nov. 15, 2002, which is a Continuation-In-Part Application of U.S. patent application Ser. No. 10/040,773 filed on Dec. 28, 2001, which is incorporated by reference herein. BACKGROUND OF THE INVENTION 1. Field of the Invention The invention relates to a content filtering system and more particularly to a system and method for controlling user access to a computer network using a content filtering router that filters requests for content by routing them based on their final destination addresses. 2. Description of the Related Art The Internet is a loose network of networked computers spread throughout the world. Many of these networked computers serve content, such as Web pages, that are publicly accessible. This content is typically located through Internet addresses, such as <http://www.company.com/info>, which usually consist of the access protocol or scheme, such as HyperText Transport Protocol (http), the domain name (www.company.com), and optionally the path to a file or resource residing on that server (info). This Internet address is also known as a Uniform Resource Locator (URL). A Domain Name System (DNS) is then used to convert the domain name of a specific computer on the network into a corresponding unique Internet Protocol (IP) address, such as 204.171.64.2. Typically, users access content in one of two ways. The first way is for the user to dick on a Hyperlink. The Hyperlink links a displayed object, such as text or an icon, to a file addressed by a URL. The second way is for the user to enter a URL into a text or address box on an application layer such as a Graphical User Interface (GUI) of a file manager or an Internet browser, such as MICROSOFT'S INTERNET EXPLORER™, and click “Go” or press “Enter.” An application layer is like high-level set-up services for the application program or an interactive user. In the Open Systems Interconnection (OSI) communications model, the Application layer provides services for application program that ensure that communication is possible. The Application layer is NOT the application itself that is doing the communication. It is a service layer that provides these services: (1) Makes sure that the other party is identified and can be reached; (2) if appropriate, authenticates a sender, receiver, or both; (3) makes sure that necessary communication resources, such as a modem in the sender's computer, exist; (4) ensures agreement at both ends about error recovery procedures, data integrity, and privacy; and (5) determines protocol and data syntax rules at the application level. OSI is a standard description or “reference model” for how messages should be transmitted between any two points in a telecommunication network. Currently, OSI is Recommendation X.200 of the ITU-TS, which is incorporated herein by reference. OSI divides telecommunication into seven layers. The layers are in two groups. The upper four layers are used whenever a message passes from or to a user. The lower three layers (up to the network layer) are used when any message passes through the host computer. Messages intended for this computer pass to the upper layers. Messages destined for some other host are not passed up to the upper layers but are forwarded to another host. The seven layers are: Layer 7 (the application layer)—the layer at which communication partners are identified, quality of service is identified, user authentication and privacy are considered, and any constraints on data syntax are identified; Layer 6 (the presentation layer, sometimes called the syntax layer)—the layer, usually part of an operating system, that converts incoming and outgoing data from one presentation format to another; Layer 5 (the session layer)—sets up, coordinates, and terminates conversations, exchanges, and dialogs between the applications at each end. It deals with session and connection coordination; Layer 4 (the transport layer)—manages end-to-end control and error-checking. It ensures complete data transfer; Layer 3 (the network layer)—handles routing and forwarding; Layer 2 (the data-link layer)—provides synchronization for the physical level and does bit-stuffing for strings of 1's in excess of 5. It furnishes transmission protocol knowledge and management; and Layer 1 (the physical layer)—conveys the bit stream through the network at the electrical and mechanical level. It provides the hardware means of sending and receiving data on a carrier. As the Internet grows in size and sophistication, more and more content is becoming accessible to users. This content can be easily accessed by anyone who has a client computer and Internet access. However, some of this content may be unsuitable or inappropriate for all Internet users. For example, violent or adult content may be inappropriate for children. Therefore, in some situations it is desirable to limit and/or control user access to such content. For example, businesses may want to restrict their employees from viewing certain content on the Internet. Likewise, parents may wish to block their children's access to violent or adult content on the Internet. This restriction and/or control of user access to content on the Internet is otherwise known as content filtering. Content filtering allows a system administrator to block or limit content based on traffic type, file type, Web site, or some other category. For example, Web access might be permitted, but file transfers may not. There have been numerous attempts to provide content filtering using special browsers. These special browsers and associated filtering programs typically screen content by word content, site rating, or URL. The software provider of the special browsers typically keep a master list of objectionable content that must be periodically updated in the special browser or associated filtering program on the users client computer. However, many of these existing content filtering systems have a number of drawbacks. First, they need to be installed and configured on each and every client computer where controlled access is desired. Such installation and configuration can be time-consuming, inconvenient, and require a basic understanding of computer hardware and software. Additionally, from time to time, the user may be required to install bug-fixes, patches, or updates to configure or maintain the filtering software. This is because additional content must be continually added to a list of restricted sites. Typically, this list must be periodically downloaded and installed by a user to his/her client computer. Moreover, the software and continually growing list of restricted sites may consume valuable client computer memory and CPU resources (especially for searching lengthy databases of disallowed sites), which, in some cases, may limit or effect overall client computer performance. What is more, many children are typically more computer savvy than their parents and often find ways to circumvent the content filtering software without their parent's knowledge. Another approach to content filtering has been to place filtering software on a proxy server, so that entire networks connected to the proxy server can be filtered. The proxy server typically contains a list of restricted content that is periodically updated. However, each client computer connected to the proxy server must typically also include software that includes the filtering requirements appropriate for that particular client computer. Again this requires software to be installed and configured for each client computer. This is not only time consuming and inconvenient, but may consume much of a system administrators time. If each client computer is not appropriately configured, users may be blocked from content that they should otherwise have access to. Conversely, children and other restricted users may be able to get access to inappropriate content using a particular client computer or alternative software that has not been configured to restrict such content. In addition, conventional filtering can be bypassed. One method of bypassing conventional filtering is by a DNS/Hosts file bypass. Using this method, the IP address of an objectionable host is entered into the hosts file under another (unobjectionable) name. Another method of bypassing conventional filtering is by a local proxy bypass. Using this method, a user can run a proxy and type in all URLs as “http://UserLocation?target”, where “UserLocation” is the URL of the user's own computer and target is the destination site. Conventional content filtering has several other limitations. For example, content filtering is provided on a computer by computer basis. Also, if a filter list is broad and attempts to provide heightened restrictions, appropriate content may be invariably filtered out along with inappropriate or blocked content. On the other hand, if the filter list is too narrow, inappropriate content is more likely to be accessible. Therefore, a need exists for a content filtering system that is easily provisioned for one or more client computers with little or no user intervention, such as installation and configuration of software, or updating a list of filtered content, onto the user's client computer. Moreover, a need exists for a filtering system that cannot easily be circumvented, bypassed, tampered with, or disabled at the client computer level. SUMMARY OF THE INVENTION According to the invention there is provided a configurable content filtering system. This content filtering system provides users with the ability to rapidly filter content on a network. For example, a parent can limit the access that a child has to content by blocking access to content unsuitable to children. The parent can also configure the content filtering system to block different content for different children, based on the age of each child. The content filtering settings can also be made client-computer-specific. For example, if an adult is using one client computer and a child is using another client-computer, the content filtering can be turned off for the client computer being used by the adult and turned on for the client-computer being used by the child. The content filtering system is transparent to the user and no software has to be loaded on the user's client-computer. What is more, no special configuration of the user's web Browser is required. The content filtering process is performed on the network and not on the individual client-computer. Therefore an individual other than the control setting authority (for example, the parent) will not be able to bypass the content filtering controls previously set. According to an embodiment of the invention there is provided a method for filtering content using static source routes. The method uses a rough first pass at a content filtering router, followed with a more detailed check at a filtering server. This preferably speeds up the filtering service provided. A packet containing a request for content is initially received from a client computer at a bi-directional Internet Protocol (IP) communication device. The packet comprises a user identifier and a first destination IP address of a content server that stores content. The bi-directional IP device determines privileges for the user based upon the user identifier and adds the corresponding filter privileges as a bogus IP address that represents one or more filtering categories. The bi-directional IP device also adds a second destination IP address of a content filtering router to the header, as a source specified route, and routes the packet toward the content filtering router. The content filtering router receives the packet containing a request for content and determines whether the first destination IP address is on a list of IP addresses to be filtered. The content filtering router then routes the packet toward a filtering server for filtering if the first destination IP address is on the list of IP addresses. The content filtering server receives the packet and determines that the destination IP address is on a content filtered list that lists IP addresses and associated content filtering categories, by comparing the destination IP address to the list. The content filtering server then establishes whether a content filtering privilege of the content filtering privileges matches an associated content filtering category of an IP address on the content filtered list, and blocks the request for content if the content filtering privilege matches the associated content filtering category. Further according to the invention is a computer program product for use in conjunction with a computer system comprising a client computer, a bi-directional IP device, a content filtering router, and a content filtering server. The computer program product has a computer readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including a communication procedures module for receiving a packet containing a request for content, where the packet comprises a first destination IP address of a content server that stores the content, a second destination IP address of the content filtering router, and content filtering privileges. The computer program mechanism also includes a routing protocol module that utilizes a routing table to determine whether the request for content is to be filtered based on the first destination IP address and routing the request for content toward a filtering server for filtering if the first destination IP address is to be filtered. BRIEF DESCRIPTION OF THE DRAWINGS Additional objects and features of the invention will be more readily apparent from the following detailed description and appended claims when taken in conjunction with the drawings, in which: FIG. 1 is a schematic of the typical system architecture for connecting to the Internet; FIG. 2 is a schematic of a system architecture for content filtering according to an embodiment of the invention; FIG. 3 is a block diagram of the bidirectional IP communication device shown in FIG. 2; FIG. 4 is a block diagram of the filtering router shown in FIG. 2; FIG. 5 is a route diagram of a process for updating a filter list on the service provider shown in FIG. 2; FIG. 6 is a route diagram of a process for updating a filter list on a content filtering router shown in FIG. 2; FIGS. 7A-7B are flow charts of a method for content filtering according to an embodiment of the present invention; FIG. 8A is a route diagram of a request for content that is filtered by a single filtering router according to the method described in relation to FIGS. 7A and 7B; FIG. 8B is a route diagram of a request for content that is filtered by multiple filtering routers according to the method described in relation to FIGS. 7A and 7B; FIG. 9 is a route diagram of the return path of the content to a client computer according to the method described in FIGS. 7A and 7B; FIG. 10 is a schematic of a system architecture for content filtering according to another embodiment of the invention; FIG. 11 is a block diagram of the bi-directional IP communication device shown in FIG. 10; FIG. 12 is a block diagram of the filtering router shown in FIG. 10; FIG. 13 is a block diagram of the filtering server shown in FIG. 10; FIGS. 14A, 14B, and 14C are flow charts of a method for content filtering according to an embodiment of the present invention; and FIG. 15 is a flow chart of a method for providing access by an administrator for a user who is denied based on filtering privileges. Like reference numerals refer to corresponding parts throughout the several views of the drawings. DESCRIPTION OF THE PREFERRED EMBODIMENTS FIG. 1 is a schematic of a typical system architecture 100 for connecting to the Internet. Typically one or more client computers 102(1)-(N) connect to a modem 104, such as a dial-up modem, which in turn connects to the Internet 110 via one or more routers or switches 108. A router is a device that forwards data packets from one computing device to another. Based on routing tables and routing protocols, routers read the network address in each transmitted frame or packet and make a decision on where to send it based on the most expedient route (traffic load, line costs, speed, bad lines, etc.). Routers work at layer 3 in the protocol stack, i.e., the network layer, whereas bridges and switches work at the layer 2, i.e., the data link (Media Access Control (MAC)) layer. Requests for content located on the Internet 110 are transmitted from the client computers 102(1)-(N) to the modem 104 in a frame or packet. The modem 104 then forwards the packet to a first router or switch 108 which in turn forwards the packet to the next router or switch 108, and so on until the packet reaches its intended destination, namely content server 106, coupled to the Internet 110. The content server 106 then serves the requested content back to the client computer 102(1)-(N) that made the request via the most expedient route, i.e., via the same or other routers or switches 108. Each packet request contains an Internet Protocol (IP) header having at least one source IP address, at least one destination IP address, and data, such as a request for content. The source IP address is typically the IP address of the client computer 102(1)-(N) that made the request, while the destination IP address is typically the IP address of the content server 106. The system architecture of a content filtering system 200 according to an embodiment of the invention is shown in FIG. 2. The content filtering system 200 prevents a user from accessing unauthorized content located on a network, such as the Internet 216. Unauthorized content may include undesirable, inappropriate, or extreme content, such as violence, hate, gambling or adult content. One or more client computers 202(1)-(N) connect to a bidirectional IP communication device (IP device) 204. The client computers 202(1)-(N) and IP device 204 are coupled to one another by any suitable means, such as Ethernet, cable, phone line, optical fiber, wireless, or the like. The client computers 202(1)-(N) include any network client device, such as desktop computers, laptop computers, handheld computers, cell phones, or any other network client device that acts to initiate IP connections. Each of the client computers 202(1)-(N) preferably includes network access software, such as an Internet Browser, like MICROSOFT'S INTERNET EXPLORER or NETSCAPE'S NAVIGATOR. Unlike the prior art, such network access software does not need to be specially configured for the content filtering system 200. In fact, because the filter interaction runs on network-based equipment, like the IP device 204, no filtering software needs to be present on the client computers 202(1)-(N) whatsoever. This is especially useful when the client is not capable of loading software. In addition, each client computer 202(1)-(N) is uniquely identifiable by a unique source IP address. The IP device 204 is any communication device that transmits and receives data over IP, preferably a broadband modem or gateway, such as a Digital Subscriber Line (DSL) or cable modem/gateway. The IP device 204 uses a connectivity topology, such as is typically found in, for example, a central office 206. The central office 206 may be a local telephone company switching center (for DSL), a cable company's central office (for cable), a Internet Service Provider's (ISPs) Point of Presence (POP) (for dial-up), or the like. Other methods include satellite cable, wireless networking, or other connectivity topologies. The central office 206 is coupled to the Internet 216 via one or more routers or switches 208 and one or more filtering routers 210, 212, and 214. The routers or switches 208 are the same as the routers or switches 108 described in relation to FIG. 1. The filtering routers 210, 212, and 214 are routers that are used for content filtering as described in further detail below. Each filtering router 210, 212, or 214 is used to filter one category of content, where a category is a type or level of content, such as violent content, adult content, religious content, or the like. For example, filtering router 210 is used to filter possible violent content while filtering router 212 is used to filter possible adult content. In an alternative embodiment, one or more of the filtering routers are combined on a Virtual Local Area Network (VLAN). Content servers 218, a service provider 220, and a list provider 222 are also coupled to the Internet 216. The content servers 218 store and serve content to client computers 202(1)-(N), while the service provider 220 provides the content filtering service described below. The list provider 222 generates, stores, and provides a list of questionable content that may be unsuitable or inappropriate and, therefore, subject to the filtering system. Such a list of content preferably contains numerous URLs or IP addresses of the location of such questionable content. The list also preferably contains each questionable content's associated category, such as religion, entertainment, and adult content. This allows the content filtering system to selectively customize the filtering system for each individual user. A suitable list provider 222 is WEBSENSE of California, U.S.A. WEBSENSE's list of filtered content currently contains 2.6 million Web sites, covering 500 million Web pages. FIG. 3 is a block diagram of the IP device 204 shown in FIG. 2. The IP device 204 preferably comprises at least one data processor or central processing unit (CPU) 302, a memory 310, communications circuitry 304, communication ports 306(1)-(N), and at least one bus 308 that interconnects these components. The communications circuitry 304 and communication ports 306(1)-(N) preferably include one or more Network Interface Cards (NICs) configured to communicate over Ethernet with the client computers 202(1)-(N) (FIG. 2). Memory 310 preferably includes an operating system 312, such as VXWORKS or EMBEDDED LINUX, having instructions for processing, accessing, storing, or searching data, etc. Memory 312 also preferably includes communication procedures 314; filtering procedures 316; authentication procedures 318; a Network Address Translation (NAT)/Firewall service 320; a HTTP (Web) Client and Server 322; HTTP (Web) Pages 324; a filtering database 326; a filtering levels database 330; and a cache 336 for temporarily storing data. The communication procedures 314 are used for communicating with both the client computers 202(1)-(N) (FIG. 2), and the Internet 216 (FIG. 2). The filtering procedures 316 are used for filtering content as explained in further detail below. The authentication procedures 318 are used to authenticate a user for content filtering services. The NAT/Firewall service 320 converts a local IP address of each client computer 202(1)-(N) (FIG. 2) into a globally routable IP address for the Internet and vice versa. It also serves as a firewall by keeping individual IP addresses of the client computers hidden from the outside world. The HTTP (Web) Client and Server 322 requests and serves the HTTP (Web) Pages 324. The filtering database 326 contains a table 328(1)-(N) of: Source IP addresses for each client computer 202(1)-(N) connected to the IP device 204; an indication of whether the filtering service is active for each Source IP address; and an indication of the filtering level for each active Source IP address. The filtering level is preferably a number that indicates the level of filtering that requests from a particular client computer are subject to. For example, all requests from client computer 202(1) may be subject to filtering level 1, which means that requests for content originating from client computer 202(1) will only be subject to filtering for say violent content. The filtering levels database 330 contains a table 332(1)-(N) listing various filtering levels and the IP address of the filtering router that is configured to filter all requests for that filtering level. For ease of explanation, the IP address of each filtering router 210, 212, or 214 (FIG. 2) will hereafter be referred to as a second destination IP address, as compared to a first destination IP address of a content server to where the request for content is sent. For example, if it is determined that requests from a particular client computer are subject to filtering level 3, then such requests are routed first to a filtering router for level one, then to a filtering router for level two, and finally to a filtering router for level three. This filtering system is explained in further detail below. The IP device 204 also contains a cache 336 for temporarily storing data. FIG. 4 is a block diagram of the filtering router 210, 212, or 214 shown in FIG. 2. The filtering routers 210, 212, or 214 preferably comprise at least one data processor or central processing unit (CPU) 402, a memory 410, communications circuitry 404, input ports 406(1)-(N), output ports 430(1)-(N), and at least one bus 408 that interconnects these components. The communications circuitry 404, input ports 406(1)-(N), and output ports 430(1)-(N) are used to communicate with the client computers 202(1)-(N) (FIG. 2), routes/switches 208 (FIG. 2), and the Internet 216 (FIG. 2). Memory 410 preferably includes an operating system 412, such as VXWORKS or EMBEDDED LINUX, having instructions for processing, accessing, storing, or searching data, etc. Memory 410 also preferably includes communication procedures 414; a routing Protocol 416, such as the Border Gateway Protocol (BGP); and a routing table 418, such as a BGP routing table. BGP is a routing protocol that is used to span autonomous systems on the Internet. BGP is used by the filtering routers 210, 212, and/or 214 to determine the appropriate path to forward data toward. BGP is a robust, sophisticated and scalable protocol that was developed by the Internet Engineering Task Force (IETF). For further information on BGP please see Request for Comments (RFCs) 1105, 1163, 1164, 1265, 1266, 1267, 1268, 1269, 1397, and 1403 all of which are incorporated herein by reference. The routing table 418 comprises a list of IP addresses and their associated output port numbers 420(1)-(5) and 422. The list of IP addresses partially contains the IP addresses 420(1)-(4) of content that is to be filtered by a particular filtering router 210, 212, and/or 214. For example, filtering router 210 contains a list of all IP addresses 420(1)-(4) for a specific category, such as violent content. Each IP address 420(1)-(4) of content that is to be filtered is routed to a particular output port, such as output port 1 430(1). This effectively routes a request for filtered content to someplace other that the destination IP address (first destination IP address) of the content server 218 (FIG. 2) that stores the requested content. Requests directed to all other IP addresses 422, i.e., the IP addresses of non-filtered content, are routed to another port, such as port 2, and onward toward the destination IP address (first destination. IP address). A more detailed explanation of this process is provided below in relation to FIGS. 7A and 7B. FIG. 5 is a route diagram of a process for updating a filter list on the service provider 220 shown in FIG. 2. Periodically, or whenever the filter list is updated, the list provider 222 provides for the transmission 710 of (FIG. 7A) the filter list to the service provider 220, preferably via the Internet 216. The service provider 220 then saves 708 (FIG. 7A) the list. Once the updated filter list has been received by the service provider from the list provider, the service provider 220 breaks down the list into individual categories, such as violence, pornography, etc., and associates a particular output port 430 (FIG. 4) of a particular filtering router 210, 212, or 214 with each IP address to be filtered. The service provider then sends the list having individual categories and output ports to the content filtering router, which accepts 706 (FIG. 7A) the list and stores 712 (FIG. 7A) the list in its routing table. FIG. 6 is a route diagram of a process for updating a filter list on a content filtering router shown in FIG. 2. Each individual category has its own filter list, which is transmitted 708 (FIG. 7A) to the particular filtering router 210, 212, or 214 configured to filter the specific category. These individual category lists are preferably transmitted via the Internet 216 and various routers and/or switches 208. The filtering router 210, 212, or 214 then stores 712 (FIG. 7A) the received filter list in its routing table 418 (FIG. 4), preferably overwriting any previous list. FIGS. 7A-7B are flow charts of a method for content filtering according to an embodiment of the present invention. Using any method for requesting content from a content server 218 (FIG. 2), a user of a client computer 202(1)-(N) (FIG. 2) sends 702 a packet containing a request for content to the IP device 204 (FIG. 2). The packet is received 704 by the IP device, which then determines 714 if the filtering system is active for the particular client computer that made the request. This is determined by looking up the IP address 328(1)-(N) (FIG. 3) of the client computer that made the request, in the filtering database 326 (FIG. 3) on the IP device. If it is determined that the filtering system is not active for the client computer that made the request (714—No), then the packet is sent 716 to the content server that stores the requested content. The content server receives 718 the packet and locates and sends 720 the content back to the IP device. The IP device receives and sends 722 the content onto the client computer that made the request. The client computer receives 726 and displays 728 the content. If it is determined that the filtering system is active for the client computer that made the request (714—Yes), then the IP device determines 724 the content filtering level for the particular client computer that made the request. This is done by looking up the corresponding filtering level 328(1)-(N) (FIG. 3) for the IP address of the client computer that made the request. Alternatively, the IP device might require a user identifier and password from the user to apply a filtering level on a user-by-user basis rather than on client-computer-by-client-computer basis. The user identifier is preferably a string of characters that represent a user on the system. Depending on the filtering level to be applied, the IP device then adds static source routing details to the packet. Specifically, the IP device adds 730 one or more filtering router IP address/es (second destination IP address/es) to the portion of the IP header of the packet reserved for “Source Route Options.” Each filtering router then acts as an intermediate hop in a source route, forwarding the source-routed packet to the next specified hop, such as to another filtering router or towards the content server. This is otherwise known as static source routing, which is performed using pre-configured routing tables which remain in effect indefinitely. Dynamic routing, on the other hand, uses special routing information protocols to automatically update the routing table with routes known by peer routers. Further information of static source routing and its loose and strict variations can be found in Request for comments 1122 and 1716, both of which are hereby incorporated by reference. Each one or more filtering router IP address/es (second destination IP address/es) is the IP address for a different filtering router 210, 212, or 214. The packet might be sent to one or more filtering routers depending on the filtering level for a particular client computer. Each filtering router filters for a different category of filtered content. For example, if a user has subscribed to a filtering service to filter pornographic and violent content, but not religious content, each request for content will be sent to both a filtering router for pornographic content and a filtering router for violent content. Once the filtering router IP address/es (second destination IP address/es) have been added to the packet, the IP device sends 732 the packet towards the content filtering router specified in the IP header of the packet, i.e., the second destination IP address. The packet is received 734 by the content filtering router 210, 212, or 214 (FIG. 2), which then determines 736 whether the content server IP address (first destination IP address) is on the list 420 (1)-(4) (FIG. 4) of IP addresses to be filtered in the routing table 418 (FIG. 4). If the content server's IP address (first destination IP address) is not on the list (736—No), then the filtering router's IP address (second destination IP address) is preferably removed 742 from the IP header of the packet. This is done to avoid the content from having to return to the client computer via the filtering router, thereby allowing the content to find the most efficient route back to the client computer using dynamic routing. The packet is then routed 744 to the next destination IP address in the IP header. If the next destination IP address in the IP header is the IP address of another filtering router, i.e., where the request for content is to be filtered for restricted content in a different category, such as violent content, then the packet is routed 744 to the next filtering router (as indicated by arrow 740). The process that occurs at each subsequent filtering router is similar to that described above and repeats until it is routed to a content server. If the next destination IP address is the IP address of the content server (first destination IP address), i.e., the content server's IP address is not on the routing table 418 (FIG. 4) and there are no further IP addresses for other filtering routers in the IP header, then the packet is routed 744 to the content server 218 (FIG. 2). The content server then receives 746 the packet and serves or sends 748 the content toward the IP device using standard dynamic routing. The content is then dynamically routed back to the IP device. The content is received and sent 770 by the IP device to the IP address of the client computer that made the request. The client computer subsequently receives 772 and displays 774 the content. If, however, the content server IP address (first destination IP address) is on the list (736—Yes), then the packet requesting the filtered content is routed 738 someplace other than to the content server 218 (FIG. 2) that stores and serves the requested content. For example, if the requested content contains pornographic material that is to be filtered by a particular filtering router, then the IP address of the content server storing and serving such content will be on the list of IP addresses 420(1)-(4) (FIG. 4) on the routing table 418 (FIG. 4) of that filtering router. In one embodiment, the packet is simply routed to an output port 430 (FIG. 4) that is not coupled to anything, and the packet is simply discarded. In this case, the user will simply be informed that the content cannot be found. Alternatively, the packet can be sent to the service provider 220, which in turn can send a message to the client computer that made the request, informing the user that the requested content has been blocked or filtered. In yet another embodiment the packet can be sent to the service provider, which in turn sends an authentication message to the user. The user must then supply a username and password to turn off the filtering system or allow a lesser filtering level, i.e., allow the user to view more content. FIG. 8A is a route diagram of a request for content that is filtered by a single filtering router 210, according to the method described in relation to FIGS. 7A and 7B. In this scenario, the filtering service is configured to only filter a single category of content, such as violent content. The filtering router that filters this particular category is filtering router 210. The packet containing the request for content travels from the client computer 202(1) to the IP device 204. The IP device adds a second destination IP address of the filtering router 210 to the IP header of the packet and transmits the packet to the central office 206. The central office 206 forwards the packet towards the filtering router 210 having the second destination IP address. The filtering router then checks whether the first destination IP address of the content server 218 to where the request was directed is on its routing table. If the first destination IP address is on the routing table, the filtering router routes the packet someplace other (802) than the content server. If the first destination IP address is not on the routing table, the filtering router routes the packet towards the content server 218. On its way to the content server 218 the packet may pass through other routers or switches 208. FIG. 8B is a route diagram of a request for content that is filtered by multiple filtering routers 210, 212, and 214 according to the method described in relation to FIGS. 7A and 7B. In this scenario, the filtering service is configured to filter three categories of content, such as violent, adult, and religious content. Here, the IP device adds three second destination IP addresses of the filtering routers 210, 212, and 214 to the IP header of the packet. Once the first filtering router 210 ascertains that the first destination IP address is not on the routing table, the first filtering router 210 routes the packet towards the second filtering router 212, and so on. If it is ascertained that the first destination IP address is on one of the routing tables of the filtering routers, then that filtering router can either discard (804) the packet or route the packet towards the service provider 220, as explained above in relation to FIGS. 7A and 7B. FIG. 9 is a route diagram of the return path of the content to the client computer 202(1) according to the method described in FIGS. 7A and 7B. If the first destination IP address of the content server 218 is not on a routing table of a filtering router through which the packet was routed, then the packet is sent to the content server 218. Once the content server receives the packet containing the request for content, it locates the content and transmits it back toward the source IP address of the client computer that made the request. The content is routed dynamically back to the client computer along the most efficient path available. In this way, routers can be used to filter content stored on a network by using only network/IP routes instead of application port/URLs. What is more, filtering software need not be stored or updated on any of the client computers. Periodically, if necessary, a revised list of IP addresses for the filtering routers can be sent to and stored in the filtering levels database 330 (FIG. 3) on the IP device. An updated list of the IP addresses of each client computer that has subscribed to the service, and its filtering level, can also periodically be sent to and stored in the filtering database of the IP device. This allows for a maintenance free system for the user that can be remotely updated from the service provider 220 (FIG. 2). An advantage of the content filtering process is that because the content filtering process is managed through the IP device, the filtering requirements and criteria only need to be set up once, and all client computers are automatically subject to the filtering service. In this way, individual client computers do not need to be individually configured. In addition, the filtering process does not require restricting users to only certain devices in order for the filtering process to be effective, as user names and passwords can be used to update the list of IP addresses and associated filtering levels in the IP device. Additionally, the filtering process requires little user interaction besides signing up for the service. Updating the content filter database on the content filtering server is preferably performed automatically. Now, another embodiment of the present invention is described primarily with reference to FIGS. 10-14. In this embodiment, if a customer has signed up for filtering service, a packet is routed from a client computer to a filtering router 1008 (FIGS. 10 and 12). The filtering router 1008 is configured to allow requests for content located on content servers 218 (FIG. 10) having particular first destination IP addresses to bypass a filtering server 1010 (FIGS. 10 and 13). Conversely, requests for content that is definitely to be filtered are sent to the filtering server 1010 for a final determination on whether the content is restricted. This decreases the number of packets that are routed to the filtering server 1010, thereby improving or optimizing system performance. FIG. 10 is a schematic of a system architecture 1000 for content filtering according to another embodiment of the invention. The system 1000 prevents a user's access to unauthorized content located on a network, such as the Internet. Unauthorized content may include undesirable, inappropriate, or extreme content, such as violence, hate, gambling or adult content. The system 1000 comprises components similar to those in FIG. 2; i.e., client computers 202(1)-(N), routers/switches 208, the Internet or some other network 216, and content servers 218. The system also preferably comprises a bi-directional IP communication device (IP device) 1002, a central office 1004, a filtering router 1008, a filtering server 1010, a service provider 1012, and a list provider 1014. The IP device 1002 is coupled between the client computers 202(1)-(N) and the central office 1004. The filtering router 1008 is coupled between the filtering server 1010 and the central office 1004. The central office 1004, filtering router 1008, and filtering server 1010 are coupled to the Internet 216 via routers/switches 208. Content servers 218, service provider 1012, and list provider 1014 are each coupled to the Internet 216. Each of the client computers 202(1)-(N), described previously with reference to FIG. 2, is preferably uniquely identifiable by an Internet Protocol (IP) address. An IP address is generally a 32-bit numeric address written as four numbers separated by periods, such as 64.12.15.3, also referred to a quad-octet. This sample address contains two pieces of information: a network identifier and a host identifier, where a host is any device that is attached to the network and uses the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol. The client computers 202(1)-(N) are assigned IP addresses either from a subnet of globally routable IP addresses, or from a subnet of private globally non-routable IP addresses defined by the RFC 1597 RFC 1918, both of which are incorporated herein by reference. If a subnet of private non-routable IP addresses is used for the client computers then the IP device 1002 provides Network Address Translation (NAT) services to translate the globally non-routable IP addresses to a globally routable IP address that can be routed globally, i.e., to the Internet. The client computers 202(1)-(N) may be any network client device that acts to initiate IP connections. The IP device 1002 is any device capable of providing communication between the client computers 202(1)-(N) and the Internet 216 and may include a dial-up modem, cable modem, DSL gateway, satellite modem, or the like. The IP device 1002 can act as a router, but preferably has additional capabilities. A central office 1004 preferably includes a network provider, such as SBC or BELL SOUTH. The network provider connects to the Internet 216 through, for example, a Broadband Service Node (BSN) and at least one router/switch 208. The BSN allows service providers to aggregate tens of thousands of subscribers onto one platform and apply highly customized IP services to these subscribers. A suitable BSN is NORTEL NETWORK's SHASTA 5000. The router/switch 208 is preferably a layer 4 switch, such as a SERVERIRON Web Switch made by FOUNDRY NETWORKS, an ALPINE series switch made by EXTREME NETWORKS, both of California U.S.A., or similar switches and routers made by CISCO or JUNIPER. The filtering router 1008 and filtering server 1010 provide content filtering and blocking functionality to users of the client computers 202 (1)-(N) as described below in relation to FIG. 13. The filtering server 1010 preferably comprises a CACHEFLOW Internet caching appliance and/or a number of INKTOMI Traffic servers that perform network caching server functions and work with content filtering databases provided by WEBSENSE or SURFCONTROL (both of California U.S.A.). A content list provider 1014, such as WEBSENSE or SURFCONTROL, generates and provides a list of restricted content and its associated content category, such as hate, violence, religion, and adult categories. A service provider 1012 provides the systems, methods, and protocols for provisioning and administering the content filtering service for a user. This is done by communicating data, such as configuration details, to and from the IP device 1002, filtering router 1008, and/or filtering server 1010. FIG. 11 is a block diagram of the IP device 1002 shown in FIG. 10. The IP device 1002 preferably includes ports 1102(1)-(N), a CPU 1104, communications circuitry 1106, a memory 1108, and a bus 1142 connecting the aforementioned components. The ports 1102(1)-(N), CPU 1104, communications circuitry 1106, memory 1108, and bus 1142 are similar to ports 306(1)-(N) (FIG. 3), CPU 302 (FIG. 3), communications circuitry 304 (FIG. 3), memory 310 (FIG. 3), and bus 308 (FIG. 3), respectively. The memory 1108 preferably includes an operating system 1110, communications procedures 1114, filtering procedures 1116, authentication procedures 1118, a network access translation (NAT)/firewall service 1120, HTTP (Web) client and server 1122, HTTP (Web) pages 1124, a filtering database 1128, a user database 1132, and configuration procedures 1138. The operating system 1110 preferably has instructions for communicating, processing, accessing, storing, or searching data, etc. The operating system 1110 is, for example, VXWORKS or EMBEDDED LINUX. The communication procedures 1114 are used for communicating with both the client computers 202(1)-(N) (FIG. 10), and the Internet 216 (FIG. 10). The filtering procedures 1116 are used for filtering content from the Internet 216 (FIG. 10) as described below in relation to FIG. 14. The authentication procedures 1118 are used to authenticate a user for content filtering services. The NAT/Firewall service 1120 converts a local IP address of each client computer 202(1)-(N) (FIG. 10) into a globally routable IP address for the Internet and vice versa, if necessary. It also serves as a firewall by keeping individual IP addresses of the client computers hidden from the outside world. The HTTP (Web) Client and Server 1122 requests and serves the HTTP (Web) Pages 1124. The filtering database 1128 includes a plurality of entries 1130(1)-(N). Each entry may have multiple fields associated with it, such as an IP address of each client computer (e.g., IP 1), an access policy (e.g., access policy 1), and a filtering privilege (e.g., filtering privilege 1). The filtering database 1128 preferably contains an entry for an IP address associated with each client computer 202(1)-(N) that is subscribed to the filtering service. Each access policy preferably includes user time restriction settings. The user time restriction settings are typically done at the gateway level. For example, a user may have an access policy that allows Internet access only from 3:30 p.m. to 8:30 p.m. The access policy for each user also preferably contains other information such as the type of service (e.g., premium or standard), expiry timeout (e.g., the access policy might expire after 1 hour, requiring the user to resupply his or her username and password), etc. The IP device 1002 can restrict access for users by, for example, comparing the time of allowed access (such as from 3:30 p.m. to 8:30 p.m.) to the time that a request is made. The filter privilege indicates a user's level of access to content on the network. Each filter privilege is associated with a filter category selected from categories such as adult content, hate, violence, gambling, etc. The user database 1132 includes a plurality of entries 1134(1)-(N). Each entry may have multiple fields associated with it, such as a user identifier (e.g., user 1), a password (e.g., password 1), an access policy (e.g., policy 1), and a filtering privilege (e.g., privilege 1). The user database 1132 preferably contains an entry for each user subscribed to the content filtering service. The filtering database 1128 is dynamic and entries 1130(1)-(N) are updated after a user has successfully authenticated against the user database 1132 using the authentication procedures 1118. For example, when a user logs in from a client computer having a particular IP address, he or she is asked for a username and password. The username and password are associated with an access policy and filtering privilege 1130. Once authenticated, the access policy and filtering privilege 1130 associated with the particular IP address of the authenticated user are updated in the filtering database 1128. The configuration procedures 1138 are used for supporting the protocol to and from the service provider 1012 (FIG. 10) for remote configuration and administration of the content filtering service. FIG. 12 is a block diagram of the filtering router 1008 shown in FIG. 10. The filtering router 1008 preferably comprises at least one data processor or central processing unit (CPU) 1204, a memory 1212, communications circuitry 1206, input ports 1202(1)-(N), output ports 1208(1)-(N), and at least one bus 1210 that interconnects the aforementioned components. The communications circuitry 1206, input ports 1202(1)-(N), and output ports 1208(1)-(N) are used to communicate with the client computers 202(1)-(N) (FIG. 10), routes/switches 208 (FIG. 10), and the Internet 216 (FIG. 10). Memory 1212 preferably includes an operating system 1214, communications procedures 1216, and a routing protocol 1218, similar to operating system 412 (FIG. 4), communication procedures 414 (FIG. 4), and routing protocol 416 (FIG. 4), respectively. Memory 1212 also preferably includes a routing table 1220, such as a BGP routing table. The routing table 1220 is used by the filtering router 1008 to determine the appropriate path for routing data. Traffic is preferably routed by the router into two pools, allowed (positive) and possibly not allowed (negative). The routing table 1220 comprises a list 1222(1)-(N), 1224(1)-(N), and 1226 of IP addresses and their associated output port numbers. Note that an IP address is preferably stored in the routing table as an amalgamation of IP addresses, or an IP address block, as explained below. Amalgamated address blocks are used to optimize filtering by reducing the number of entries 1222-1226 in routing table 1220. For example, if a number of IP addresses to be filtered have similar IP addresses, the entire block or subnet including these IP addresses is amalgamated into an IP address block. The IP address block preferably comprises one or more IP addresses, networks, or subnetworks, but may contain no addresses if empty. An address block may be a positive address block, a negative address block, or an other address block, as explained below. Positive address blocks comprise IP addresses of content servers 218 (FIG. 10) storing content that has been pre-approved (i.e., it has been determined that the content on the content server 218, should not be filtered). For example, when the first destination IP address of a content server 218 matches the IP address in a positive address block and the positive filter category is the only filter specified, no further filtering is required and the packet may be routed towards the content server 218, as per usual. Where the positive address block is a subnet, every IP address that falls within the subnet has preferably been pre-approved. Accordingly, since some packets are pre-approved, the number of packets received by the filtering server is reduced. Negative address blocks, on the other hand, comprise one or more IP addresses of content servers containing content that probably falls within a filter category. Accordingly, when the first destination IP address of a packet requesting content from a content server 218 has an IP address in a negative address block, further filtering is typically required. It should be noted that when an IP address of a content server 218 is in a negative address block, the content at the content server 218 may actually be suitable for viewing. In other words, where the negative address block comprises a subnet, some IP addresses on the subnet may contain suitable content. However, the filtering router does not attempt to determine whether content at the associated content servers 218 is appropriate and, instead, routes a request for filtered content to someplace other that the first destination IP address of the content server 218 that stores the content, thereby providing for further determination of whether the IP address of the content server should be filtered. It may seem counterintuitive to amalgamate IP addresses into subnets, when it is known that some of the IP addresses are of content servers containing content that will not ultimately be filtered. However, amalgamating IP addresses into subnets even when some of the IP addresses in the subnet do not fall within a filter category results in larger blocks of grouped IP addresses. This results in a smaller routing table and greatly improved routing performance. Accordingly, in some cases it is desirable to knowingly include IP addresses that are not to be filtered in a negative IP address block. For example, if a subnet has a high proportion of content that falls within a filter category, a negative IP block may include the entire subnet. A more detailed explanation of this process is provided below in relation to FIGS. 14A through 14C. In one embodiment, other IP addresses, or addresses that appear on neither the positive nor negative lists, are routed to a filtering server in a manner similar to addresses on the negative list. In an alternative embodiment, other IP addresses are routed to a different server (not shown) that causes the first IP address to be categorized as restricted or non-restricted content. This could be accomplished, for example, by checking a database or notifying an administrator to update the list. If other IP addresses are blocked, the client computer may receive a block message that is different from the message that could be received for blocking at the filtering server, such as an “unknown content” restriction. In yet another embodiment, the other IP addresses are routed in a manner similar to addresses on the positive list. FIG. 13 is a block diagram of the content filtering server 1010 shown in FIG. 10. The content filtering server 1010 preferably includes at least one data processor or central processing unit (CPU) 1304, a memory 1312, communications circuitry 1306, at least one communication port 1308, user interface devices 1302, and at least one bus 1310 that interconnects the aforementioned components. The communications circuitry 1306 and communication port 1308 allow for communication between the filtering server 1008 (FIG. 10), content filtering server 1010, and the remainder of the network. Memory 1312 preferably includes an operating system 1314, such as VXWORKS, LINUX, SUN SOLARIS, or MICROSOFT WINDOWS having instructions for communicating, processing, accessing, storing, or searching data, etc. Memory 1312 also preferably includes communication procedures 1316; authentication procedures 1318; configuration procedures 1320; a NAT/firewall service 1322; a HTTP (Web) client and server 1324; HTTP (Web) pages 1326; filtering procedures 1328; and an exclusionary content filter database 1330. The communication procedures 1316, including filter routing specifiers, are used for communicating with the Internet 216 (FIG. 10) and the IP device 1002 (FIG. 10). The authentication procedures 1318 authenticate administrators of the server. The NAT/Firewall service 1322 is similar to the NAT/Firewall service 1120. The HTTP (Web) client and server 1324 request and serve the HTTP (Web) pages 1326. The filtering procedures 1328 are used to control access to content on the Internet 216 (FIG. 10). The exclusionary content filter database 1330 comprises a list 1332(1)-(N) of URLs or IP addresses and associated filtering categories for each URL/IP entry. For example, the URL <http://www.adultcontent.com> may be associated with filtering category 1, which is, e.g., adult content. In one embodiment, the associated filtering categories are each 32-bit bit fields. A subset of the bits of the bit field represents a filtering category. Accordingly, in this embodiment, the maximum number of filtering categories is 32 (one category per bit of the bit field). The filtering procedures 1328 compare the URL of the user requested content against a URL (or IP address) of a content entry 1332 in the exclusionary content filter database 1330. The filtering procedures 1328 may also compare the associated filtering categories with the filtering privileges of the user requesting content. In an embodiment, the filtering server provides advanced filter options, such as by-user restrictions and numbers of failures (the user is blocked after a given number of failures). FIGS. 14A-14C are flow charts of a method for content filtering according to an embodiment of the present invention. In FIGS. 14A-14C, the client computer is one of the client computers 202(1)-(N) (FIG. 10); the IP device is the IP device 1002 (FIG. 10); the filtering router is the filtering router 1008 (FIG. 10); the filtering server is the filtering server 1010 (FIG. 10); the content server is one of the content servers 218 (FIG. 10); the service provider is the service provider 1012 (FIG. 10); and the list provider is the list provider 1014 (FIG. 10). Initially, the list provider sends 1418 an updated list of IP addresses to be filtered and their associated filter categories to the filtering server, which accepts 1420 the list to the filtering server. The list provider typically sends a text-based list of addresses. The filtering server prepares 1422 the addresses for the filtering router by converting the list to the form of route locations or routing specifications. Preferably, the preparation includes amalgamating addresses into IP address blocks. The filtering server preferably stores 1424 the updated list. The filtering router accepts 1426 the prepared addresses and configures 1428 its routing table accordingly. Note that the filtering router could be configured at any time by an administrator or updated with the list sent from the filtering server. In an alternative embodiment, the filtering server provides a list to the filtering router that has already been amalgamated at the filtering server into IP address blocks that are stored in the routing table 1220 (FIG. 12). In an alternative embodiment, a administrator may directly configure the filtering router routing table 1220 (FIG. 12) to include amalgamated IP address blocks. When a user wishes to use the system, the user preferably logs on to the system by entering a username and password (not shown) via a HTTP browser web page. This optional logon procedure allows the IP device to update the access policy and filtering privilege 1130 (FIG. 11), for the IP address associated with the user. Thus, the IP device preferably applies filtering categories on a user-by-user basis rather than on client-computer-by-client-computer basis. In any case, using any method for requesting content from the content server, a user of the client computer sends 1402 a packet containing a request for content to the IP device. The packet is received 1404 by the IP device, which then determines 1406, using the filtering procedures 1116 (FIG. 11), if the filtering system is active for the particular client computer that made the request or for the user that previously logged in. The filtering procedures 1116 (FIG. 11) look up in entries 1130(1)-(N) (FIG. 11) the IP address of the client computer that made the request, to make this determination. If it is determined that the filtering system is not active for the client computer that made the request (1406—No), then the packet is sent 1408 to the content server that stores the requested content. The content server receives 1410 the packet and locates and sends 1412 the content back to the IP device. The IP device receives and sends 1414 the content to the client computer that made the request. The client computer receives and displays 1416 the content. If it is determined that the filtering system is active for the client computer that made the request (1406—Yes), then the IP device determines 1431 the content filtering privileges associated with the particular client computer that made the request. This is done by looking up in the filtering database 1128 (FIG. 11) the corresponding filtering privilege for the IP address of the client computer that made the request. If the filtering service is active for the particular client computer, the IP device adds 1432 an IP address of the filtering router (second destination IP address) and a bogus IP address to the IP header of the packet reserved for “Source Route Options.” This allows static routing, which is performed using pre-configured routing tables which remain in effect indefinitely. However, the bogus IP address, even though it is stored in the header as an “IP address,” is not used for routing. Rather, the bogus IP address is used to identify the filtering privileges associated with the client computer. Adding a bogus IP address to the header improves the speed with which the filter categories may be indicated since IP addresses (even bogus ones) can be processed at the network layer. Since an IP address is 32 bits long, a bogus IP address can contain up to 32 filtering categories. A subset of the bits that make up the bogus IP address represent various filtering categories. In one embodiment, if a bit of the bogus IP address has a value of ‘1’, then the filtering category associated with that bit location is applicable. If, on the other hand, the bit at that bit location has a value of ‘0’, then that filtering category is not applicable. For example, a bogus IP address could have the value 132.0.0.0. Each of the four numbers (132, 0, 0, and 0) may be represented by 8 bits. The number 132 is represented by the binary number 01000100, while each of the 0's are represented by the binary number 00000000. Since the bogus IP address in this example has only two bit locations (the second and the sixth) with a value of 1, the user has filtering privileges for all filtering categories except for filtering categories 2 and 6. If, for example, filtering category 2 is violence and category 6 is hate, the user will preferably be blocked from content that is designated violence or hate. By indicating the filtering category in this way, filtering procedures 1116 (FIG. 11) on the filtering server can determine the filtering categories that are applicable for the client computer that requested content. For this embodiment, there are 232 possible filter category combinations. In an alternative embodiment, multiple bits of a bogus IP address could be used to provide greater detail, such as, for example, a filtering level for a single filtering category. Alternatively, multiple bogus IP addresses could be used to provide greater detail or to provide more filtering categories. Once the IP address of the filtering router (second destination IP address) and bogus IP address have been added to the packet, the IP device then sends 1434 the packet towards the content filtering router specified in the IP header of the packet, i.e., toward the second destination IP address. The packet is received 1436 by the content filtering router, which removes 1437 the second destination IP address from the header. This is done to avoid the requested content from having to return to the client computer via the filtering router. This allows the content to find the most efficient route back to the client computer using dynamic routing. Then, the filtering procedures 1116 (FIG. 11) determine 1438 whether the content server IP address (first destination IP address) is in an address block in the routing table 1220 (FIG. 12) of the filtering router. If the content server's IP address (first destination IP address) is in a positive address block (1438), the packet is routed 1440 to the content server as requested. The content server receives 1442 the packet and sends 1444 the content toward the IP device. The content is dynamically routed back to the IP device and received and sent 1446 by the IP device to the client computer that made the request. The client computer subsequently receives and displays 1448 the content. In an alternative embodiment, if the first destination IP address is not in a negative address block (1438), the packet is routed in the same manner as if the first destination IP address is in a positive address block (1438), as just described. If, however, the content server IP address (first destination IP address) is not in any positive address blocks (1438)—or, in an alternative embodiment, if the first destination IP address is in a negative address block (1438)—then the packet requesting the filtered content is routed 1450 to the content filtering server. The filtering server receives 1452 the packet and determines 1454 whether the IP address is associated with content that should be filtered using the filtering procedures 1328 (FIG. 13). The determination is made by comparing the first destination IP address with the URL/IPs of entries 1332(1)-(N) (FIG. 13) in the exclusionary content filter database 1330 (FIG. 13) of the filtering server. If it is determined (1454—No) that the IP address is not on the list of URUIPs, the filtering server sends 1456 the packet on to the content server. The content server receives 1458 the packet and sends 1460 the requested content back to the IP device. The IP device receives the requested content and sends 1462 the content to the client computer that requested the content. The client computer receives and displays 1464 the content. If it is determined (1454—Yes) that the IP address is on the list, then the filtering server compares 1466 the bogus IP address (indicating a filtering privilege) with the filter category associated with the URL/IP in the exclusionary content filter database 1330 (FIG. 13). In a preferred embodiment, the bogus IP address and the filter category are both 32 bits long. For an AND operation that ANDs two bits with the same bit location together, the result is 1 if both of the bits have a value of 1, and the result is 0 if one or both of the bits have a value of 0, at that bit location. Accordingly, a logical bit-wise AND operation, or some other comparing operation, may be used to determine at each bit location whether the bits of the bogus IP address correspond to a filtering category that is represented in the associated filter category of the URL/IP that matches the first destination IP address. This AND operation can be illustrated by, for simplicity, using 4 bits in the following 3 examples: ExampleBogus IPFiltering NumberAddressCategoryResult 11000AND1001=1000 21000AND0111=0000 31000AND0000=0000 In each example, the bogus IP address associated with the request has the first bit location set to “1”. For the purposes of this example, a “1” means the filtering category associated with this bit location is applicable (i.e., the request should be blocked based upon this category). A “0”, on the other hand, means the filtering category associated with the bit location is not applicable (i.e., the request should not be blocked based upon this category). In Example 1, categories 1 and 4 (as indicated by the “1” in the first and fourth bit locations) are the filtering categories associated with the URL/IP in the exclusionary content filter database 1330 (FIG. 13) that matches the first IP address. The filtering privilege for the user requesting the content is for category 1 (as indicated by the ‘1’ in the first bit location). ANDing the filtering privilege and the filtering category together shows that the content should be filtered based upon category 1. For instance, if category 1 is pornography, category 2 is religion, category 3 is hate, and category 4 is violence, the filtering privileges indicated by the bogus IP address of 1000 would be for religion (category 2), hate (category 3) and violence (category 4), but not for pornography (category 1). The filtering category 1001 means that the content server contains content that has been categorized as pornographic (category 1) and violent (category 4). Though the filtering privileges include violence, they do not include pornography. Accordingly, as is illustrated by the result of 1000, the content for this site is blocked because it has been determined to contain pornography (category 1). In Example 2, the filtering categories (0111) are categories 2, 3, and 4. In this case, as is appropriate since the filtering privilege (1000) only disallows category 1, ANDing the filtering privilege and filtering category shows that the content should not be filtered (0000). Thus, if the filtering privileges allow access to religion (category 2), hate (category 3), and violence (category 4), but not pornography (category 1), then access to content that has been determined to contain religion, hate, and violence would not be blocked. In Example 3, the filtering categories (0000) indicate that the content is not blocked, regardless of filtering privilege. As expected, ANDing the filtering privilege and filtering category shows (0000) that the content should not be filtered, regardless of the filtering privilege. In this manner, or by some other comparing operation, the filtering server determines 1468 whether at least one of these filtering categories matches a filtering category associated with the URL in the exclusionary content filter database 1330 (FIG. 13). If there is no match (1468—No), the filtering server sends 1456 the packet to the content server. The content server receives 1458 the packet and sends 1460 the requested content back to the IP device. The IP device receives the requested content and sends 1462 the content to the client computer that requested the content. The client computer receives and displays 1464 the content. If, on the other hand, the filtering server determines that there is a match (1468—Yes), the request for content is blocked 1470. Preferably the server will send 1472 an authorization request to the client computer, including a notification that the request was blocked. In another embodiment, when the request is blocked 1470, the server may simply dump or discard the request (e.g., the packet could be routed to an output port that is not connected to anything). In an alternative embodiment, the packet may be redirected to an alternate server, which receives the packet and sends other content towards the IP device, such as a “blocked content” page. In the preferred embodiment, the IP device forwards 1474 the authorization request to the client computer, which receives 1476 the authorization request. The user may be prompted to enter, for example, a username and password at the client computer. The username and password serve as authorization. The client computer preferably sends 1478 a packet containing a request (preferably the original request for content) along with the authorization. The IP device receives 1480 the packet with authorization. The authentication procedures 1118 determine if the filtering database 1128 may be updated by comparing the username and password to values in the user database 1132. If the username and password are in the user database 1132, the authentication procedures 1118 update 1482 the policy and privilege in the filtering database 1128 associated with the IP address of the requesting client computer with the policy and privilege values in the user database 1132 that are associated with the username and password. This is similar to when a user logs in (described previously). Once the filtering database has been updated, the IP device determines 1431 filtering privileges and continues as previously described. Alternatively, the updating 1482 sets filtering to inactive for that user and the packet is routed as requested. In another embodiment, the packet can be sent to the service provider, which, in turn, can send a message to the client computer that made the request, informing the user that the requested content has been blocked or filtered. Or the service provider could send the authentication message to the user. FIG. 15 illustrates a method for providing access by an administrator for a user who is denied content based on the user's filtering privileges. An IP device first receives 1502 a packet containing a request for content. The IP device determines 1504 the filtering privileges for the user in a manner described above. Using the techniques described above, it is determined 1506 whether the requested content is restricted. If the content is not restricted (1506-N), the IP device requests 1508 the content from the content provider, receives 1510 the content from the content provider, and forwards 1512 the content to the user. If the user is done (1514-Y), the process ends, otherwise (1514-N) the process continues at step 1502. If the content is restricted (1506-Y), then the IP device transmits a filter page for display 1516 to the user. The filter page preferably includes a link to an administrator with full filtering privileges, or at least more filtering privileges than the user has. The link is preferably an email link, but could be any type of link. In an alternative, instead of a link, text is provided. The text could be a phone number or the name of one or more administrators. An administrator may be a parent and the user a child. Or the administrator could be a librarian and the user a patron of the library. If the user does not follow the link (1518-N), the process continues at step 1514. Otherwise (1518-Y), the IP device (or the user) notify 1520 the administrator that access has been denied for the user. The notification preferably includes a message explaining that access was denied and that the user desires access. The notification may also include a link that, when clicked, quickly generates a response with permission to obtain the requested content. When the IP device accepts 1522 the administrator's reply, the IP device determines 1524 whether access is now allowed, based upon the administrator's filtering privileges. If access is no longer restricted (1524-Y), the process continues at step 1508. Otherwise (1524-N) the process continues at step 1514. While the foregoing description and drawings represent the preferred embodiment of the present invention, it will be understood that various additions, modifications and substitutions may be made therein without departing from the spirit and scope of the present invention as defined in the accompanying claims. In particular, it will be clear to those skilled in the art that the present invention may be embodied in other specific forms, structures, arrangements, proportions, and with other elements, materials, and components, without departing from the spirit or essential characteristics thereof. The presently disclosed embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims, and not limited to the foregoing description. Furthermore, it should be noted that the order in which the process is performed may vary without substantially altering the outcome of the process.
__label__pos
0.814869
Signal slots across threads 'Re: [vtkusers] QVTKWidget signal/slot question' - MARC Note that such a protocol must be enforced for the data or resource a mutex is protecting across all threads that. threads signal on. Multithreaded Programming.Qt5 Tutorial: Signals and Slots. In this tutorial, we will learn QtGUI project with signal and slot mechanism. C++/Qt Sharing Data Across Threads - erickveil PPRuNe Forums - View Single Post - A380 appears to cause I'm still an instrument student, but, here is an interesting one I just came across. I'm used to substituting a GPS for DME from a VOR (pull up. Gebrauchsanweisung für qt signal and slots zum download und. implementation qt signals and slots across threads qt signals and slots 5.4 qt signals.Weak signal ham radio communication Brought to you by:. // implementations where the template method is a Qt slot which is.Diez B. Roggisch It depends on the toolkit you use. Qt has thread-safe custom events in 3.x, and afaik signal/slots (and thus events) are generally thread-safe in 4.x.Lock, mutex, semaphore… what's the difference?. args) { // Create the semaphore with 3 slots, where 3 are. but across its many threads,.This section describes the new style of connecting signals and slots. [Python] Gui thread and async jobs. - Grokbase Thread Support in Qt. a thread-safe way of posting events, and signal-slot connections across threads. Signals and Slots Across Threads. A complementary overview of signal mirror development across. History of US Signal MIrror. (see the March 1946 article attached at the start of this thread).How to use QThread properly. do a connect from a signal to a slot,. does not like the parent-child relationship to go across from one thread to another.Connections may be direct (ie. synchronous) or queued (ie. asynchronous).I recently took the strap/harness out of my Baby Trend Flex Loc Galaxy Car seat to clean it. There are 4 slots (two on each side) of the base of the seat, and the. The following code demonstrates the definition, connection and emit of a.You should also be aware that pyuic4 generates code that uses.The reference count of the object being passed is maintained automatically.Thread-Safe Signals/Slots using C++11. Signals may be invoked from multiple threads,. In order to make signal emission lock-free,.List of screw drives Part of a series on. so they do not align to form intersecting slots across the top of. A version with left-hand threads is called an. In this case the mainwindow would be the controller that connects the model (signals from other classes) with view (sets button statuses and statusbar text). WSJT / WSJT / [r4371] /tags/wsjtx-1.4.0-rc1 It seems silly to have to write another class with signals that match all the slots of the MainWindow, and then inherit that class so that all my workers can throw those signal.You should understand the following important concepts about. has_slots<>. The listener connects to the signal by calling the. signal_thread().Multiple slots can connect to a single signal, and a single slot can connect to multiple signals.6.087 Lecture 13 – January 28, 2010 Review. unblocking a waiting thread • Mutex – loc king calls P(s). abstraction to enable communication across a.• signal and slot are Qt keywords, processed by the meta compiler. Code (connecting signals and slots, Qt5 approach) { //need some data to plot. Thread: A380 appears to cause interference on Loc/GS signal. being so long it first swings across the opposite side of the runway before taking an exit thereby. Async Programming - Introduction to Async/Await on ASP.NET Why I dislike Qt signals/slots. observer for a zero-arg signal. Another thread can come along and "fire. two separate signals to get across all the. Chapter 4: Threads Changes by MA Doman 2013 * * * * * * * * * * * * * * * * Signal Handling Signals are used in UNIX systems to notify a process that a particular.You need to post QEvent-derived instances to the main thread.Although PyQt4 allows any Python callable to be used as a slot when connecting.
__label__pos
0.601896
Python is a programming language that is designed for universal purpose. It aims to highlight the code readability with the help of significant indentation. It is portable, as it has the ability to run on multiple operating systems — for example, Windows, Linux, and MacOS. It is an object-oriented programming language, but it also facilitates multiple programming paradigms — for example, procedural and functional programming. About Python Python was created by a Dutch programmer, Guido van Rossum, in the late 1980s. The name Python came from a BBC comedy series called “Monty Python’s Flying Circus.” Van Rossum created Python after his years of experience in the ABC programming language. A non-profit organization was founded in 2001 to promote the development of the Python community and manage several responsibilities for various processes within the Python community. Python remains one of the most popular programming language in the world, according to a survey conducted by Stack Overflow in 2022 and a research project created by PYPL. Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.763996
JAVA EXAMPLE PROGRAMS JAVA EXAMPLE PROGRAMS Publish Your Article Here Program: Assertion method Assert.assertArrayEquals() example. Java Class: org.junit.Assert Assert class provides a set of assertion methods useful for writing tests. assertArrayEquals() method checks that two object arrays are equal or not. If they are not, it throws an AssertionError with the given message. Incase if expected input and actual inputs are null, then they are considered to be equal. It checks whether both arrays are having same number of elements or not, and all elements should be same. It compares based on the order. If mismatch in order results in failure. package com.java2novice.junit.tests; import org.junit.Test; import static org.junit.Assert.*; public class MyAssertArrayEqualsTest { @Test public void myTestMethod(){ /** * we are demonstrating the usage of assertArrayEquals() * method here, so we are preparing input data here itself. * In real scenario, we will consider the methods returned * value which suppose to be test, as a input. */ //assume that the below array represents expected result String[] expectedOutput = {"apple", "mango", "grape"}; //assuem that the below array is returned from the method //to be tested. String[] methodOutput = {"apple", "mango", "grape"}; assertArrayEquals(expectedOutput, methodOutput); } } << Previous Program | Next Program >> blog comments powered by Disqus Java JUnit Examples 1. Simple JUnit test using @Test annotation. 2. List of JUnit annotations. 3. Assertion method Assert.assertArrayEquals() example. 4. How to do JUnit test for comapring two list of user defined objects? 5. Assertion method Assert.assertEquals() example. 6. Assertion method Assert.assertFalse() example. 7. Assertion method Assert.assertTrue() example. 8. Assertion method Assert.assertNotNull() example. 9. Assertion method Assert.assertNull() example. 10. Assertion method Assert.assertNotSame() example. 11. Assertion method Assert.assertSame() example. Knowledge Centre Stream and types of Streams A Stream is an abstraction that either produces or consumes information. There are two types of Streams and they are: Byte Streams: Provide a convenient means for handling input and output of bytes. Byte Streams classes are defined by using two abstract classes, namely InputStream and OutputStream. Character Streams: Provide a convenient means for handling input & output of characters. Character Streams classes are defined by using two abstract classes, namely Reader and Writer. Famous Quotations Insanity: doing the same thing over and over again and expecting different results. -- Albert Einstein About Author Most Visited Pages Other Interesting Sites Reference: Java™ Platform Standard Ed. 7 - API Specification | Java is registered trademark of Oracle. Privacy Policy | Copyright © 2017 by Nataraja Gootooru. All Rights Reserved.
__label__pos
0.981537
  Version menu_open Wwise SDK 2023.1.4 AK::Wwise::Plugin::XmlNodeType Namespace Reference Types of possible XML elements. See MSDN documentation topics for XmlNodeType. More... Enumerations enum  NodeType {   Attribute = 2, CDATA = 4, Comment = 8, Document = 9,   DocumentFragment = 11, DocumentType = 10, Element = 1, EndElement = 15,   EndEntity = 16, Entity = 6, EntityReference = 5, None = 0,   Notation = 12, ProcessingInstruction = 7, SignificantWhitespace = 14, Text = 3,   Whitespace = 13, XmlDeclaration = 17 }   Detailed Description Types of possible XML elements. See MSDN documentation topics for XmlNodeType. Was this page helpful? Need Support? Questions? Problems? Need more info? Contact us, and we can help! Visit our Support page Tell us about your project. We're here to help. Register your project and we'll help you get started with no strings attached! Get started with Wwise
__label__pos
0.999013
PHP funkce HackForum PHP funkce# Ahoj! Mam nasledujici funkci v PHP, ktera "vysaje" text, ktery je mezi dvema retezci. Ptam se Vas, zda nevite jak funkci optimalizovat, aby byla rychlejsi, pripadne mate nejake efektivnejsi reseni teto situace? Dekuji function vysaj($zacatek,$konec,$text) { $delka=strlen($zacatek); $text = StrStr($text, $zacatek); $text=substr($text,$delka); $pomocnej=StrStr($text, $konec); return str_replace($pomocnej,"",$text); } (odpovdt) Aoj | 82.27.163.*2.4.2013 0:52 re: PHP funkce# Zkus pout regulrn vraz: function vysaj2($zacatek,$konec,$text) { preg_match("/^($zacatek)(.+)($konec)$/", $text, $output); return $output[2]; } ---------- Teprve kdy vstvte s hackingem a ulhte s mylenkou na nj, mte anci bt hackerem. (odpovdt) .cCuMiNn. | E-mail | Website | PGP2.4.2013 19:02 re: PHP funkce# function vysaj3($begin, $end, $text) { $t = strpos($text, $begin) + strlen($begin); return substr($text, $t, strpos($text, $end) - $t); } (odpovdt) independent2.4.2013 20:30 re: PHP funkce# Dekuju moc!! otestuju a urcite dam vedet! ;) (odpovdt) Aoj | 82.27.163.*3.4.2013 1:44 re: PHP funkce# @.cCuMiNn.: Vae een bych rozhodn nedoporuoval. Testoval jste ho, co se do rychlosti a funknosti(undefined index notice) te? een od independenta je urit lep volbou. (odpovdt) mb0y | E-mail3.4.2013 6:32 re: PHP funkce# mb0y: 1) pro oeten funknosti sta dopedu inicializovat pole $output[]. 2) mon se budete divit, ale m een je skuten nejrychlej, a to zhruba v tomto pomru: - vysaj1() : 10 - vysaj2() : 7 - vysaj3() : 8 ---------- Teprve kdy vstvte s hackingem a ulhte s mylenkou na nj, mte anci bt hackerem. (odpovdt) .cCuMiNn. | E-mail | Website | PGP3.4.2013 8:35 re: PHP funkce# Kdyz jsem to posilal, chtel jsem to zmerit taky, ale ta funkce vysaj2() mi prave nejak nechtela fungovat a protoze mam s regexama minimum zkusenosti, nevedel jsem, jak to opravit (a nevim to do ted). Aoj to podle me chce pouzivat na vyparsovani textu mezi tagy a imho by moje reseni skutecne melo byt nejrychlejsi. Ten preg_match imho bude prochazet cely text a hledat vsechny vyskyty, zatimco strpos se u prvniho vyskytu zastavi. Zvlaste u delsich textu by rozdil mohl byt markantni. Mozna se ale pletu. vysaj2 mi hazi nasledujici error: Notice: Undefined offset: 2 Inicializovat pole tedy nestaci, protoze ten preg_match tam proste zadny vysledek neulozi. (odpovdt) independent3.4.2013 16:53 re: PHP funkce# OK, omlouvm se a beru zpt. Nkde se vloudil njak otek a fn 2. je skuten nejpomalej. vsledek testu: TRUE testy 1: 00:02 - Vrac: " dlouheho textu a jeho " 2: 00:05 - Vrac: " dlouheho textu a jeho " 3: 00:02 - Vrac: " dlouheho textu a jeho " FALSE testy 1: 00:02 - Vrac: "" 2: 00:03 - Vrac: "" 3: 00:02 - Vrac: "tek dlouheho textu a jeho k" ---------- Teprve kdy vstvte s hackingem a ulhte s mylenkou na nj, mte anci bt hackerem. (odpovdt) .cCuMiNn. | E-mail | Website | PGP3.4.2013 21:09 re: PHP funkce# Kd: <?php function vysaj1($zacatek,$konec,$text) { $delka=strlen($zacatek); $text = StrStr($text, $zacatek); $text=substr($text,$delka); $pomocnej=StrStr($text, $konec); return str_replace($pomocnej,"",$text); } function vysaj2($zacatek,$konec,$text) { preg_match("/^($zacatek)(.+)($konec)$/", $text, $output); return isset($output[2])?$output[2]:''; } function vysaj3($begin, $end, $text) { $t = strpos($text, $begin) + strlen($begin); return substr($text, $t, strpos($text, $end) - $t); } $zacatek = "zacatek"; $konec = "konec"; $text = "zacatek dlouheho textu a jeho konec"; echo "<h2>TRUE testy</h2>"; $time = Time(); for ($i=0; $i<500000; $i++) { $a = vysaj1($zacatek, $konec, $text); } echo "<b>1: </b>".StrFTime("%M:%S", Time()-$time) .' - Vrac: "'.vysaj1($zacatek, $konec, $text).'"<br><br>'; $time = Time(); for ($i=0; $i<500000; $i++) { $a = vysaj2($zacatek, $konec, $text); } echo "<b>2: </b>".StrFTime("%M:%S", Time()-$time) .' - Vrac: "'.vysaj2($zacatek, $konec, $text).'"<br><br>'; $time = Time(); for ($i=0; $i<500000; $i++) { $a = vysaj3($zacatek, $konec, $text); } echo "<b>3: </b>".StrFTime("%M:%S", Time()-$time) .' - Vrac: "'.vysaj3($zacatek, $konec, $text).'"<br><br>'; $zacatek = "blaf"; $konec = "blaf"; $text = "zacatek dlouheho textu a jeho konec"; echo "<h2>FALSE testy</h2>"; $time = Time(); for ($i=0; $i<500000; $i++) { $a = vysaj1($zacatek, $konec, $text); } echo "<b>1: </b>".StrFTime("%M:%S", Time()-$time) .' - Vrac: "'.vysaj1($zacatek, $konec, $text).'"<br><br>'; $time = Time(); for ($i=0; $i<500000; $i++) { $a = vysaj2($zacatek, $konec, $text); } echo "<b>2: </b>".StrFTime("%M:%S", Time()-$time) .' - Vrac: "'.vysaj2($zacatek, $konec, $text).'"<br><br>'; $time = Time(); for ($i=0; $i<500000; $i++) { $a = vysaj3($zacatek, $konec, $text); } echo "<b>3: </b>".StrFTime("%M:%S", Time()-$time) .' - Vrac: "'.vysaj3($zacatek, $konec, $text).'"<br><br>'; ?> ---------- Teprve kdy vstvte s hackingem a ulhte s mylenkou na nj, mte anci bt hackerem. (odpovdt) .cCuMiNn. | E-mail | Website | PGP3.4.2013 21:30 re: PHP funkce# [code=php] /** * Vysv ze zadanho textu st mezi dvma definovanmi etzci * V ppad, e je nalezen text zanajc a konc poadovanmi * etzci, je navrcen. * Je hledn pouze prvn vskyt - zleva doprava, nebo zprava doleva ($rtl). * * @param string $zacinaNa etzec, kterm zan text * @param string $konciNa etzec, kterm kon hledan text * @param string $text Text ve kterm se hled * @optional boolean $rtl Pokud je true, hled od konce * @return string|boolean V ppad nalezen poadovanch etzc vrt text mezi, jinak vrac false */ function vysaj4($zacinaNa, $konciNa, $text, $rtl = false) { // hledn zleva doprava nebo zprava doleva? $fce = $rtl ? "strrpos" : "strpos"; // zkusme najt start $ret = $fce($text, $zacinaNa) + strlen($konciNa); if($ret === false){ return false; } return substr($text, $ret, $fce($text, $konciNa) - $ret); } (odpovdt) mb0y | E-mail4.4.2013 8:38 re: PHP funkce# WOW!! Naprosto vyerpvajc!! Neekal jsem tolik odpovd. Nakonec dost zajmav! Oceuji vai iniciativu!! Dky moc!:) (odpovdt) Aoj | 82.27.163.*7.4.2013 1:48 Zpt Svou ideln brigdu na lto najdete na webu Ideln brigda         BBCode
__label__pos
0.862424
robots.txt 1 Robots.txt is a text file that instructs web robots or crawlers on which pages of a website to crawl or avoid crawling. It is an essential file that can help website owners control which pages they want search engines to index and which pages they don’t. In this article, we will discuss how to create a robots.txt file and how to implement it on a website. Basic robots.txt Format The format of a robots.txt file is quite simple. Here’s an example of the basic structure: User-agent: [user-agent name] Disallow: [URL string not to be crawled]   User-agent: [user-agent name] Disallow: [URL string not to be crawled] User-agent refers to the web robot you want to give instructions to, and Disallow refers to the URL string that you want to prevent the robot from crawling. Creating a Robots.txt File Creating a robots.txt file is a simple process. All you need is a text editor such as Notepad or TextEdit. Here are the steps to create a robots.txt file: 1. Open a new document in your text editor. 2. Type in the user agent and Disallow directives for the pages you want to block. 3. Save the file as “robots.txt” on the root directory of your website.   Implementing Robots.txt on a Website After creating the robots.txt file, the next step is to implement it on your website. Here are the steps to implement robots.txt on a website: 1. Upload the robots.txt file to the root directory of your website. This is the top-level directory of your website, usually where your homepage is located. 2. Verify the file’s existence by typing in your website’s URL followed by /robots.txt. For example, if your website’s URL is https://www.example.com/, type in https://www.example.com/robots.txt. If the file is uploaded correctly, you should see the contents of your robots.txt file displayed on the screen. 3. Test your robots.txt file using a robots.txt checker tool to ensure it works correctly. Tips for Creating an Effective Robots.txt File Here are some tips for creating a practical robots.txt file: • Make sure you use the correct syntax when creating your robots.txt file. A single error in syntax can cause the file to malfunction. • Include a sitemap directive in your robots.txt file to help search engines find all the pages on your website. • Use wildcards to block multiple pages at once. For example, you can use “Disallow: /blog/*” to block all pages in your blog section. • Avoid using robots.txt to block sensitive or private information. If the information is confidential, it should not be on a public-facing website in the first place. • Regularly update your robots.txt file to keep it current with any changes to your website’s structure. The Robots.txt file plays an important role in SEO. It is a powerful tool that helps website owners control which pages they want search engines to crawl and index. By using a Robots.txt file, you can provide instructions to search engine robots on which pages to crawl and which pages to ignore. Here are some ways that Robots.txt can help with SEO: • Prevent Crawling of Duplicate Content: If you have duplicate content on your website, you can use the Robots.txt file to block the search engines from crawling those pages. This helps to prevent duplicate content issues that could harm your SEO efforts. • Improve Crawling Efficiency: By using Robots.txt to instruct search engine robots to avoid crawling certain pages, you can help improve the efficiency of the crawling process. This can help search engines to crawl and index your website more efficiently. • Protect Sensitive Pages: You can use Robots.txt to protect sensitive pages on your website from being crawled by search engine robots. This is particularly important if you have pages that contain confidential information or personal data. • Manage Crawling Frequency: You can use Robots.txt to manage the frequency at which search engine robots crawl your website. By controlling the crawling frequency, you can help ensure that your website is crawled regularly but not too often. • Block Pages with Thin Content: If you have pages on your website with thin or low-quality content, you can use Robots.txt to prevent search engine robots from crawling those pages. This helps to prevent those pages from being indexed, which can have a positive impact on your SEO efforts. Robots.txt is an essential file that can help website owners control which pages they want search engines to index and which pages they don’t. Creating a robots.txt file is a simple process, and implementing it on your website is easy. Following the tips outlined in this article, you can create an effective robots.txt file that helps improve your website’s SEO and protects your content. For the latest tech news and reviews, follow Rohit Auddy on Twitter, Facebook, and Google News. Similar Posts Leave a Reply Your email address will not be published. Required fields are marked *
__label__pos
0.88995
/* $NetBSD: irix_prctl.c,v 1.18 2002/10/14 21:14:25 manu Exp $ */ /*- * Copyright (c) 2001-2002 The NetBSD Foundation, Inc. * All rights reserved. * * This code is derived from software contributed to The NetBSD Foundation * by Emmanuel Dreyfus. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. All advertising materials mentioning features or use of this software * must display the following acknowledgement: * This product includes software developed by the NetBSD * Foundation, Inc. and its contributors. * 4. Neither the name of The NetBSD Foundation nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. */ #include __KERNEL_RCSID(0, "$NetBSD: irix_prctl.c,v 1.18 2002/10/14 21:14:25 manu Exp $"); #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include struct irix_sproc_child_args { struct proc **isc_proc; void *isc_entry; void *isc_arg; size_t isc_len; int isc_inh; struct proc *isc_parent; struct irix_share_group *isc_share_group; int isc_child_done; }; static void irix_sproc_child __P((struct irix_sproc_child_args *)); static int irix_sproc __P((void *, unsigned int, void *, caddr_t, size_t, pid_t, struct proc *, register_t *)); static struct irix_shared_regions_rec *irix_isrr_create __P((vaddr_t, vsize_t, int)); #ifdef DEBUG_IRIX static void irix_isrr_debug __P((struct proc *)); #endif int irix_sys_prctl(p, v, retval) struct proc *p; void *v; register_t *retval; { struct irix_sys_prctl_args /* { syscallarg(int) option; syscallarg(void *) arg1; } */ *uap = v; int option = SCARG(uap, option); #ifdef DEBUG_IRIX printf("irix_sys_prctl(): option = %d\n", option); #endif switch(option) { case IRIX_PR_GETSHMASK: { /* Get shared resources */ struct proc *p2; int shmask = 0; struct irix_emuldata *ied; p2 = pfind((pid_t)SCARG(uap, arg1)); if (p2 == p || SCARG(uap, arg1) == 0) { /* XXX return our own shmask */ return 0; } if (p2 == NULL) return EINVAL; ied = (struct irix_emuldata *)p->p_emuldata; if (ied->ied_shareaddr) shmask |= IRIX_PR_SADDR; if (p->p_fd == p2->p_fd) shmask |= IRIX_PR_SFDS; if (p->p_cwdi == p2->p_cwdi); shmask |= (IRIX_PR_SDIR|IRIX_PR_SUMASK); *retval = (register_t)shmask; return 0; break; } case IRIX_PR_LASTSHEXIT: /* "Last sproc exit" */ /* We no nothing */ break; case IRIX_PR_GETNSHARE: { /* Number of sproc share group memb.*/ struct irix_emuldata *ied; struct irix_emuldata *iedp; struct irix_share_group *isg; int count; ied = (struct irix_emuldata *)p->p_emuldata; if ((isg = ied->ied_share_group) == NULL) { *retval = 0; return 0; } count = 0; (void)lockmgr(&isg->isg_lock, LK_SHARED, NULL); LIST_FOREACH(iedp, &isg->isg_head, ied_sglist) count++; (void)lockmgr(&isg->isg_lock, LK_RELEASE, NULL); *retval = count; return 0; break; } case IRIX_PR_TERMCHILD: { /* Get SIGHUP when parent's exit */ struct irix_emuldata *ied; ied = (struct irix_emuldata *)(p->p_emuldata); ied->ied_termchild = 1; break; } case IRIX_PR_ISBLOCKED: { /* Is process blocked? */ pid_t pid = (pid_t)SCARG(uap, arg1); struct irix_emuldata *ied; struct proc *target; struct pcred *pc; if (pid == 0) pid = p->p_pid; if ((target = pfind(pid)) == NULL) return ESRCH; if (irix_check_exec(target) == 0) return 0; pc = p->p_cred; if (!(pc->pc_ucred->cr_uid == 0 || \ pc->p_ruid == target->p_cred->p_ruid || \ pc->pc_ucred->cr_uid == target->p_cred->p_ruid || \ pc->p_ruid == target->p_ucred->cr_uid || \ pc->pc_ucred->cr_uid == target->p_ucred->cr_uid)) return EPERM; ied = (struct irix_emuldata *)(target->p_emuldata); *retval = (ied->ied_procblk_count < 0); return 0; break; } default: printf("Warning: call to unimplemented prctl() command %d\n", option); return EINVAL; break; } return 0; } int irix_sys_pidsprocsp(p, v, retval) struct proc *p; void *v; register_t *retval; { struct irix_sys_pidsprocsp_args /* { syscallarg(void *) entry; syscallarg(unsigned) inh; syscallarg(void *) arg; syscallarg(caddr_t) sp; syscallarg(irix_size_t) len; syscallarg(irix_pid_t) pid; } */ *uap = v; /* pid is ignored for now */ printf("Warning: unsupported pid argument to IRIX sproc\n"); return irix_sproc(SCARG(uap, entry), SCARG(uap, inh), SCARG(uap, arg), SCARG(uap, sp), SCARG(uap, len), SCARG(uap, pid), p, retval); } int irix_sys_sprocsp(p, v, retval) struct proc *p; void *v; register_t *retval; { struct irix_sys_sprocsp_args /* { syscallarg(void *) entry; syscallarg(unsigned) inh; syscallarg(void *) arg; syscallarg(caddr_t) sp; syscallarg(irix_size_t) len; } */ *uap = v; return irix_sproc(SCARG(uap, entry), SCARG(uap, inh), SCARG(uap, arg), SCARG(uap, sp), SCARG(uap, len), 0, p, retval); } int irix_sys_sproc(p, v, retval) struct proc *p; void *v; register_t *retval; { struct irix_sys_sproc_args /* { syscallarg(void *) entry; syscallarg(unsigned) inh; syscallarg(void *) arg; } */ *uap = v; return irix_sproc(SCARG(uap, entry), SCARG(uap, inh), SCARG(uap, arg), NULL, p->p_rlimit[RLIMIT_STACK].rlim_cur, 0, p, retval); } static int irix_sproc(entry, inh, arg, sp, len, pid, p, retval) void *entry; unsigned int inh; void *arg; caddr_t sp; size_t len; pid_t pid; struct proc *p; register_t *retval; { int bsd_flags = 0; struct exec_vmcmd vmc; int error; struct proc *p2; struct irix_sproc_child_args *isc; struct irix_emuldata *ied; struct irix_emuldata *iedp; struct irix_share_group *isg; segsz_t stacksize; #ifdef DEBUG_IRIX printf("irix_sproc(): entry = %p, inh = %x, arg = %p, sp = 0x%08lx, len = 0x%08lx, pid = %d\n", entry, inh, arg, (u_long)sp, (u_long)len, pid); #endif if (len == 0) return EINVAL; if (inh & IRIX_PR_SFDS) bsd_flags |= FORK_SHAREFILES; if (inh & (IRIX_PR_SUMASK|IRIX_PR_SDIR)) { bsd_flags |= FORK_SHARECWD; /* Forget them so that we don't get warning below */ inh &= ~(IRIX_PR_SUMASK|IRIX_PR_SDIR); } /* We know how to do PR_SUMASK and PR_SDIR together only */ if (inh & IRIX_PR_SUMASK) printf("Warning: unimplemented IRIX sproc flag PR_SUMASK\n"); if (inh & IRIX_PR_SDIR) printf("Warning: unimplemented IRIX sproc flag PR_SDIR\n"); /* * If revelant, initialize the share group structure */ ied = (struct irix_emuldata *)(p->p_emuldata); if (ied->ied_share_group == NULL) { isg = malloc(sizeof(struct irix_share_group), M_EMULDATA, M_WAITOK); lockinit(&isg->isg_lock, PZERO|PCATCH, "sharegroup", 0, 0); isg->isg_refcount = 0; (void)lockmgr(&isg->isg_lock, LK_EXCLUSIVE, NULL); LIST_INIT(&isg->isg_head); LIST_INSERT_HEAD(&isg->isg_head, ied, ied_sglist); isg->isg_refcount++; (void)lockmgr(&isg->isg_lock, LK_RELEASE, NULL); ied->ied_share_group = isg; } /* * Setting up child stack */ if (inh & IRIX_PR_SADDR) { if (sp == NULL) { /* * All share group members have vm_maxsaddr set * to the bottom of the lowest stack in address space, * therefore we map the new stack there. */ sp = p->p_vmspace->vm_maxsaddr; /* Compute new stacks's bottom address */ sp = (caddr_t)trunc_page((u_long)sp - len); } /* Now map the new stack */ bzero(&vmc, sizeof(vmc)); vmc.ev_addr = trunc_page((u_long)sp); vmc.ev_len = round_page(len); vmc.ev_prot = UVM_PROT_RWX; vmc.ev_flags = UVM_FLAG_COPYONW|UVM_FLAG_FIXED|UVM_FLAG_OVERLAY; vmc.ev_proc = vmcmd_map_zero; #ifdef DEBUG_IRIX printf("irix_sproc(): new stack addr=0x%08lx, len=0x%08lx\n", (u_long)sp, (u_long)len); #endif /* Normally it cannot be NULL since we just initialized it */ if ((isg = ied->ied_share_group) == NULL) panic("irix_sproc: NULL ied->ied_share_group"); IRIX_VM_SYNC(p, error = (*vmc.ev_proc)(p, &vmc)); if (error) return error; /* Update stack parameters for the share group members */ ied = (struct irix_emuldata *)p->p_emuldata; stacksize = (p->p_vmspace->vm_minsaddr - sp) / PAGE_SIZE; (void)lockmgr(&isg->isg_lock, LK_EXCLUSIVE, NULL); LIST_FOREACH(iedp, &isg->isg_head, ied_sglist) { iedp->ied_p->p_vmspace->vm_maxsaddr = (caddr_t)sp; iedp->ied_p->p_vmspace->vm_ssize = stacksize; } (void)lockmgr(&isg->isg_lock, LK_RELEASE, NULL); } /* * Arguments for irix_sproc_child() * This will be freed by the child. */ isc = malloc(sizeof(*isc), M_TEMP, M_WAITOK); isc->isc_proc = &p2; isc->isc_entry = entry; isc->isc_arg = arg; isc->isc_len = len; isc->isc_inh = inh; isc->isc_parent = p; isc->isc_share_group = isg; isc->isc_child_done = 0; if (inh & IRIX_PR_SADDR) { ied->ied_shareaddr = 1; } if ((error = fork1(p, bsd_flags, SIGCHLD, (void *)sp, len, (void *)irix_sproc_child, (void *)isc, retval, &p2)) != 0) return error; /* * The child needs the parent to stay alive until it has * copied a few things from it. We sleep whatever happen * until the child is done. */ while (!isc->isc_child_done) (void)tsleep(&isc->isc_child_done, PZERO, "sproc", 0); free(isc, M_TEMP); retval[0] = (register_t)p2->p_pid; retval[1] = 0; return 0; } static void irix_sproc_child(isc) struct irix_sproc_child_args *isc; { struct proc *p2 = *isc->isc_proc; int inh = isc->isc_inh; struct proc *parent = isc->isc_parent; struct frame *tf = (struct frame *)p2->p_md.md_regs; struct frame *ptf = (struct frame *)parent->p_md.md_regs; struct pcred *pc; struct plimit *pl; struct irix_emuldata *ied; struct irix_emuldata *parent_ied; #ifdef DEBUG_IRIX printf("irix_sproc_child()\n"); #endif /* * Handle shared VM space. The process private arena is not shared */ if (inh & IRIX_PR_SADDR) { int error; vaddr_t min, max; vsize_t len; struct irix_shared_regions_rec *isrr; /* * First, unmap the whole address space */ min = vm_map_min(&p2->p_vmspace->vm_map); max = vm_map_max(&p2->p_vmspace->vm_map); uvm_unmap(&p2->p_vmspace->vm_map, min, max); /* * Now, copy the mapping from the parent for shared regions */ parent_ied = (struct irix_emuldata *)parent->p_emuldata; LIST_FOREACH(isrr, &parent_ied->ied_shared_regions, isrr_list) { min = isrr->isrr_start; len = isrr->isrr_len; max = min + len; /* If this is a private region, skip */ if (isrr->isrr_shared == IRIX_ISRR_PRIVATE) continue; /* Copy the new mapping from the parent */ error = uvm_map_extract(&parent->p_vmspace->vm_map, min, len, &p2->p_vmspace->vm_map, &min, 0); if (error != 0) { #ifdef DEBUG_IRIX printf("irix_sproc_child(): error %d\n", error); #endif isc->isc_child_done = 1; wakeup(&isc->isc_child_done); killproc(p2, "failed to initialize share group VM"); } } /* Map and initialize the process private arena (unshared) */ error = irix_prda_init(p2); if (error != 0) { isc->isc_child_done = 1; wakeup(&isc->isc_child_done); killproc(p2, "failed to initialize the PRDA"); } } /* * Handle shared process UID/GID */ if (inh & IRIX_PR_SID) { pc = p2->p_cred; parent->p_cred->p_refcnt++; p2->p_cred = parent->p_cred; if (--pc->p_refcnt == 0) { crfree(pc->pc_ucred); pool_put(&pcred_pool, pc); } } /* * Handle shared process limits */ if (inh & IRIX_PR_SULIMIT) { pl = p2->p_limit; parent->p_limit->p_refcnt++; p2->p_limit = parent->p_limit; if(--pl->p_refcnt == 0) limfree(pl); } /* * Setup PC to return to the child entry point */ tf->f_regs[PC] = (unsigned long)isc->isc_entry; tf->f_regs[RA] = 0; /* * Setup child arguments */ tf->f_regs[A0] = (unsigned long)isc->isc_arg; tf->f_regs[A1] = 0; tf->f_regs[A2] = 0; tf->f_regs[A3] = 0; if (ptf->f_regs[S3] == (unsigned long)isc->isc_len) { tf->f_regs[S0] = ptf->f_regs[S0]; tf->f_regs[S1] = ptf->f_regs[S1]; tf->f_regs[S2] = ptf->f_regs[S2]; tf->f_regs[S3] = ptf->f_regs[S3]; } /* * Join the share group */ ied = (struct irix_emuldata *)(p2->p_emuldata); parent_ied = (struct irix_emuldata *)(parent->p_emuldata); ied->ied_share_group = parent_ied->ied_share_group; (void)lockmgr(&ied->ied_share_group->isg_lock, LK_EXCLUSIVE, NULL); LIST_INSERT_HEAD(&ied->ied_share_group->isg_head, ied, ied_sglist); ied->ied_share_group->isg_refcount++; (void)lockmgr(&ied->ied_share_group->isg_lock, LK_RELEASE, NULL); if (inh & IRIX_PR_SADDR) ied->ied_shareaddr = 1; /* * wakeup the parent as it can now die without * causing a panic in the child. */ isc->isc_child_done = 1; wakeup(&isc->isc_child_done); /* * Return to userland for a newly created process */ child_return((void *)p2); return; } int irix_sys_procblk(p, v, retval) struct proc *p; void *v; register_t *retval; { struct irix_sys_procblk_args /* { syscallarg(int) cmd; syscallarg(pid_t) pid; syscallarg(int) count; } */ *uap = v; int cmd = SCARG(uap, cmd); struct irix_emuldata *ied; struct irix_emuldata *iedp; struct irix_share_group *isg; struct proc *target; struct pcred *pc; int oldcount; int error, last_error; struct irix_sys_procblk_args cup; /* Find the process */ if ((target = pfind(SCARG(uap, pid))) == NULL) return ESRCH; /* May we stop it? */ pc = p->p_cred; if (!(pc->pc_ucred->cr_uid == 0 || \ pc->p_ruid == target->p_cred->p_ruid || \ pc->pc_ucred->cr_uid == target->p_cred->p_ruid || \ pc->p_ruid == target->p_ucred->cr_uid || \ pc->pc_ucred->cr_uid == target->p_ucred->cr_uid)) return EPERM; /* Is it an IRIX process? */ if (irix_check_exec(target) == 0) return EPERM; ied = (struct irix_emuldata *)(target->p_emuldata); oldcount = ied->ied_procblk_count; switch (cmd) { case IRIX_PROCBLK_BLOCK: ied->ied_procblk_count--; break; case IRIX_PROCBLK_UNBLOCK: ied->ied_procblk_count++; break; case IRIX_PROCBLK_COUNT: if (SCARG(uap, count) > IRIX_PR_MAXBLOCKCNT || SCARG(uap, count) < IRIX_PR_MINBLOCKCNT) return EINVAL; ied->ied_procblk_count = SCARG(uap, count); break; case IRIX_PROCBLK_BLOCKALL: case IRIX_PROCBLK_UNBLOCKALL: case IRIX_PROCBLK_COUNTALL: SCARG(&cup, cmd) = cmd -IRIX_PROCBLK_ONLYONE; SCARG(&cup, count) = SCARG(uap, count); last_error = 0; /* * If the process does not belong to a * share group, do it just for the process */ if ((isg = ied->ied_share_group) == NULL) { SCARG(&cup, pid) = SCARG(uap, pid); return irix_sys_procblk(p, &cup, retval); } (void)lockmgr(&isg->isg_lock, LK_SHARED, NULL); LIST_FOREACH(iedp, &isg->isg_head, ied_sglist) { /* Recall procblk for this process */ SCARG(&cup, pid) = iedp->ied_p->p_pid; error = irix_sys_procblk(iedp->ied_p, &cup, retval); if (error != 0) last_error = error; } (void)lockmgr(&isg->isg_lock, LK_RELEASE, NULL); return last_error; break; default: printf("Warning: unimplemented IRIX procblk command %d\n", cmd); return EINVAL; break; } /* * We emulate the process block/unblock using SIGSTOP and SIGCONT * signals. This is not very accurate, since on IRIX theses way * of blocking a process are completely separated. */ if (oldcount >= 0 && ied->ied_procblk_count < 0) /* blocked */ psignal(target, SIGSTOP); if (oldcount < 0 && ied->ied_procblk_count >= 0) /* unblocked */ psignal(target, SIGCONT); return 0; } int irix_prda_init(p) struct proc *p; { int error; struct exec_vmcmd evc; struct irix_prda *ip; struct irix_prda_sys ips; bzero(&evc, sizeof(evc)); evc.ev_addr = (u_long)IRIX_PRDA; evc.ev_len = sizeof(struct irix_prda); evc.ev_prot = UVM_PROT_RW; evc.ev_proc = *vmcmd_map_zero; if ((error = (*evc.ev_proc)(p, &evc)) != 0) return error; ip = (struct irix_prda *)IRIX_PRDA; bzero(&ips, sizeof(ips)); ips.t_pid = p->p_pid; /* * The PRDA ID must be unique for a PRDA. IRIX uses a small * integer, but we don't know how it is chosen. The PID * should be unique enough to get the work done. */ ips.t_prid = p->p_pid; error = copyout(&ips, (void *)&ip->sys_prda.prda_sys, sizeof(ips)); if (error) return error; /* Remeber the PRDA is private */ irix_isrr_insert((vaddr_t)IRIX_PRDA, sizeof(ips), IRIX_ISRR_PRIVATE, p); return 0; } int irix_vm_fault(p, vaddr, fault_type, access_type) struct proc *p; vaddr_t vaddr; vm_fault_t fault_type; vm_prot_t access_type; { int error; struct irix_emuldata *ied; struct vm_map *map; ied = (struct irix_emuldata *)p->p_emuldata; map = &p->p_vmspace->vm_map; if (ied->ied_share_group == NULL || ied->ied_shareaddr == 0) return uvm_fault(map, vaddr, fault_type, access_type); /* share group version */ (void)lockmgr(&ied->ied_share_group->isg_lock, LK_EXCLUSIVE, NULL); error = uvm_fault(map, vaddr, fault_type, access_type); irix_vm_sync(p); (void)lockmgr(&ied->ied_share_group->isg_lock, LK_RELEASE, NULL); return error; } /* * Propagate changes to address space to other members of the share group */ void irix_vm_sync(p) struct proc *p; { struct proc *pp; struct irix_emuldata *iedp; struct irix_emuldata *ied = (struct irix_emuldata *)p->p_emuldata; struct irix_shared_regions_rec *isrr; vaddr_t min; vaddr_t max; vsize_t len; int error; LIST_FOREACH(iedp, &ied->ied_share_group->isg_head, ied_sglist) { if (iedp->ied_shareaddr != 1 || iedp->ied_p == p) continue; pp = iedp->ied_p; error = 0; /* for each region in the target process ... */ LIST_FOREACH(isrr, &iedp->ied_shared_regions, isrr_list) { /* skip regions private to the target process */ if (isrr->isrr_shared == IRIX_ISRR_PRIVATE) continue; /* * XXX We should also skip regions private to the * original process... */ /* The region is shared */ min = isrr->isrr_start; len = isrr->isrr_len; max = min + len; /* Drop the region */ uvm_unmap(&pp->p_vmspace->vm_map, min, max); /* Clone it from the parent */ error = uvm_map_extract(&p->p_vmspace->vm_map, min, len, &pp->p_vmspace->vm_map, &min, 0); if (error) break; } if (error) killproc(pp, "failed to keep share group VM in sync"); } return; } static struct irix_shared_regions_rec * irix_isrr_create(start, len, shared) vaddr_t start; vsize_t len; int shared; { struct irix_shared_regions_rec *new_isrr; new_isrr = malloc(sizeof(struct irix_shared_regions_rec), M_EMULDATA, M_WAITOK); new_isrr->isrr_start = start; new_isrr->isrr_len = len; new_isrr->isrr_shared = shared; return new_isrr; } /* * Insert a record for a new region in the list. The new region may be * overlaping or be included in an existing region. */ void irix_isrr_insert(start, len, shared, p) vaddr_t start; vsize_t len; int shared; struct proc *p; { struct irix_emuldata *ied = (struct irix_emuldata *)p->p_emuldata; struct irix_shared_regions_rec *isrr; struct irix_shared_regions_rec *new_isrr; vaddr_t end, cur_start, cur_end; int cur_shared; start = trunc_page(start); len = round_page(len); end = start + len; new_isrr = irix_isrr_create(start, len, shared); /* Do we need to insert the new region at the begining of the list? */ if (LIST_EMPTY(&ied->ied_shared_regions) || LIST_FIRST(&ied->ied_shared_regions)->isrr_start > start) { LIST_INSERT_HEAD(&ied->ied_shared_regions, new_isrr, isrr_list); } else { /* Find the place where to insert it */ LIST_FOREACH(isrr, &ied->ied_shared_regions, isrr_list) { cur_start = isrr->isrr_start; cur_end = isrr->isrr_start + isrr->isrr_len; cur_shared = isrr->isrr_shared; /* * if there is no intersection between inserted * and current region: skip to next region */ if (cur_end <= start) continue; /* * if new region is included into the current * region. Right-crop the current region, * insert a new one, and insert a new region * for the end of the split region */ if (cur_end > end && cur_start < start) { isrr->isrr_len = start - isrr->isrr_start; LIST_INSERT_AFTER(isrr, new_isrr, isrr_list); isrr = new_isrr; new_isrr = irix_isrr_create(end, cur_end - end, cur_shared); LIST_INSERT_AFTER(isrr, new_isrr, isrr_list); /* Nothing more to do, exit now */ #ifdef DEBUG_IRIX irix_isrr_debug(p); #endif return; } /* * if inserted block overlap some part * of current region: right-crop current region * and insert the new region */ if (start < cur_end) { isrr->isrr_len = start - cur_start; LIST_INSERT_AFTER(isrr, new_isrr, isrr_list); /* exit the FOREACH loop */ break; } } } /* * At this point, we inserted the new region (new_isrr) but * it may be overlaping with next regions, so we need to clean * this up and remove or crop next regions */ LIST_FOREACH(isrr, &ied->ied_shared_regions, isrr_list) { cur_start = isrr->isrr_start; cur_end = isrr->isrr_start + isrr->isrr_len; /* skip until we get beyond new_isrr */ if (cur_start <= start) continue; if (end >= cur_end) { /* overlap */ LIST_REMOVE(isrr, isrr_list); free(isrr, M_EMULDATA); /* isrr is now invalid */ isrr = new_isrr; continue; } /* * Here end < cur_end, therefore we need to * right-crop the current region */ isrr->isrr_start = end; isrr->isrr_len = cur_end - end; break; } #ifdef DEBUG_IRIX irix_isrr_debug(p); #endif return; } #ifdef DEBUG_IRIX static void irix_isrr_debug(p) struct proc *p; { struct irix_emuldata *ied = (struct irix_emuldata *)p->p_emuldata; struct irix_shared_regions_rec *isrr; printf("isrr for pid %d\n", p->p_pid); LIST_FOREACH(isrr, &ied->ied_shared_regions, isrr_list) { printf(" addr = %p, len = %p, shared = %d\n", (void *)isrr->isrr_start, (void *)isrr->isrr_len, isrr->isrr_shared); } } #endif /* DEBUG_IRIX */
__label__pos
0.991494
Source code for eradiate.scenes.bsdfs._black from __future__ import annotations import attrs from ._core import BSDF from ...kernel import UpdateParameter [docs]@attrs.define(eq=False, slots=False) class BlackBSDF(BSDF): """ Black BSDF [``black``]. This BSDF models a perfectly absorbing surface. It is equivalent to a :class:`.DiffuseBSDF` with zero reflectance. """ @property def template(self) -> dict: # Inherit docstring result = { "type": "diffuse", "reflectance": {"type": "uniform", "value": 0.0}, } if self.id is not None: result["id"] = self.id return result @property def params(self) -> dict[str, UpdateParameter]: # Inherit docstring return {}
__label__pos
0.985944
Rating: [](ctf=hack.lu-2015) [](type=coding) [](tags=random) # GuessTheNumber (coding-150) ``` The teacher of your programming class gave you a tiny little task: just write a guess-my-number script that beats his script. He also gave you some hard facts: he uses some LCG with standard glibc LCG parameters the LCG is seeded with server time using number format YmdHMS (python strftime syntax) numbers are from 0 up to (including) 99 numbers should be sent as ascii string You can find the service on school.fluxfingers.net:1523 ``` ```bash $ nc school.fluxfingers.net 1523 Welcome to the awesome guess-my-number game! It's 22.10.2015 today and we have 13:01:49 on the server right now. Today's goal is easy: just guess my 100 numbers on the first try within at least 30 seconds from now on. Ain't difficult, right? Now, try the first one: 99 Wrong! You lost the game. The right answer would have been '61'. Quitting. ``` So here we have a random number generator and we have to guess 100 numbers on the server. Searching about LCG we land on [this](http://rosettacode.org/wiki/Linear_congruential_generator#C) page. The seed is given by the server in the message.It is the standard rand_r function in glibc with same parameters. Also all numbers are less than 100 so we need a mod 100 for each number. ```c int main(int argc, char **argv) { rseed = atoi(argv[1]); int i; for (i = 0; i <100; i++) printf("%d ", rand()%100); return 0; } ``` By a little hit and trial we find that the server generates 100 numbers using the above algorithm and checks in reverse order. So [here](guess.py) is a quick dirty client to handle the same. ```python from pwn import * import subprocess s=remote('school.fluxfingers.net',1523) a=s.recv(1000,timeout=1) print a all=re.search('have .* on',a).group()[5:13].split(':') seed=20151021000000 seed+=int(all[0])*10000 seed+=int(all[1])*100 seed+=int(all[2]) res=subprocess.check_output(["./lol", str(seed)]).split()[::-1] print res,"len",len(res) for i in res: print i, s.send(str(i)) a=s.recv(100) print a ``` Flag >flag{don't_use_LCGs_for_any_guessing_competition} Original writeup (https://github.com/ByteBandits/writeups/tree/master/hack.lu-ctf-2015/coding/GuessTheNumber/sudhackar).
__label__pos
0.988425