https://media.notthebee.com/articles/674600c54fba7674600c54fba8.jpg
Axios’ CEO said this week that regular citizens can’t be journalists because they don’t have fancy credentials and demanded that we take corporate news seriously again.
Not the Bee
https://gizmodo.com/app/uploads/2024/11/philips-sonicare-5300-protectiveclean.jpg
Ready to upgrade your tooth-brushing game beyond that basic manual brush? It’s time to get something that can actually clean your teeth effectively. The Philips Sonicare ProtectiveClean 5300 is like having a tiny dental hygienist living in your bathroom. It buzzes at 62,000 movements per minute and tells you when you’re brushing too hard, which is something most of us are guilty of. But most of all, it gives you that deep clean that you’re craving, and you can stave off dental issues and cavities by using it.
And wouldn’t you know it – this Black Friday, Amazon’s knocked this down to $60, which is a solid $50 off its usual $110 price tag. At 45% off, it’s hitting that sweet spot where investing in your dental health doesn’t require taking out a second mortgage. Plus, it comes with three brush heads, which means you’re set for most of the year.
This electric toothbrush has a little something for everyone, which is part of what makes it such a great option for making sure your teeth are sparkling clean. Got sensitive gums? There’s a mode for that. Fighting coffee stains? There’s a whitening mode. All you really need to do is supply toothpaste and work the brush along all the important spots. Then your reward for being diligent is a whole mouth full of sparkling, glittery teeth with a superior clean that feels much more complete than what you’d get from a basic toothbrush.
Just want a basic clean? Yep, there’s a mode for that too. The pressure sensor is particularly clever – it actually tells you when you’re going too hard, which helps protect your gums from that overzealous morning brushing. And unlike some electric toothbrushes that die after a few days, this one keeps going for two weeks on a single charge, perfect for trips where you might forget the charger.
Anyone who’s been told they brush too hard at the dentist can use this brush, as can anyone who wants to upgrade their oral care without spending hundreds. The included travel case is actually decent too, not just a flimsy plastic afterthought.
At $59.96, it’s a pretty solid investment in keeping your dentist happy at your next checkup. And let’s be honest, anything that makes dentist visits less stressful is worth considering. Just don’t get too attached to the gentle buzzing sound, because you might start missing it when you’re away from home.
Gizmodo
I recently became aware of WeSQL, a MySQL-compatible database that separates compute and storage, using S3 as the storage layer. The product uses a columnar format by default, which is significantly more space-efficient than InnoDB.
WeSQL introduces a new storage engine called SmartEngine that uses an LSM-tree-based structure well suited to a storage bucket implementation, and the documentation describes a Raft replication implementation to combat latency concerns. There is a lot more to review, including the serverless architecture and WeScale, a database proxy and resource manager.
It was very easy to take it for an initial spin using a Docker container and an AWS S3 bucket. I would really like to try Cloudflare R2, which implements the S3 API.
Under the covers there are over 180 new system variables, comprising 83 for smartengine, 57 for raft, 22 for objectstore, and more. This implies a lot of tunable options, and a lot of complexity to optimize for a variety of workloads using the 79 new status variables.
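If you want to enumerate these knobs on a running instance yourself, standard MySQL syntax works; this is a minimal sketch that assumes only the variable prefixes listed at the end of this post:

-- List the tunable options by prefix.
SHOW GLOBAL VARIABLES LIKE 'smartengine%';
SHOW GLOBAL VARIABLES LIKE 'raft_replication%';
SHOW GLOBAL VARIABLES LIKE 'objectstore%';

-- List the new status counters.
SHOW GLOBAL STATUS LIKE 'Smartengine%';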
I was able to launch a demo and confirm:
mysql> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 8.0.35    |
+-----------+
1 row in set (0.01 sec)

mysql> SELECT @@wesql_version;
+-----------------+
| @@wesql_version |
+-----------------+
| 0.1.0           |
+-----------------+
1 row in set (0.00 sec)
One of my early tests showed that it does not support FOREIGN KEYS, which is not a major concern.
ERROR 1235 (42000) at line 10: SE currently doesn't support foreign key constraints
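For anyone who wants to reproduce this, any ordinary referential constraint triggers it; the following minimal sketch (hypothetical table names) is the sort of DDL that raises the error above:

CREATE TABLE parent
( parent_id  INT PRIMARY KEY );

CREATE TABLE child
( child_id   INT PRIMARY KEY
, parent_id  INT
, FOREIGN KEY (parent_id) REFERENCES parent (parent_id));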
I did have some subsequent issues with the current docs version (8.0.35-0.1.0_beta1.37), so I reverted to a prior version from the docs earlier this week (8.0.35-0.1.0_beta1.gedaf338.36). Given it’s a very new product, I am sure there is a lot of ongoing development.
This is just a quick introduction, but it’s definitely a different architecture in the RDBMS landscape for MySQL compatibility. I hope to run some more tests using the provided sysbench use cases and my own workloads to delve under the covers more.
branch_objectstore_id clone_autotune_concurrency clone_block_ddl clone_buffer_size clone_ddl_timeout clone_delay_after_data_drop clone_donor_timeout_after_network_failure clone_enable_compression clone_max_concurrency clone_max_data_bandwidth clone_max_network_bandwidth clone_ssl_ca clone_ssl_cert clone_ssl_key clone_valid_donor_list initialize_branch_objectstore_id initialize_from_objectstore initialize_objectstore_bucket initialize_objectstore_endpoint initialize_objectstore_provider initialize_objectstore_region initialize_objectstore_use_https initialize_repo_objectstore_id initialize_smartengine_objectstore_data objectstore_bucket objectstore_endpoint objectstore_mtr_test_bucket_dir objectstore_provider objectstore_region objectstore_use_https raft_replication_allow_no_valid_entry raft_replication_appliedindex_force_delay raft_replication_archive_log_bin_index raft_replication_archive_recovery raft_replication_archive_recovery_stop_datetime raft_replication_auto_leader_transfer raft_replication_auto_leader_transfer_check_seconds raft_replication_auto_reset_match_index raft_replication_check_commit_index_interval raft_replication_checksum raft_replication_cluseter_info_on_objectstore raft_replication_cluster_id raft_replication_cluster_info raft_replication_configure_change_timeout raft_replication_current_term raft_replication_disable_election raft_replication_disable_fifo_cache raft_replication_dynamic_easyindex raft_replication_election_timeout raft_replication_flow_control raft_replication_force_change_meta raft_replication_force_recover_index raft_replication_force_reset_meta raft_replication_force_single_mode raft_replication_force_sync_epoch_diff raft_replication_heartbeat_thread_cnt raft_replication_io_thread_cnt raft_replication_large_batch_ratio raft_replication_large_event_split_size raft_replication_large_trx raft_replication_learner_heartbeat raft_replication_learner_node raft_replication_learner_pipelining raft_replication_learner_timeout raft_replication_log_cache_size raft_replication_log_level raft_replication_log_type_node raft_replication_max_delay_index raft_replication_max_log_size raft_replication_max_packet_size raft_replication_min_delay_index raft_replication_mts_recover_use_index raft_replication_new_follower_threshold raft_replication_optimistic_heartbeat raft_replication_pipelining_timeout raft_replication_prefetch_cache_size raft_replication_prefetch_wakeup_ratio raft_replication_prefetch_window_size raft_replication_purged_gtid raft_replication_recover_backup raft_replication_recover_new_cluster raft_replication_reset_prefetch_cache raft_replication_send_timeout raft_replication_start_index raft_replication_sync_follower_meta_interva raft_replication_with_cache_log raft_replication_worker_thread_cnt recovery_snapshot_from_objectstore recovery_snapshot_only recovery_snapshot_timestamp recovery_snapshot_tmpdir repo_objectstore_id server_id_on_objectstore serverless smartengine_auto_shrink_enabled smartengine_auto_shrink_schedule_interval smartengine_batch_group_max_group_size smartengine_batch_group_max_leader_wait_time_us smartengine_batch_group_slot_array_size smartengine_block_cache_size smartengine_block_size smartengine_bottommost_level smartengine_bulk_load_size smartengine_compact smartengine_compaction_delete_percent smartengine_compaction_task_extents_limit smartengine_compaction_threads smartengine_compression_options smartengine_compression_per_level smartengine_concurrent_writable_file_buffer_num 
smartengine_concurrent_writable_file_buffer_switch_limit smartengine_concurrent_writable_file_single_buffer_size smartengine_data_dir smartengine_deadlock_detect smartengine_disable_auto_compactions smartengine_disable_instant_ddl smartengine_disable_online_ddl smartengine_disable_parallel_ddl smartengine_dump_memtable_limit_size smartengine_enable_2pc smartengine_estimate_cost_depth smartengine_flush_delete_percent smartengine_flush_delete_percent_trigger smartengine_flush_delete_record_trigger smartengine_flush_log_at_trx_commit smartengine_flush_memtable smartengine_flush_threads smartengine_hotbackup smartengine_idle_tasks_schedule_time smartengine_level0_file_num_compaction_trigger smartengine_level0_layer_num_compaction_trigger smartengine_level1_extents_major_compaction_trigger smartengine_level2_usage_percent smartengine_level_compaction_dynamic_level_bytes smartengine_lock_scanned_rows smartengine_lock_wait_timeout smartengine_master_thread_compaction_enabled smartengine_master_thread_monitor_interval_ms smartengine_max_background_dumps smartengine_max_free_extent_percent smartengine_max_row_locks smartengine_max_shrink_extent_count smartengine_max_write_buffer_number_to_maintain smartengine_memtable_size smartengine_min_write_buffer_number_to_merge smartengine_mutex_backtrace_threshold_ns smartengine_parallel_flush_log smartengine_parallel_read_threads smartengine_parallel_recovery_thread_num smartengine_parallel_wal_recovery smartengine_pause_background_work smartengine_persistent_cache_dir smartengine_persistent_cache_mode smartengine_persistent_cache_size smartengine_purge_invalid_subtable_bg smartengine_query_trace_print_slow smartengine_query_trace_sum smartengine_query_trace_threshold_time smartengine_rate_limiter_bytes_per_sec smartengine_reset_pending_shrink smartengine_row_cache_size smartengine_scan_add_blocks_limit smartengine_shrink_allocate_interval smartengine_shrink_table_space smartengine_sort_buffer_size smartengine_stats_dump_period_sec smartengine_strict_collation_check smartengine_strict_collation_exceptions smartengine_table_cache_numshardbits smartengine_table_cache_size smartengine_total_max_shrink_extent_count smartengine_total_memtable_size smartengine_total_wal_size smartengine_unsafe_for_binlog smartengine_wal_dir smartengine_wal_recovery_mode smartengine_write_disable_wal snapshot_archive snapshot_archive_dir snapshot_archive_expire_auto_purge snapshot_archive_expire_seconds snapshot_archive_innodb_tar_mode snapshot_archive_on_objectstore snapshot_archive_period snapshot_archive_smartengine_backup_checkpoint snapshot_archive_smartengine_tar_mode table_on_objectstore wesql_version
Com_show_consensuslogs Com_raft_replication_start Com_raft_replication_stop Com_native_admin_proc Com_native_trans_proc Com_show_consensuslog_events Smartengine_block_cache_miss Smartengine_block_cache_hit Smartengine_block_cache_add Smartengine_block_cache_index_miss Smartengine_block_cache_index_hit Smartengine_block_cache_filter_miss Smartengine_block_cache_filter_hit Smartengine_block_cache_data_miss Smartengine_block_cache_data_hit Smartengine_row_cache_add Smartengine_row_cache_hit Smartengine_row_cache_miss Smartengine_memtable_hit Smartengine_memtable_miss Smartengine_number_keys_written Smartengine_number_keys_read Smartengine_number_keys_updated Smartengine_bytes_written Smartengine_bytes_read Smartengine_block_cachecompressed_miss Smartengine_block_cachecompressed_hit Smartengine_wal_synced Smartengine_wal_bytes Smartengine_write_self Smartengine_write_other Smartengine_write_wal Smartengine_number_superversion_acquires Smartengine_number_superversion_releases Smartengine_number_superversion_cleanups Smartengine_number_block_not_compressed Smartengine_snapshot_conflict_errors Smartengine_wal_group_syncs Smartengine_rows_deleted Smartengine_rows_inserted Smartengine_rows_updated Smartengine_rows_read Smartengine_system_rows_deleted Smartengine_system_rows_inserted Smartengine_system_rows_updated Smartengine_system_rows_read Smartengine_max_level0_layers Smartengine_max_imm_numbers Smartengine_max_level0_fragmentation_rate Smartengine_max_level1_fragmentation_rate Smartengine_max_level2_fragmentation_rate Smartengine_max_level0_delete_percent Smartengine_max_level1_delete_percent Smartengine_max_level2_delete_percent Smartengine_all_flush_megabytes Smartengine_all_compaction_megabytes Smartengine_top1_subtable_size Smartengine_top2_subtable_size Smartengine_top3_subtable_size Smartengine_top1_mod_mem_info Smartengine_top2_mod_mem_info Smartengine_top3_mod_mem_info Smartengine_global_external_fragmentation_rate Smartengine_write_transaction_count Smartengine_pipeline_group_count Smartengine_pipeline_group_wait_timeout_count Smartengine_pipeline_copy_log_size Smartengine_pipeline_copy_log_count Smartengine_pipeline_flush_log_size Smartengine_pipeline_flush_log_count Smartengine_pipeline_flush_log_sync_count Smartengine_pipeline_flush_log_not_sync_count
Planet MySQL
https://www.ammoland.com/wp-content/uploads/2016/11/Gun-Rights-Court-500×281.jpg
At the 2024 Federalist Society National Lawyers Convention, Professor Mark W. Smith delivered a compelling speech on the Second Amendment, emphasizing the Supreme Court’s decision in New York State Rifle & Pistol Association v. Bruen and its profound impact on gun rights in America.
Understanding the “Unqualified Command” of the Second Amendment
Professor Smith began by highlighting the Supreme Court’s characterization of the Second Amendment as an “unqualified command.” This designation underscores that the right to keep and bear arms is fundamental and not subject to arbitrary restrictions. He stressed that any ambiguity in historical context should default to the clear text of the Second Amendment, ensuring that the government bears the burden of justifying any limitations on this right.
The Role of Historical Analogues in Gun Control Legislation
A significant portion of the speech focused on how courts should evaluate historical precedents when assessing modern gun control laws. Professor Smith outlined key criteria for determining suitable historical analogues.
By adhering to these guidelines, courts can ensure that modern interpretations of the Second Amendment remain faithful to its original intent.
Applying the “Why” & “How” Analysis
Professor Smith introduced the “why” and “how” framework to assess the relevance of historical laws to contemporary issues: why a historical regulation burdened the right to keep and bear arms, and how that burden was imposed.
He illustrated this with the Supreme Court’s decision in District of Columbia v. Heller, where the Court found that historical bans on “dangerous and unusual” weapons did not justify modern handgun bans, as handguns are commonly used for lawful purposes today.
The Impact of Bruen & Rahimi on Second Amendment Jurisprudence
Discussing the Bruen decision, Professor Smith noted that the Court rejected New York’s restrictive “may issue” permitting system, finding no historical precedent for such limitations on public carry. He also addressed the Rahimi case, emphasizing that it represents a routine application of the principles established in Heller and Bruen, reinforcing the necessity for courts to adhere to historical context when evaluating gun control measures.
Guarding Against Overgeneralization
To prevent the erosion of Second Amendment rights through overly broad interpretations, Professor Smith proposed several safeguards.
By implementing these measures, courts can maintain a faithful interpretation of the Second Amendment, safeguarding it from dilution through generalized reasoning.
Let’s Get it Done!
Professor Smith’s address serves as a vital reminder of the importance of adhering to the original understanding of the Second Amendment. His insights provide a robust framework for evaluating modern gun control laws, ensuring that the fundamental right to keep and bear arms remains protected for future generations.
How Has the Bruen Decision Impacted the 2nd Amendment Litigation Landscape? ~ DEEP DIVE
AmmoLand Shooting Sports News
https://gizmodo.com/app/uploads/2024/11/how-to-train-your-dragon-live-action.jpg
Universal Pictures has finally given us our first look at its live-action How to Train Your Dragon film.
How to Train Your Dragon follows a tribe of Vikings who display their worth by hunting mighty dragons. That is, save for the chieftain’s son, Hiccup (Mason Thames of The Black Phone fame), who befriends a jet-black dragon named Toothless and trains it to become man’s best friend—all while keeping their training sessions a secret from his aforementioned tribe of dragon-killing Vikings.
The teaser trailer gets out of the way everything that detractors would break their bingo cards out for when it comes to the song and dance of yet another live-action Hollywood project. First off, it shows us the sweeping vistas of a mountainous countryside, impressive ship craftsmanship, and original film actor Gerard Butler in a Viking get-up as the chieftain, Stoick. The trailer also gives us a glimpse of Hiccup and Toothless meeting for the first time… all the while replicating the original film’s big referential moment of Toothless accepting Hiccup’s head pats.
While we couched today’s trailer as our first look, it would behoove us to mention that How to Train Your Dragon’s teaser trailer leaked ahead of its official release in non-U.S. territories. In a glass-half-full look at the whole situation, though, folks weren’t complaining about how bad the trailer looked, as they did with the first official still image of the film, whose desaturated colors and overall lack of whimsy drew criticism.
As the trailer showcases, the movie’s CG animation for Toothless looks pretty spectacular, and the set pieces for the Viking village are also comparable to those of the animated films. Although the brief teaser nails the important component of making Toothless look good, time will tell whether the film will measure up to its originator’s sense of heart and humor.
How to Train Your Dragon was originally released in 2010. Alongside Butler, the film starred Jay Baruchel as Hiccup and America Ferrera as his love interest, Astrid. It became so popular that it garnered two sequel films, five short films, and two TV series. The live-action film was written and directed by Dean DeBlois, who co-wrote and co-directed the animated original, and wrote and directed its two sequels. Additional cast members include Nico Parker (The Last of Us) as Astrid, Nick Frost as Gobber, and Julian Dennison (Deadpool 2) as Fishlegs.
How To Train Your Dragon is slated for release on June 13, 2025.
Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.
Gizmodo
https://cdn-fastly.thefirearmblog.com/media/2024/11/19/03131/canto-arms-dl-44-hero-upgraded-blaster.jpg?size=720×845&nocrop=1
We live in a weird world, a world where someone will take a perfectly fine AR-15 action and turn it into a wannabe Star Wars blaster. Yet, if you can accept that madness, Canto Arms wants to talk to you. They have just announced an upgrade to their Star Wars-esque pistol, presenting to the well-heeled buyer the new DL-44 Hero.
At this point, some TFB readers are wondering what on earth the weirdo pistol pictured above could possibly be. Others might remember back in 2021 when we showed you the first Canto Arms DL44 pistol.
Back then, we told you:
In recent years, Star Wars fans have begun to ditch their dreams of building a Mauser C96-based replica of Han Solo’s DL-44 blaster due to soaring costs, and have a new hope in the much more affordable AR-15 DL-44 clones. Enter Canto Arms, which at present has a dedicated line of AR-15 DL-44 clones for sale, both as complete .22LR pistols and as parts kits, plus many other accessory options to complete the DL-44 build you never thought possible. Purists will no doubt scoff at a non-Mauser blaster clone, but with Canto Arms’ Nocturne 22 Heavy Blaster, you can have a close representation of Han Solo’s blaster for less than half of what a C96 conversion would cost, and you get to save the remaining Mausers at the same time.
So if you want to concealed carry like Luke Skywalker but on a Tusken Raider budget, the DL-44 is expensive, but a lot less expensive than building a replica Star Wars blaster from a C96 (and don’t get us started on the cost of building an E-11 Stormtrooper Blaster from a Sterling submachine gun…).
It looks as if the DL-44 Hero is basically a gussied-up version of the original .22LR pistol.
In the film industry, a hero prop weapon is a meticulously crafted piece designed to dazzle the audience with up close detail. Inspired by this concept, we proudly present the DL-44 Hero – a stunningly detailed blaster that captures the very essence of this legendary weapon. From the finely machined detail of the billet upper and lower receivers to the authentic Aurebesh engravings, this blaster is sure to turn heads at the range.
So, Star Wars-universe engravings on a custom upper and lower (the lower is from Strike Industries). There’s also a round knob on the receiver that wasn’t on the original DL-44, making the Hero look more like the original C96-based blaster used in the Star Wars movies. There’s an improved mag release, machined from billet, and trigger upgrades from Longitudinal Grind. A BoreBuddy quiet bolt group is available as an option.
Canto Arms throws in a nylon X-Form mag with the purchase, and you can buy more if you want matching reloads. Canto Arms also includes the scope, rings and 45-degree mounts, but you must install the scope yourself.
Asking price? $1,499.95. See more details on Canto Arms’ website here.
Photos: Canto Arms
The Firearm Blog
This was principally written for my SQL students, but I thought it might be useful to others. SQL calculations are performed row-by-row in the SELECT-list. In its simplest form, without even touching a table, you can add two literal numbers like this:
SELECT 2 + 2 AS result;
It will display the result of the addition in the column alias result as a one-row derived table:
+--------+
| result |
+--------+
|      4 |
+--------+
1 row in set (0.00 sec)
Unfortunately, the use of literal values as shown above doesn’t really let you see how the calculation is made row-by-row because it returns only one row. You can rewrite the two literal values into one variable by using a Common Table Expression (CTE). The CTE creates a struct tuple with only one x element. Another way to describe what the CTE does: it creates a derived table named struct with a single x column in the SELECT-list.
The CTE runs first, then a subsequent query may use the CTE’s derived table result. Below is a query that uses the value in the struct.x derived table (or, put another way, references the struct tuple’s x element) twice while assigning the value to a new column alias labelled result. The FROM clause places the struct tuple in the query’s namespace, which lets you reference it in the SELECT-list.
WITH struct AS
(SELECT 2 AS x)
SELECT struct.x + struct.x AS result
FROM struct;
Like the literal example, it will display the result of the addition to the column alias result as a derived table of one row:
+--------+
| result |
+--------+
|      4 |
+--------+
1 row in set (0.00 sec)
Having laid a basis for a simple calculation in one row, let’s expand the example and demonstrate how to perform row-by-row calculations. The example requires introducing some new concepts. One uses the UNION ALL set operator to fabricate a CTE derived table with three rows. Another uses a comma within the WITH clause to create two derived tables, or CTEs. The last uses a CROSS JOIN to add the single-row CTE’s y column to each of the rows returned by the multiple-row CTE.
The CROSS JOIN is a Cartesian product, which multiplies the rows in one table against the rows in another table while adding the columns from each table. That means fabricating a table of one column and one row lets you put a variable into all the rows of another table or set of tables combined through an equijoin or non-equijoin operation.
The query below takes a struct1 derived table of one column and three rows and a struct2 derived table of one column and one row, then uses a CROSS JOIN to create a new derived table, which would be a table of two columns and three rows. The Cartesian product only provides the two columns that we will multiply to create new data.
The SELECT-list lets us fabricate a new column where we multiply the values of column x and column y to create a set of new results in column result.
WITH struct1 AS
(SELECT 1 AS x UNION ALL
SELECT 2 AS x UNION ALL
SELECT 3 AS x)
, struct2 AS
(SELECT 10 AS y)
SELECT struct1.x AS x
, struct2.y AS y
, struct1.x * struct2.y AS result
FROM struct1 CROSS JOIN struct2;
The query returns the following results, which show the values used to calculate the result and the result:
+---+----+--------+
| x | y  | result |
+---+----+--------+
| 1 | 10 |     10 |
| 2 | 10 |     20 |
| 3 | 10 |     30 |
+---+----+--------+
3 rows in set (0.00 sec)
As a rule, the columns x and y would not be displayed in the final derived table; you would only see the result column’s values.
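For completeness, here is the same query trimmed to return only the calculated column:

WITH struct1 AS
 (SELECT 1 AS x UNION ALL
  SELECT 2 AS x UNION ALL
  SELECT 3 AS x)
, struct2 AS
 (SELECT 10 AS y)
SELECT struct1.x * struct2.y AS result
FROM struct1 CROSS JOIN struct2;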
Let’s use an example from Alan Beaulieu’s Learning SQL book with a twist. Rather than manually fabricating the ordinal numbers twice, let’s use the scope reference of a subsequent CTE to reference an earlier CTE. That creates two ten-row tables of one column each, or a Cartesian product of a 100-row table with two columns. Then the SELECT-list lets us fabricate a single new column, which adds 1 to the numbers 0 through 99 to give us the numbers 1 to 100.
WITH ones AS
(SELECT 0 AS x UNION ALL
SELECT 1 AS x UNION ALL
SELECT 2 AS x UNION ALL
SELECT 3 AS x UNION ALL
SELECT 4 AS x UNION ALL
SELECT 5 AS x UNION ALL
SELECT 6 AS x UNION ALL
SELECT 7 AS x UNION ALL
SELECT 8 AS x UNION ALL
SELECT 9 AS x )
, tens AS
(SELECT x * 10 AS x FROM ones)
SELECT ones.x + tens.x + 1 AS ordinal
FROM ones CROSS JOIN tens
ORDER BY ordinal;
It returns the following result set:
+---------+
| ordinal |
+---------+
|       1 |
|       2 |
|       3 |
|       4 |
|       5 |
|       6 |
|       7 |
|       8 |
|       9 |
|      10 |
|      11 |
...
|      98 |
|      99 |
|     100 |
+---------+
100 rows in set (0.00 sec)
Moving on to more complex math, let’s create a numerals table with the result from our prior query. It will enable calculating the factors of exponents. The easiest way to create the table is shown below (the only caveat is that it will build the column with a bigint rather than an int data type).
CREATE TABLE numerals AS
WITH ones AS
(SELECT 0 AS x UNION ALL
SELECT 1 AS x UNION ALL
SELECT 2 AS x UNION ALL
SELECT 3 AS x UNION ALL
SELECT 4 AS x UNION ALL
SELECT 5 AS x UNION ALL
SELECT 6 AS x UNION ALL
SELECT 7 AS x UNION ALL
SELECT 8 AS x UNION ALL
SELECT 9 AS x )
, tens AS
(SELECT x * 10 AS x FROM ones)
SELECT ones.x + tens.x + 1 AS ordinal
FROM ones CROSS JOIN tens
ORDER BY ordinal;
After running the foregoing script in MySQL, the numerals table can be described as:
+---------+--------+------+-----+---------+-------+
| Field   | Type   | Null | Key | Default | Extra |
+---------+--------+------+-----+---------+-------+
| ordinal | bigint | NO   |     | 0       |       |
+---------+--------+------+-----+---------+-------+
1 row in set (0.00 sec)
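If you would rather have an int column, one simple fix is to narrow the column after the build with standard MySQL DDL:

-- Narrow the fabricated bigint column to an int.
ALTER TABLE numerals MODIFY ordinal INT NOT NULL DEFAULT 0;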
The next query accepts a substitution variable into the WITH clause, which means an external program will call it. (Although you could use a session-level variable, I would discourage it.) This query returns the factors for any given exponent:
WITH magic AS
(SELECT %s AS vkey)
SELECT CONCAT(magic.vkey,'^',LOG(magic.vkey,n.ordinal)) AS powers
, n.ordinal AS result
FROM numerals n CROSS JOIN magic
WHERE MOD(n.ordinal,magic.vkey) = 0
AND LOG(magic.vkey,n.ordinal) REGEXP '^[0-9]*$'
OR n.ordinal = 1
ORDER BY n.ordinal;
FYI, the regular expression is used to guarantee only integer return values, and the n.ordinal = 1 predicate covers the identity property of an exponent raised to the zero power.
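You can check the filter’s behavior directly by testing a couple of LOG() results against the pattern; in this sketch the first expression is an exact power and matches, while the second returns a fractional logarithm and does not:

SELECT LOG(2,64) REGEXP '^[0-9]*$' AS exact_power
,      LOG(2,48) REGEXP '^[0-9]*$' AS fractional_power;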
Assuming you created the numerals table and put the foregoing query in a query.sql file (because I was too lazy to write the full parameter handling), if you run it in the same directory as this Python program, it’ll take any valid integer as a value.
#!/usr/bin/python
# ------------------------------------------------------------
#  Name:  power.py
#  Date:  19 Oct 2024
# ------------------------------------------------------------
#  Purpose:
#  -------
#    The program shows you how to provide a single argument
#    to a query and print the formatted output.
#
#    You can call the program:
#
#    ./power.py 3
#
# ------------------------------------------------------------

# Import libraries.
import sys
import mysql.connector
from mysql.connector import errorcode

# ============================================================
#  Define a local padding function.
# ============================================================
def pad(valueIn):
  # Define local variable.
  padding = ''
  # Convert single digit numbers to strings.
  if isinstance(valueIn,int) and len(str(valueIn)) == 1:
    padding = ' '
  # Return padding space.
  return padding
# ============================================================
#  End local function definition.
# ============================================================

# Define any local variables.
powerIn = 2
query = ""
fileName = 'query.sql'

# ============================================================
#  Capture argument list minus the program name.
# ============================================================
arguments = sys.argv[1:]

# ============================================================
#  If one or more arguments exist and the first one is a
#  string that can cast to an int, convert it to an int,
#  assign it to a variable, and ignore any other arguments
#  in the list.
# ============================================================
if len(arguments) >= 1 and arguments[0].isdigit():
  powerIn = int(arguments[0])

# ============================================================
#  Use a try-catch block to read and parse a query from a
#  file found in the same local directory as the Python
#  program.
# ============================================================
try:
  file = open(fileName,'r')
  query = file.read().replace('\n',' ').replace(';','')
  file.close()
except IOError:
  print("Could not read file:", fileName)

# ============================================================
#  Attempt connection in a try-catch block.
# ============================================================
# --------------------------------------------------------
#  Open connection, bind variable in query and format
#  query output before closing the cursor.
# --------------------------------------------------------
try:
  # Open connection.
  cnx = mysql.connector.connect(user='student', password='student',
                                host='127.0.0.1',
                                database='studentdb')

  # Create cursor.
  cursor = cnx.cursor()

  # Execute cursor, and coerce string to tuple.
  cursor.execute(query, (powerIn,))

  # Display the rows returned by the query.
  for (powers, result) in cursor:
    print((" {} is: {}").format(powers, pad(result) + str(result)))

  # Close cursor.
  cursor.close()
# --------------------------------------------------------
#  Handle MySQL exception.
# --------------------------------------------------------
except mysql.connector.Error as e:
  if e.errno == errorcode.ER_ACCESS_DENIED_ERROR:
    print("Something is wrong with your user name or password")
  elif e.errno == errorcode.ER_BAD_DB_ERROR:
    print("Database does not exist")
  else:
    print("Error code:", e.errno)          # error number
    print("SQLSTATE value:", e.sqlstate)   # SQLSTATE value
    print("Error message:", e.msg)         # error message
# --------------------------------------------------------
#  Close the connection when the try block completes.
# --------------------------------------------------------
else:
  cnx.close()
If you forget to call it with a numeric parameter, it uses 2 as the default. You would call it as follows from a Linux prompt in the local directory:
./power.py
It returns:
2^0 is: 1
2^1 is: 2
2^2 is: 4
2^3 is: 8
2^4 is: 16
2^5 is: 32
2^6 is: 64
If you call it with a numeric parameter, it uses that value. You would call it as follows from a Linux prompt in the local directory:
./power.py 3
It returns:
3^0 is: 1
3^1 is: 3
3^2 is: 9
3^3 is: 27
3^4 is: 81
As always, I hope the post helps folks sort out how and why things work.
Planet MySQL
Classic first-person shooters Unreal (1998) and Unreal Tournament are now available for free on the Internet Archive, with official OK from publisher Epic Games. An Epic spokesperson confirmed to PC Gamer that users are permitted to "independently link to and play these versions." Players can download the games directly from the Internet Archive and apply patches from Github for modern Windows compatibility, or use simplified installers through oldunreal.com. Both titles run on current hardware despite their age, though users may need to adjust dated default settings like 640×480 resolution and inverted mouse controls.
Slashdot
MySQL system variables configure the server’s operation, and the SET statement is used to change them. The MySQL SET statement has various options for specifying the scope of a change to a system variable. It’s important to understand how these changes are reflected in current sessions (connections), later sessions, and after database server restarts.
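As a quick illustration of the scopes involved (standard MySQL 8.0 syntax; the variables named here are just convenient examples):

-- Affects only the current session (connection).
SET SESSION sort_buffer_size = 262144;

-- Affects sessions that connect after the change; lost on restart.
SET GLOBAL max_connections = 400;

-- Affects later sessions and also survives a server restart.
SET PERSIST max_connections = 400;

Planet MySQL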
https://feeds.feedblitz.com/~/i/907904180/0/theonlinephotographer
I’m generally a cheapskate, but this time I decided to splurge. I solved the M4 Mac Mini’s dearth of USB-A ports by buying a Caldigit TS4. It’s got 18 connections, including two card slots and three audio jacks. Eight effingteen.
Total overkill, but you know what? I’ve been suffering from Apple’s apparently innate tendency to skimp on ports, bays, slots and jacks my whole life. My whole life. The transition away from floppies was a pain (and I lost a lot of data, eventually—who knew that later on they’d come up with drives and software to get data off old 3.5 floppies, after a stretch when you couldn’t?); I missed CD drives when they went away; I missed the SD card slot when it went away; and, almost every Apple computer I’ve ever owned or worked on—18 is my latest best count—lacks adequate ports. Even the very first Macintosh, the original 1984 128k, should have had two floppy drives when it only had one, so you wouldn’t have had to sit there exchanging floppies back and forth, back and forth, while it copied the application to a new disk. I just decided, the hell with economy. I’m a man, and I’m gonna get me enough ports. For once.
Even so, for the M4 Mac Mini, I think I’m going to recommend—or pass along a recommendation for—this:
It’s the Xcellon Pro 10 hub, USB-C 3.2 Gen 2, recommended privately by a reader who knows this stuff. At issue is that it’s capable of passing along 10 Gbps, like the front ports on the M4 Minis. If you use a 5 Gbps hub, you’re choking off half the happy flow of data to whatever you have plugged into it. A little on the expensive side, but that just gives it a better chance of not being super-Chin…er, super-cheap in build.
The Caldigit was a breeze to install but a giant pain to site. I have a sit-stand desk, so the entire welter of wires needs to be free enough to allow the desk to travel up and down. I ended up having to site a separate small table next to the desk to support the power supplies; the one in front is for the TS4. (The other is for the JBOD enclosure on the left.) Neither the cord from the plug to the power supply nor the one from the power supply to the unit were long enough on their own to allow enough travel. Hence the need to site the power supply halfway up.
I hated to do it. Give it a month and the power supplies will be buried.
The Caldigit did neaten up my desk considerably. The white wire from the hard drive housing doesn’t look good, true, but it’s the best I could do. It’s Thunderbolt 2, which, as you probably don’t remember, used cables with the same termination as Mini-DisplayPort. But they had to be Thunderbolt certified. So, as I understand it, and I could be wrong, not all Mini-DisplayPort male-to-male cables will do. A 1-meter Thunderbolt 2 cable would allow me to pass this wire under the desk—emphasis on 1-meter—but those apparently don’t exist to be bought any more. Or at least not ones that I can know for sure are suitable for 20-Gbps data transmission. The best I could do four years ago was this 0.5-meter Apple Thunderbolt 2 cable with Mini DisplayPort male-to-male terminations along with the Apple Thunderbolt-to-Type-C-Thunderbolt 3 adapter. Which is not quite long enough, but oh well. Are you bored yet? I’m certainly getting there.
Unfortunately, there is one casualty of all this. The Caldigit leaves my beautiful new Wise Advanced Co. Taiwanese-made card reader orphaned. (Somehow I knew I wasn’t going to get to keep this, because I like it. Whoops, no self-pity.) Anyway, I tested, and the SD card reader in the Caldigit is as fast as the Wise Advanced. Anyway, if you need a really nice CFexpress Type B card reader in a nice aluminum housing that’s only been used about ten times, that also reads UHS-II SDXC cards (here’s a link), let me know. It’s for sale. You can have it for a nice price.
Mike
The Online Photographer