Many people focus on optimizing queries but neglect optimizing indexes, even when those indexes take up far too much space and need to be resized. So today I will share how to perform index optimization, something I regularly do as a DBA. Before you attempt any of this in production, I recommend you first try it in a test environment. Let's start by checking how often each index is actually read:
mysql> select * from INFORMATION_SCHEMA.INDEX_STATISTICS order by TABLE_NAME;
+--------------+---------------+-------------------+-------------+
| TABLE_SCHEMA | TABLE_NAME    | INDEX_NAME        | ROWS_READ   |
+--------------+---------------+-------------------+-------------+
| db_awesome   | forward_token | PRIMARY           |     5728478 |
| db_awesome   | forward_token | btree__platform   |     7189059 |
| db_awesome   | forward_token | btree__terr_id    |    21232311 |
| db_awesome   | forward_token | btree__pdid       |   523454037 |
| db_awesome   | forward_token | btree__msno__pdid |   937466404 |
| db_awesome   | cr_usr        | uid               |        5269 |
| db_awesome   | cr_usr        | pdid              |   796543123 |
| db_awesome   | cr_usr        | PRIMARY           | 30902935218 |
+--------------+---------------+-------------------+-------------+
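A note for anyone trying to reproduce this: INFORMATION_SCHEMA.INDEX_STATISTICS is not part of stock MySQL. It comes from the user-statistics feature in Percona Server and MariaDB, which is off by default. A hedged sketch, assuming one of those distributions:

```sql
-- Assumption: Percona Server or MariaDB with user-statistics support.
SET GLOBAL userstat = ON;  -- start collecting table/index statistics

-- After the server has seen a representative workload, list the
-- least-read indexes first: these are the removal candidates.
SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME, ROWS_READ
FROM INFORMATION_SCHEMA.INDEX_STATISTICS
WHERE TABLE_SCHEMA = 'db_awesome'
ORDER BY ROWS_READ ASC;
```

The counters reset when the server restarts, so make sure the statistics cover enough uptime before trusting a low ROWS_READ number.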
forward_token table structure
CREATE TABLE `forward_token` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `msno` bigint(20) unsigned NOT NULL,
  `pdid` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL,
  `service` varchar(15) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `terr_id` tinyint(3) unsigned DEFAULT NULL,
  `platform` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL,
  `token` varchar(300) COLLATE utf8mb4_unicode_ci NOT NULL,
  `created_at` int(10) unsigned NOT NULL,
  `updated_at` int(10) unsigned NOT NULL,
  `deleted_at` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `btree__msno__pdid` (`msno`,`pdid`),
  KEY `btree__pdid` (`pdid`),
  KEY `btree__platform` (`platform`),
  KEY `btree__token` (`token`),
  KEY `btree__updated_at` (`updated_at`),
  KEY `btree__service` (`service`),
  KEY `btree__terr_id` (`terr_id`)
) ENGINE=InnoDB AUTO_INCREMENT=57726854 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci ROW_FORMAT=COMPRESSED
cr_usr table structure
CREATE TABLE `cr_usr` (
  `msno` bigint(20) NOT NULL DEFAULT '0',
  `uid` varchar(128) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `pdid` varchar(100) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `start` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `last` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `is_valid` tinyint(4) NOT NULL DEFAULT '1',
  `m_state` int(11) NOT NULL COMMENT 'member state',
  `pcname` varchar(128) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `ip` varchar(16) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `apple_token` varchar(254) COLLATE utf8_unicode_ci NOT NULL,
  `gcm_registration_uid` varchar(255) COLLATE utf8_unicode_ci NOT NULL DEFAULT '' COMMENT 'Google Cloud Message registration_id',
  `token_is_valid` tinyint(3) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`msno`,`pdid`),
  KEY `uid` (`uid`),
  KEY `pdid` (`pdid`),
  KEY `last` (`last`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ROW_FORMAT=COMPRESSED
Above are the current table structures in db_awesome, and below are the table structures I am proposing after index optimization. So here is my plan:
forward_token
CREATE TABLE `forward_token` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `msno` bigint(20) unsigned NOT NULL,
  `pdid` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL,
  `service` varchar(15) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `terr_id` tinyint(3) unsigned DEFAULT NULL,
  `platform` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL,
  `token` varchar(300) COLLATE utf8mb4_unicode_ci NOT NULL,
  `created_at` int(10) unsigned NOT NULL,
  `updated_at` int(10) unsigned NOT NULL,
  `deleted_at` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `btree__msno__pdid` (`msno`,`pdid`),
  KEY `btree__pdid` (`pdid`(8)),
  KEY `btree__token` (`token`(32))
) ENGINE=InnoDB AUTO_INCREMENT=57726854 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci ROW_FORMAT=COMPRESSED
Reasoning:
- Changed index btree__pdid from a full varchar(100) index to an 8-character prefix. The leftmost 8 characters should be selective enough to filter on.
- Removed index btree__terr_id.
- Removed index btree__platform.
- Removed index btree__service.
- Changed index btree__token from a full varchar(300) index to a 32-character prefix, which should be enough.
- Removed index btree__updated_at, as it is rarely used, if at all. Every index adds overhead to inserts and updates, so dropping unused or rarely used indexes is a win.
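One way to apply the forward_token changes is a single ALTER. This is only a sketch: on a table this size I would test it first and run it off-peak, or use an online schema change tool such as pt-online-schema-change.

```sql
ALTER TABLE `forward_token`
  DROP INDEX `btree__terr_id`,
  DROP INDEX `btree__platform`,
  DROP INDEX `btree__service`,
  DROP INDEX `btree__updated_at`,
  -- shrink the surviving secondary indexes to prefixes
  DROP INDEX `btree__pdid`,
  ADD KEY `btree__pdid` (`pdid`(8)),
  DROP INDEX `btree__token`,
  ADD KEY `btree__token` (`token`(32));
```

Batching all the drops and adds into one statement means the table is rebuilt once rather than once per index.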
cr_usr table
CREATE TABLE `cr_usr` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `msno` bigint(20) NOT NULL DEFAULT '0',
  `uid` varchar(128) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `pdid` varchar(100) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `start` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `last` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `is_valid` tinyint(4) NOT NULL DEFAULT '1',
  `m_state` int(11) NOT NULL COMMENT 'member state',
  `pcname` varchar(128) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `ip` varchar(16) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `apple_token` varchar(254) COLLATE utf8_unicode_ci NOT NULL,
  `gcm_registration_uid` varchar(255) COLLATE utf8_unicode_ci NOT NULL DEFAULT '' COMMENT 'Google Cloud Message registration_id',
  `token_is_valid` tinyint(3) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `btree__msno__pdid` (`msno`,`pdid`),
  KEY `pdid` (`pdid`(8)),
  KEY `last` (`last`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ROW_FORMAT=COMPRESSED
Reasoning:
- Instead of using (msno, pdid) as the primary key, I created an auto-incremented integer id.
- Added a unique index on (msno, pdid).
- Removed index uid; it is barely used (and its name did not comply with our btree__ index naming policy anyway).
- Changed index pdid from a full varchar(100) index to an 8-character prefix. The leftmost 8 characters should be selective enough to filter on.
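The cr_usr change is more invasive because it swaps the primary key. A sketch of the ALTER; note that the new AUTO_INCREMENT column and the primary key on it have to land in the same statement, since MySQL requires an AUTO_INCREMENT column to be part of a key:

```sql
ALTER TABLE `cr_usr`
  ADD COLUMN `id` int(10) unsigned NOT NULL AUTO_INCREMENT FIRST,
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (`id`),
  -- the old PK was (msno, pdid), so uniqueness is preserved here
  ADD UNIQUE KEY `btree__msno__pdid` (`msno`, `pdid`),
  DROP INDEX `uid`,
  -- rebuild pdid as an 8-character prefix index
  DROP INDEX `pdid`,
  ADD KEY `pdid` (`pdid`(8));
```

Swapping a primary key rewrites every secondary index as well (InnoDB stores the PK in each of them), so schedule this one with extra care.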
Index Optimization Process
Now, let's start by testing to make sure the proposed index resizing above does not hurt selectivity:
forward_token
SELECT substr(pdid,1,8), COUNT(1) from `forward_token` GROUP BY substr(pdid,1,8) ORDER BY COUNT(1) DESC LIMIT 10;
+------------------+----------+
| substr(pdid,1,8) | COUNT(1) |
+------------------+----------+
| ba486c37         |  8181443 |
| ba48653c         |   694097 |
| 3D34B92D         |   616301 |
| 3730B029         |   319546 |
| 3730B02E         |   235819 |
| 3730BB2A         |   235681 |
| 3730BE2C         |   234621 |
| 3730BB2F         |   233834 |
| 3730BD2E         |   229668 |
| 3730BA2B         |   214094 |
+------------------+----------+
cr_usr
select substr(pdid,1,8), COUNT(1) from `cr_usr` GROUP BY substr(pdid,1,8) ORDER BY COUNT(1) DESC LIMIT 10;
+------------------+----------+
| substr(pdid,1,8) | COUNT(1) |
+------------------+----------+
| ba486c37         | 25151354 |
| ba48653c         |  3045032 |
| 3D3CB829         |  1194343 |
| 3D34B92D         |  1063054 |
| 3730B029         |   974002 |
| 3730BE2C         |   779162 |
| 3730B02F         |   726741 |
| 3730BA2B         |   641778 |
| 3730BE21         |   632473 |
| 3730BC2E         |   618935 |
+------------------+----------+
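The GROUP BY above shows the biggest clusters; another quick check I find useful is the distinct-prefix ratio, i.e. how close the prefix comes to the full column's cardinality:

```sql
SELECT
  COUNT(DISTINCT LEFT(pdid, 8)) / COUNT(*) AS sel_prefix_8,
  COUNT(DISTINCT pdid)          / COUNT(*) AS sel_full
FROM forward_token;
-- If sel_prefix_8 is close to sel_full, the 8-character prefix
-- preserves almost all of the column's selectivity.
```

This gives a single number per candidate length, which is easier to compare than eyeballing top-10 samples.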
Now let's test prefix length 32 (only on the forward_token table's btree__token index for now). Although 32 looks fairly good, as demonstrated below, I think we can trim this index further and save more disk space:
select substr(token,1,32), COUNT(1) from `forward_token` GROUP BY substr(token,1,32) ORDER BY COUNT(1) DESC limit 10;
+----------------------------------+----------+
| substr(token,1,32)               | COUNT(1) |
+----------------------------------+----------+
| 3437B229ACD5F9421D5F9298AB1D85D2 |     3224 |
| 3434BA2AA8DAFF451F569390A8198A   |     1465 |
| ba486c37882617fe1bd224a8851d8b38 |     1116 |
| ba486c37882617fe1bd227a8851d8b38 |      882 |
| 3730BD28A5DAF942175F9292AB1C88   |      864 |
| ba486c37882617fe19d126b78f1f8132 |      780 |
| ba486c37882617fe1ad227a8851d8b38 |      734 |
| ba486c37882617fe1ad126b78f1f8132 |      664 |
| ba486c37882617fe1bd222a8851d8b38 |      539 |
| ba486c37882617fe18d224a8851d8b38 |      535 |
+----------------------------------+----------+
I also tested lengths between 15 and 22 (and a few others) in an effort to find the lowest prefix length that still works well.
select substr(token,1,16), COUNT(1) from `forward_token` GROUP BY substr(token,1,16) ORDER BY COUNT(1) DESC limit 10;
+--------------------+----------+
| substr(token,1,16) | COUNT(1) |
+--------------------+----------+
| ba486c37882617fe   |  6572815 |
| ba486c37882616fe   |   603821 |
| ba48653cdf6d1dfe   |   237183 |
| 3730B02EADD9F947   |   191765 |
| 3730BB2FAAD8F945   |   175009 |
| 3730BD2EACD6F944   |   161463 |
| 3730B12CA9D9F944   |   138539 |
| ba486c37882619fe   |   124238 |
| KKKKKKKKKKBOX-KK   |   119392 |
| 3D34B92DADDAF947   |   114129 |
+--------------------+----------+

select substr(token,1,17), COUNT(1) from `forward_token` GROUP BY substr(token,1,17) ORDER BY COUNT(1) DESC limit 10;
+--------------------+----------+
| substr(token,1,17) | COUNT(1) |
+--------------------+----------+
| ba486c37882617fe1  |  6572815 |
| ba486c37882616fe1  |   603821 |
| ba48653cdf6d1dfe1  |   237183 |
| 3730B02EADD9F9471  |   191765 |
| 3730BB2FAAD8F9451  |   175009 |
| 3730BD2EACD6F9441  |   161463 |
| 3730B12CA9D9F9441  |   138539 |
| ba486c37882619fe1  |   124238 |
| 3D34B92DADDAF9471  |   114129 |
| 3D34B92DAEDFFC471  |   104772 |
+--------------------+----------+

select substr(token,1,18), COUNT(1) from `forward_token` GROUP BY substr(token,1,18) ORDER BY COUNT(1) DESC limit 10;
+---------------------+----------+
| substr(token,1,18)  | COUNT(1) |
+---------------------+----------+
| ba486c37882617fe1bd |  3243025 |
| ba486c37882617fe1ad |  1813067 |
| ba486c37882617fe19d |   940530 |
| ba486c37882617fe18d |   576193 |
| ba486c37882616fe1cd |   249987 |
| ba486c37882616fe19d |   155160 |
| ba486c37882616fe1bd |   131373 |
| ba486c37882619fe19d |    99751 |
| ba48653cdf6d1cfe19d |    90121 |
| ba48653cdf6d1dfe1ad |    87289 |
+---------------------+----------+

select substr(token,1,19), COUNT(1) from `forward_token` GROUP BY substr(token,1,19) ORDER BY COUNT(1) DESC limit 10;
+---------------------+----------+
| substr(token,1,19)  | COUNT(1) |
+---------------------+----------+
| ba486c37882617fe1bd |  3243025 |
| ba486c37882617fe1ad |  1813067 |
| ba486c37882617fe19d |   940530 |
| ba486c37882617fe18d |   576193 |
| ba486c37882616fe1cd |   249987 |
| ba486c37882616fe19d |   155160 |
| ba486c37882616fe1bd |   131373 |
| ba486c37882619fe19d |    99751 |
| ba48653cdf6d1cfe19d |    90121 |
| ba48653cdf6d1dfe1ad |    87289 |
+---------------------+----------+

select substr(token,1,20), COUNT(1) from `forward_token` GROUP BY substr(token,1,20) ORDER BY COUNT(1) DESC limit 10;
+----------------------+----------+
| substr(token,1,20)   | COUNT(1) |
+----------------------+----------+
| ba486c37882617fe1bd2 |  2886726 |
| ba486c37882617fe1ad2 |  1031735 |
| ba486c37882617fe19d1 |   940528 |
| ba486c37882617fe1ad1 |   781332 |
| ba486c37882617fe18d2 |   514419 |
| ba486c37882617fe1bd1 |   356299 |
| ba486c37882616fe1cd2 |   145888 |
| ba486c37882616fe1bd1 |   131373 |
| ba486c37882616fe19d2 |   121417 |
| ba486c37882616fe1cd1 |   104099 |
+----------------------+----------+

select substr(token,1,21), COUNT(1) from `forward_token` GROUP BY substr(token,1,21) ORDER BY COUNT(1) DESC limit 10;
+-----------------------+----------+
| substr(token,1,21)    | COUNT(1) |
+-----------------------+----------+
| ba486c37882617fe1bd22 |  2886726 |
| ba486c37882617fe1ad22 |  1031735 |
| ba486c37882617fe19d12 |    58943 |
| ba486c37882617fe18d22 |   514419 |
| ba486c37882617fe1ad12 |   488427 |
| ba486c37882617fe19d15 |   334043 |
| ba486c37882617fe1ad15 |   278637 |
| ba486c37882617fe1bd12 |   223021 |
| ba486c37882616fe1cd22 |   145888 |
| ba486c37882617fe1bd15 |   126391 |
+-----------------------+----------+

select substr(token,1,22), COUNT(1) from `forward_token` GROUP BY substr(token,1,22) ORDER BY COUNT(1) DESC limit 10;
+------------------------+----------+
| substr(token,1,22)     | COUNT(1) |
+------------------------+----------+
| ba486c37882617fe1ad227 |  1031730 |
| ba486c37882617fe1bd224 |   926516 |
| ba486c37882617fe1bd227 |   813305 |
| ba486c37882617fe1bd222 |   489372 |
| ba486c37882617fe18d224 |   453973 |
| ba486c37882617fe1bd225 |   368440 |
| ba486c37882617fe1bd223 |   289093 |
| ba486c37882616fe1cd227 |   145888 |
| ba486c37882619fe19d224 |    74090 |
| ba486c37882617fe18d227 |    60446 |
+------------------------+----------+
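The candidate lengths can also be compared in a single pass with distinct-prefix ratios, which is less error-prone than scanning ten-row samples by eye:

```sql
SELECT
  COUNT(DISTINCT LEFT(token, 16)) / COUNT(*) AS sel_16,
  COUNT(DISTINCT LEFT(token, 18)) / COUNT(*) AS sel_18,
  COUNT(DISTINCT LEFT(token, 22)) / COUNT(*) AS sel_22,
  COUNT(DISTINCT LEFT(token, 32)) / COUNT(*) AS sel_32
FROM forward_token;
-- Pick the smallest length at which the ratio levels off.
```

On a table this big the query is a full scan, so run it on a replica or off-peak.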
I will go with 18. The biggest group count is 3243025 out of roughly 18102687 total records, which puts it at approximately 20% — a ratio at which, in my experience, most indexes are still effective.
To illustrate the reasoning behind 20% being a good compromise, I created the test table below and filled it with some data.
mysql> show create table product\G
*************************** 1. row ***************************
       Table: product
Create Table: CREATE TABLE `product` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(100) NOT NULL,
  `isbn` varchar(60) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `btree_name` (`name`(7))
) ENGINE=InnoDB AUTO_INCREMENT=47 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
Below is the current data in the product table I created:
select id, name from product;
+----+-------------------------------------------------------+
| id | name                                                  |
+----+-------------------------------------------------------+
|  2 | Samsung smart TV HD with Wifi 55 inches screen curve  |
|  5 | Sharp smart TV HD with Wifi 55 inches screen curve    |
|  8 | BenQ smart TV HD with Wifi 55 inches screen curve     |
| 11 | Sharp easy TV HD with Wifi 55 inches screen curve     |
| 14 | Sharp eden TV HD with Wifi 55 inches screen curve     |
| 17 | Sharp edge TV HD with Wifi 55 inches screen curve     |
+----+-------------------------------------------------------+
6 rows in set (0.00 sec)
So if we try to select the data and filter on name using "Sharp...", which matches 50% of the rows, we can see that the optimizer performs a full scan, as demonstrated below — meaning the index is not used:
explain select * from product where name='Sharp edge TV HD with Wifi 55 inches screen curve'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: product
         type: ALL
possible_keys: btree_name
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 6
        Extra: Using where
1 row in set (0.01 sec)
This is not the case for "BenQ...", which matches only about 17% of the rows (close to 20%). As demonstrated below, the index is used as expected:

explain extended select * from product where name='BenQ smart TV HD with Wifi 55 inches screen curve'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: product
         type: ref
possible_keys: btree_name
          key: btree_name
      key_len: 23
          ref: const
         rows: 1
     filtered: 100.00
        Extra: Using where
1 row in set (0.00 sec)

The same holds if we use "Samsung...", which is also about 17%, as demonstrated below:

explain extended select * from product where name='Samsung smart TV HD with Wifi 55 inches screen curve'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: product
         type: ref
possible_keys: btree_name
          key: btree_name
      key_len: 23
          ref: const
         rows: 1
     filtered: 100.00
        Extra: Using where
1 row in set (0.00 sec)
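As a sanity check, the key_len: 23 reported in the EXPLAIN output above is exactly what a 7-character prefix index on a NOT NULL utf8 varchar should cost:

```sql
-- 7 characters * 3 bytes per utf8 character + 2 length bytes = 23
SELECT 7 * 3 + 2 AS expected_key_len;  -- returns 23
```

Checking key_len this way is a quick confirmation that the optimizer is really using the prefix you defined, not some other index.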
Below is another "Sharp..." case at 50%, but with the query tweaked to filter on a substring of the same length the index was reduced to:
mysql> explain extended select substr(name,1,7), count(name) from product where substr(name,1,7)='Sharp e'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: product
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 6
     filtered: 100.00
        Extra: Using where
1 row in set (0.00 sec)
As witnessed above, a full scan is performed even though only three records are returned. The query below is the same one without EXPLAIN, but the EXPLAIN above shows what the query optimizer is really doing:
mysql> select substr(name,1,7), count(name) from product where substr(name,1,7)='Sharp e';
+------------------+-------------+
| substr(name,1,7) | count(name) |
+------------------+-------------+
| Sharp e          |           3 |
+------------------+-------------+
1 row in set (0.00 sec)
Now, let's increase the total number of records so that "Sharp..." is no longer 50% of the table but only 20%, as shown below:
select id, name from product;
+----+--------------------------------------------------------+
| id | name                                                   |
+----+--------------------------------------------------------+
|  2 | Samsung smart TV HD with Wifi 55 inches screen curve   |
|  5 | Sharp smart TV HD with Wifi 55 inches screen curve     |
|  8 | BenQ smart TV HD with Wifi 55 inches screen curve      |
| 11 | Sharp easy TV HD with Wifi 55 inches screen curve      |
| 14 | Sharp eden TV HD with Wifi 55 inches screen curve      |
| 17 | Sharp edge TV HD with Wifi 55 inches screen curve      |
| 20 | Sony edge TV HD with Wifi 55 inches screen curve       |
| 23 | Panasonic edge TV HD with Wifi 55 inches screen curve  |
| 26 | Fiji edge TV HD with Wifi 55 inches screen curve       |
| 29 | LG edge TV HD with Wifi 55 inches screen curve         |
| 32 | Lenovo edge TV HD with Wifi 55 inches screen curve     |
| 35 | Toshiba edge TV HD with Wifi 55 inches screen curve    |
| 38 | iview edge TV HD with Wifi 55 inches screen curve      |
| 41 | Reca edge TV HD with Wifi 55 inches screen curve       |
| 44 | Insimia edge TV HD with Wifi 55 inches screen curve    |
+----+--------------------------------------------------------+
15 rows in set (0.00 sec)
With "Sharp..." at 20% of total records, the results below show that the index is now used, and the query no longer performs a full scan as it did when "Sharp..." made up 50% of the table:
mysql> explain extended select * from product where name='Sharp edge TV HD with Wifi 55 inches screen curve'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: product
         type: ref
possible_keys: btree_name
          key: btree_name
      key_len: 23
          ref: const
         rows: 3
     filtered: 100.00
        Extra: Using where
1 row in set (0.00 sec)
The tests above show that for an index to be used, the rows matched by the WHERE clause should be around 20% of the table or fewer. This is not a hard limit, but it is a good rule of thumb when deciding whether or not to create an index.
That settles how to tell whether an index will be used. The other question is whether scanning the 3.2 million rows that share the biggest 18-character prefix — even through the index — is fast enough for our needs. There is a trade-off between saving disk space on indexes and giving up some performance, and it is a decision to put in front of the developers so we can reach a compromise.
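To quantify the disk side of that trade-off, per-index sizes can be read from InnoDB's persistent statistics tables (available in MySQL 5.6+ with innodb_stats_persistent enabled; stat_value is in pages):

```sql
SELECT database_name, table_name, index_name,
       ROUND(stat_value * @@innodb_page_size / 1024 / 1024) AS size_mb
FROM mysql.innodb_index_stats
WHERE stat_name = 'size'
  AND table_name IN ('forward_token', 'cr_usr');
```

Running this before and after the index resizing gives a concrete number for how much disk each prefix change actually saved.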
After discussing with my colleagues and the developers involved, we failed to agree on a good percentage compromise (developers insisted on performance, while we on the DBA team worried about disk usage). The developers finally suggested that we simply create a new index on msno, which they can use in their queries without affecting results, and switch the (msno, pdid) unique index to (pdid, msno). Queries will now filter on msno instead of pdid.
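A sketch of the forward_token side of that swap (cr_usr gets the equivalent change); as before, test first and use an online schema change tool for a table this size:

```sql
ALTER TABLE `forward_token`
  -- flip the unique index column order: (msno, pdid) -> (pdid, msno)
  DROP INDEX `btree__msno__pdid`,
  ADD UNIQUE KEY `btree__pdid_msno` (`pdid`, `msno`),
  -- new index the developers will filter on
  ADD KEY `btree__msno` (`msno`);
```

The unique constraint is preserved either way — a set of (msno, pdid) pairs is unique regardless of column order — so only query access patterns change.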
So here are the final tables after making changes as suggested:
CONCLUSION:

CREATE TABLE `cr_usr` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `msno` bigint(20) NOT NULL DEFAULT '0',
  `uid` varchar(128) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `pdid` varchar(100) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `start` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `last` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `is_valid` tinyint(4) NOT NULL DEFAULT '1',
  `m_state` int(11) NOT NULL COMMENT 'member state',
  `pcname` varchar(128) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `ip` varchar(16) COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  `apple_token` varchar(254) COLLATE utf8_unicode_ci NOT NULL,
  `gcm_registration_id` varchar(255) COLLATE utf8_unicode_ci NOT NULL DEFAULT '' COMMENT 'Google Cloud Message registration_id',
  `token_is_valid` tinyint(3) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `btree__pdid_msno` (`pdid`,`msno`),
  KEY `last` (`last`),
  KEY `btree__msno` (`msno`)
) ENGINE=InnoDB AUTO_INCREMENT=125577019 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci ROW_FORMAT=COMPRESSED

CREATE TABLE `forward_token` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `msno` bigint(20) unsigned NOT NULL,
  `pdid` varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL,
  `service` varchar(15) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `terr_id` tinyint(3) unsigned DEFAULT NULL,
  `platform` varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL,
  `token` varchar(300) COLLATE utf8mb4_unicode_ci NOT NULL,
  `created_at` int(10) unsigned NOT NULL,
  `updated_at` int(10) unsigned NOT NULL,
  `deleted_at` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `btree__pdid_msno` (`pdid`,`msno`),
  KEY `btree__token` (`token`(10)),
  KEY `btree__msno` (`msno`)
) ENGINE=InnoDB AUTO_INCREMENT=54680263 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci ROW_FORMAT=COMPRESSED
Following these changes and close collaboration with the developers, all involved parties are happy with the plan, so we can go ahead and implement it in production. This exercise also shows how, as a DBA, to collaborate with developers to support their needs while making sure database standards and procedures are followed and resources are not wasted — especially when it comes to index creation. Most of these indexes were created a long time ago, back when developers were still allowed to run DDL directly on production servers.